Discussion:
Alan Kay talk at HPI in Potsdam
Ian Piumarta
2011-07-08 23:32:21 UTC
Permalink
Title: Next steps for qualitatively improving programming

Venue: Lecture Hall 1, Hasso-Plattner-Institut Potsdam, Germany

Date and time: July 21 (Thu) 2011, 16:00-17:00

Additional information:

http://www.vpri.org/html/people/founders.htm
http://www.hpi.uni-potsdam.de/hpi/anfahrt?L=1
http://www.hpi.uni-potsdam.de/news/beitrag/computerpionier-alan-kay-wird-hpi-fellow.html

This talk will be recorded and made available online.
John Zabroski
2011-07-22 00:47:15 UTC
Permalink
Ian,

When will the recording be online?

Please let us know!

Thanks,
Z-Bo
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Alan Kay
2011-07-22 04:29:57 UTC
Permalink
To All,

This wound up being a talk to several hundred students, so most of the content
is about "ways to think about things", with just a little about scaling and
STEPS at the end.

Cheers,

Alan




Bert Freudenberg
2011-07-22 12:03:09 UTC
Permalink
Recording:

http://tele-task.de/archive/lecture/overview/5820/

- Bert -
DeNigris Sean
2011-07-22 12:57:44 UTC
Permalink
Bert, that link was to a 2 minute clip of Alan receiving an award.

Sean
Matthias Berth
2011-07-22 13:01:39 UTC
Permalink
Alan Kay: Programming and Scaling

http://tele-task.de/archive/lecture/overview/5819/
Hans-Martin Mosner
2011-07-22 20:38:26 UTC
Permalink
Thanks for making this available! With some mplayer trickery, I was even able to watch and listen to it under Linux...
Great talks like this always encourage one to think outside the box and to stop taking so many things for granted.

Cheers,
Hans-Martin
Kim Rose
2011-07-22 18:15:46 UTC
Permalink
thanks, Bert! Hope all is going well and you're having fun.
See you soon in Lancaster....
cheers,
Kim
Marcel Weiher
2011-07-24 12:39:26 UTC
Permalink
Hi Alan,

As usual, it was inspiring talking to your colleagues and hearing you speak at Potsdam. I think I finally got the Model-T image, which resonated with my fondness for Objective-C: a language that a 17-year-old with no experience with compilers or runtimes can implement, and that manages to boil down dynamic OO/messaging to a single special function, can't be all bad :-)

There was one question I had on the scaling issue that would not have fit into the Q&A: while praising the design of the Internet, you spoke less well of the World Wide Web, which surprised me a bit. Can you elaborate?

Thanks,

Marcel
Merik Voswinkel
2011-07-24 17:16:12 UTC
Permalink
Marcel,

Dr Alan Kay addressed the World Wide Web design a number of times in
Post by Marcel Weiher
The main features of the Alto were a terrific combination of speed,
parsimony, and architecture.
-- Speed came from bipolar transistors. It had a 150ns
microinstruction time.
-- Parsimony allowed these to be economic enough for a 1972 personal
computer/workstation (we eventually built almost 2000 of these).
-- Architecture allowed it to be very flexible without sacrificing
speed. To just mention one great idea: it had 16 "zero-overhead"
program counters and separate logic to decide which one would be
used for the next microinstruction -- this allowed bottom level
"virtual multicore" multitasking for system functions (running the
display, disk, handling I/O, painting the screen, emulating VHLLs,
etc. (The Lincoln Labs TX-2 on which Sketchpad was done, also had
multiple program counters, etc.)
So an Alto-2 exercise should try to think through the issues of
speed, parsimony and architecture in today's world of possibilities!
Cheers,
Alan
[1] Alan Kay, "How Complex is 'Personal Computing'? 'Normal' Considered Harmful", October 22, 2009, Computer Science Department at UIUC.
http://media.cs.uiuc.edu/seminars/StateFarm-Kay-2009-10-22b.asx
(also see http://www.smalltalk.org.br/movies/ )

[2] Alan Kay, "The Computer Revolution Hasn't Happened Yet", October
7, 1997, OOPSLA'97 Keynote.
Transcript http://blog.moryton.net/2007/12/computer-revolution-hasnt-happened-yet.html
Video http://ftp.squeak.org/Media/AlanKay/Alan%20Kay%20at%20OOPSLA%201997%20-%20The%20computer%20revolution%20hasnt%20happened%20yet.avi
(also see http://www.smalltalk.org.br/movies/ )
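The multiple-program-counter scheme Alan describes lends itself to a small sketch. The following is purely illustrative Python, not actual Alto microcode; it models each hardware task as a generator whose saved state plays the role of its "program counter", with priority logic picking which task issues the next microinstruction, so switching tasks costs no save/restore work.

```python
class MicroMachine:
    """A toy machine with one 'program counter' (generator) per task.
    Every cycle, priority logic picks the highest-priority task that still
    has work and issues exactly one microinstruction from it."""

    def __init__(self):
        self.tasks = {}  # priority -> generator ("one PC per task")

    def add_task(self, priority, task_gen):
        self.tasks[priority] = task_gen

    def run(self, cycles):
        trace = []
        for _ in range(cycles):
            # "next PC" selection: highest priority first
            for prio in sorted(self.tasks, reverse=True):
                try:
                    trace.append(next(self.tasks[prio]))
                    break
                except StopIteration:
                    del self.tasks[prio]  # task finished; drop its PC
        return trace

def task(name, steps):
    # each yield stands for one microinstruction's worth of work
    for _ in range(steps):
        yield name

m = MicroMachine()
m.add_task(2, task("display", 2))   # high priority: device service
m.add_task(0, task("emulator", 3))  # low priority: VHLL emulation
print(m.run(5))  # -> ['display', 'display', 'emulator', 'emulator', 'emulator']
```

The point of the sketch is only the selection step: because every task keeps its own counter, "switching" is just choosing which counter to use next.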



Merik Voswinkel
Alan Kay
2011-07-24 17:24:20 UTC
Permalink
Hi Marcel

I think I've already said a bit about the Web on this list -- mostly about the
complete misunderstanding of the situation the web and browser designers had.


All the systems principles needed for a good design were already extant, but I
don't think they were known to the designers, even though many of them were
embedded in the actual computers and operating systems they used.

The simplest way to see what I'm talking about is to notice the many-many things
that could be done on a personal computer/workstation that couldn't be done in
the web & browser running on the very same personal computer/workstation. There
was never any good reason for these differences.

Another way to look at this is from the point of view of "separation of
concerns". A big question in any system is "how much does 'Part A' have to know
about 'Part B' (and vice versa) in order to make things happen?" The web and
browser designs fail on this really badly, and have forced set after set of weak
conventions into larger and larger, but still weak browsers and, worse, onto
zillions of web pages on the net.


Basically, one of the main parts of good systems design is to try to find ways
to finesse safe actions without having to know much. So -- for example -- Squeak
runs everywhere because it can carry all of its own resources with it, and the
OS processes/address-spaces allow it to run safely, but do not have to know
anything about Squeak to run it. Similarly Squeak does not have to know much to
run on every machine - just how to get events, a display buffer, and to map its
file conventions onto the local ones. On a bare machine, Squeak *is* the OS,
etc. So much for old ideas from the 70s!
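The narrow host interface described above (events, a display buffer, file-convention mapping) can be sketched as follows. All the names here are invented for illustration; this is not Squeak's actual VM interface, just the shape of the idea that neither side needs to know more than this about the other.

```python
class HostPlatform:
    """Everything the portable system needs from the machine it lands on."""
    sep = "/"

    def next_event(self):
        # keyboard/mouse/timer events (none in this sketch)
        return None

    def display_buffer(self, w, h):
        # a raw framebuffer the guest can draw into
        return bytearray(w * h)

    def map_path(self, image_path):
        # map the image's file conventions onto the local ones
        return image_path.replace("/", self.sep)

class WindowsHost(HostPlatform):
    sep = "\\"

class PortableSystem:
    """Carries all of its own resources and touches the host only through
    the three calls above, so it runs unchanged wherever they exist."""

    def __init__(self, host):
        self.host = host
        self.screen = host.display_buffer(640, 480)

    def open_resource(self, path):
        return self.host.map_path(path)
```

The same `PortableSystem` image runs on either host; only `map_path` produces different local file names.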

The main idea here is that a windowing 2.5 D UI can compose views from many
sources into a "page". The sources can be opaque because they can even do their
own rendering if needed. Since the sources can run in protected address-spaces
their actions can be confined, and "we" the mini-OS running all this do not have
to know anything about them. This is how apps work on personal computers, and
there is no reason why things shouldn't work this way when the address-spaces
come from other parts of the net. There would then be no difference between
"local" and "global" apps.

Since parts of the address spaces can be externalized, indexing as rich as (and richer than) what we have now can still be done.

And so forth.

The Native Client part of Chrome finally allows what should have been done in
the first place (we are now about 20+ years after the first web proposals by
Berners-Lee). However, this approach will need to be adopted by most of the
already existing multiple browsers before it can really be used in a practical
way in the world of personal computing -- and there are signs that there is not
a lot of agreement or understanding why this would be a good thing.


The sad and odd thing is that so many people in the computer field were so
lacking in "systems consciousness" that they couldn't see this, and failed to
complain mightily as the web was being set up and a really painful genie was
being let out of the bottle.

As Kurt Vonnegut used to say "And so it goes".

Cheers,

Alan



Thiago Silva
2011-07-24 20:41:33 UTC
Permalink
Hello Dr. Alan,

Since access to the fonc list archives is restricted to members, would you allow me to publish your email below elsewhere for public access? It is the richest and most informative critique of the web I have found (along with the non-authoring nature of the browser you have mentioned before).

Cheers,
Thiago
Alan Kay
2011-07-25 07:47:59 UTC
Permalink
Hi Thiago

To me, there is not nearly enough context to publish this outside this list. I
like arguments and complaints that are well supported. I don't like the all too
general practice on the web of "mere opinions" about any and all things.

One of the most interesting aspects to me about the reactions to the web is that
the glaring mistakes in systems design from the very beginning were hardly
noticed and complained about. The mess that constitutes the current so-called
"standards" is astounding -- and worse -- is hugely inconvenient and blocks any
number of things that are part of personal computing.

When we did Squeak ca 1996 this was not such a problem because one could
generally provide executable plugins and helpers that would allow getting around
the problems of the browsers and web. This is still possible, except that more
and more since then, many SysAdmins in important destinations for software --
such as school districts and many companies -- will not allow anyone to download
an executable plugin when needed. This is largely because they fear that these
cannot be run safely by their MS OS.

This means that what can be done in the browser by combinations of the standard
tools -- especially JavaScript -- now becomes mission critical.


For example, some of our next version of Etoys for children could be done in JS,
but not all -- e.g. the Kedama massively parallel programmable particle system
made by Yoshiki cannot be implemented to run fast enough in JS. It needs
something much faster and lower level -- and this something has not existed
until the Chrome native client (and this only in Chrome which is only about 11%
penetrated).


So today there is no general solution for this intolerable situation. We've got
Microsoft unable to make a trusted OS, so the SysAdmins ban executables. And
we've got the unsophisticated browser and web folks who don't understand
operating systems at all. And this on machines whose CPUs have address space
protection built in and could easily run many such computations completely
safely! Yikes! Where are we? In some Danteish "9th Circle of Fumbling"?

Cheers,

Alan



Igor Stasenko
2011-07-25 14:03:57 UTC
Permalink
I think there is only one example of a browser plugin that has earned enough trust from sysadmins to get installed widely: Flash. If you watch its evolution over the years, you can see that it grew from a fairly simple graphics-and-animation add-on into a full-fledged ecosystem, which is something Squeak and Smalltalk had from the very beginning.

It is a real pity that we have systems which enjoy no trust from the users' side.

Interestingly, many of today's trendy and popular technologies (what we know today as the web) were invented as temporary solutions, without any systematic approach. I think I am right in saying that most of them (PHP, JavaScript, Ruby, Sendmail, etc.) are the result of arbitrary choices rather than planning and a deep study of the problem domain before doing anything. No surprise, then, that they fail to grow. And now people are trying to retrofit security, scalability, and so on into these technologies, because they have become well-established standards, even though they were never originally meant to be used in this form.

In contrast, as you mentioned, the TCP/IP protocol, the backbone of today's Internet, has a much better design. But I think this is a general problem of software evolution: no matter how hard you try, you cannot foresee all the interactions, features, and use cases for a system when you first design it. Twenty years ago systems had completely different requirements from today's, so what was good enough then is not very good now. And here is the problem: it is hard to change software radically, especially its core concepts, because everyone uses it and is used to it, and it has become a standard. So you have to maintain compatibility and invent workarounds, patches, and fixes on top of existing things, rather than radically change the landscape.
Post by Alan Kay
This means that what can be done in the browser by combinations of the
standard tools -- especially JavaScript -- now becomes mission critical.
For example, some of our next version of Etoys for children could be done in
JS, but not all -- e.g. the Kedama massively parallel programmable particle
system made by Yoshiki cannot be implemented to run fast enough in JS. It
needs something much faster and lower level -- and this something has not
existed until the Chrome native client (and this only in Chrome which is
only about 11% penetrated).
So today there is no general solution for this intolerable situation. We've
got Microsoft unable to make a trusted OS, so the SysAdmins ban executables.
And we've got the unsophisticated browser and web folks who don't understand
operating systems at all. And this on machines whose CPUs have address space
protection built in and could easily run many such computations completely
safely! Yikes! Where are we? In some Danteish "9th Circle of Fumbling"?
Cheers,
Alan
________________________________
Sent: Sun, July 24, 2011 1:41:33 PM
Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam
Hello Dr. Alan,
Since access to fonc list archives is closed to members, would you allow me to
publish your email below elsewhere for public access? It is the most rich and
informative critique I've found about the web (plus the non-authoring nature
of the browser you've mentioned before).
Cheers,
Thiago
Post by Alan Kay
Hi Marcel
I think I've already said a bit about the Web on this list -- mostly about
the complete misunderstanding of the situation the web and browser
designers had.
All the systems principles needed for a good design were already extant,
but I don't think they were known to the designers, even though many of
them were embedded in the actual computers and operating systems they
used.
The simplest way to see what I'm talking about is to notice the many-many
things that could be done on a personal computer/workstation that couldn't
be done in the web & browser running on the very same personal
computer/workstation. There was never any good reason for these
differences.
Another way to look at this is from the point of view of "separation of
concerns". A big question in any system is "how much does 'Part A' have to
know about 'Part B' (and vice versa) in order to make things happen?" The
web and browser designs fail on this really badly, and have forced set
after set of weak conventions into larger and larger, but still weak
browsers and, worse, onto zillions of web pages on the net.
Basically, one of the main parts of good systems design is to try to find
ways to finesse safe actions without having to know much. So -- for
example -- Squeak runs everywhere because it can carry all of its own
resources with it, and the OS processes/address-spaces allow it to run
safely, but do not have to know anything about Squeak to run it. Similarly
Squeak does not have to know much to run on every machine - just how to
get events, a display buffer, and to map its file conventions onto the
local ones. On a bare machine, Squeak *is* the OS, etc. So much for old
ideas from the 70s!
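Kay's point about Squeak's narrow host interface can be sketched concretely. The following is a hypothetical illustration in Python -- the names are invented for this sketch and are not Squeak's actual VM API -- showing a portable system that needs only an event source, a display buffer, and a mapping of its file conventions onto local ones.

```python
# Hypothetical sketch of the narrow host interface Kay describes:
# a portable system like Squeak needs only events, a display buffer,
# and a way to map its file conventions onto the local ones.
# These names are illustrative, not Squeak's actual VM API.
from abc import ABC, abstractmethod

class HostPlatform(ABC):
    @abstractmethod
    def next_event(self): ...

    @abstractmethod
    def display_buffer(self, width, height): ...

    @abstractmethod
    def open_file(self, portable_path, mode): ...

class PosixHost(HostPlatform):
    def next_event(self):
        # Stub: a real host would poll the local event queue here.
        return ("null-event",)

    def display_buffer(self, width, height):
        # A 32-bit RGBA framebuffer the portable system draws into.
        return bytearray(width * height * 4)

    def open_file(self, portable_path, mode):
        # Map the portable "Volume:name" convention onto a local path.
        local = portable_path.replace(":", "/")
        return (local, mode)  # stub: a real host would open(local, mode)

host = PosixHost()
print(host.open_file("Squeak:image", "rb"))  # ('Squeak/image', 'rb')
```

Porting to a new machine then means writing one small PosixHost-style adapter; nothing in the portable system above it has to change.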
The main idea here is that a windowing 2.5-D UI can compose views from many
sources into a "page". The sources can be opaque because they can even do
their own rendering if needed. Since the sources can run in protected
address-spaces their actions can be confined, and "we" the mini-OS running
all this do not have to know anything about them. This is how apps work on
personal computers, and there is no reason why things shouldn't work this
way when the address-spaces come from other parts of the net. There would
then be no difference between "local" and "global" apps.
Since parts of the address spaces can be externalized, indexing as rich as
(and richer than) what we have now can still be done.
And so forth.
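The compositing model described above can be sketched as follows. This is a hypothetical Python illustration (all names invented): a mini-"OS" composes a page from opaque sources, knowing nothing about each source beyond its placement and a render callback.

```python
# Hypothetical sketch of the "windowing 2.5-D UI" idea: a mini-OS
# composes a page from opaque sources. The compositor knows nothing
# about a source except where it goes and that it can render itself --
# the separation of concerns Kay says the web/browser design lacks.

class Source:
    """An opaque content source; it does its own rendering."""
    def __init__(self, fill):
        self.fill = fill

    def render(self, w, h):
        # The source draws itself; its internals stay invisible to the host.
        return [[self.fill] * w for _ in range(h)]

class Compositor:
    """The mini-OS: composes views without knowing what they contain."""
    def __init__(self, w, h):
        self.page = [["."] * w for _ in range(h)]

    def place(self, source, x, y, w, h):
        tile = source.render(w, h)  # ask the source; never inspect it
        for row in range(h):
            for col in range(w):
                self.page[y + row][x + col] = tile[row][col]

page = Compositor(8, 4)
page.place(Source("A"), 0, 0, 4, 2)  # could be a local app...
page.place(Source("B"), 4, 2, 4, 2)  # ...or one arriving over the net
print("\n".join("".join(r) for r in page.page))
```

A source could just as well run in a separate protected address space fetched from the net; the compositor's code would not change, which is exactly the "no difference between local and global apps" point.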
The Native Client part of Chrome finally allows what should have been done
in the first place (we are now about 20+ years after the first web
proposals by Berners-Lee).  However, this approach will need to be adopted
by most of the already existing multiple browsers before it can really be
used in a practical way in the world of personal computing -- and there
are signs that there is not a lot of agreement or understanding why this
would be a good thing.
The sad and odd thing is that so many people in the computer field were so
lacking in "systems consciousness" that they couldn't see this, and failed
to complain mightily as the web was being set up and a really painful
genie was being let out of the bottle.
As Kurt Vonnegut used to say "And so it goes".
Cheers,
Alan
________________________________
Sent: Sun, July 24, 2011 5:39:26 AM
Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam
Hi Alan,
as usual, it was inspiring talking to your colleagues and hearing you speak
at Potsdam.  I think I finally got the Model-T image, which resonated with
my fondness for Objective-C:  a language that a 17-year-old with no
experience with compilers or runtimes can implement, and that manages to
boil down dynamic OO/messaging to a single special function, can't be all
bad :-)
There was one question I had on the scaling issue that would not have
fitted in the Q&A:  while praising the design of the Internet, you spoke
less well of the World Wide Web, which surprised me a bit.  Can you
elaborate?
Thanks,
Marcel
To All,
Post by Alan Kay
This wound up being a talk to several hundred students, so most of the
content is about "ways to think about things", with just a little about
scaling and STEPS at the end.
Cheers,
Alan
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
--
Best regards,
Igor Stasenko AKA sig.
Julian Leviston
2011-07-25 14:16:38 UTC
Permalink
Post by Igor Stasenko
Interestingly, many of today's trendy and popular things (which we
know today as the web) were invented as temporary solutions without any
systematic approach.
I think I am right in saying that most of these technologies
(PHP, Javascript, Ruby, Sendmail, etc.) are the result of random
choice instead of planning and a deep study of the problem field
before doing anything.
And that's why it is no surprise that they are failing to grow.
And now people are trying to fill the gaps in those technologies with
security, scalability and so on, because they have become well-established
standards, while originally they were not meant to be used in
such a form.
Wow... really? PHP, JavaScript, Ruby and Sendmail are the result of random choice?

Javascript, PHP, Ruby and Sendmail failing to grow? Seriously? What do you mean by grow? It can't surely be popularity...

Julian.
Igor Stasenko
2011-07-25 15:33:23 UTC
Permalink
Post by Julian Leviston
Wow... really? PHP, JavaScript, Ruby and Sendmail are the result of random choice?
Random. Because how could something which was done to satisfy the needs of
the minute (I wanna script some interactions, so let's do it quick)
grow into something mature without a solid foundation?
If the conceptual flaw is there from the very beginning, how can you "fix" it?

Here is what the author says about JS:

"JS had to “look like Java” only less so, be Java’s dumb kid brother or
boy-hostage sidekick. Plus, I had to be done in ten days or something
worse than JS would have happened."
-- Brendan Eich

Apparently the missing systematic approach then strikes back, once it is
deployed, becomes popular and is used by millions...
Post by Igor Stasenko
Javascript, PHP, Ruby and Sendmail failing to grow? Seriously? What do you
mean by grow? It can't surely be popularity...
Grow not in popularity of course.
Grow in serving our needs.
Post by Igor Stasenko
Julian.
--
Best regards,
Igor Stasenko AKA sig.
Julian Leviston
2011-07-25 23:07:37 UTC
Permalink
Post by Igor Stasenko
Random. Because how something , which was done to satisfy minute needs
(i wanna script some interactions, so lets do it quick),
could grow into something mature without solid foundation?
If conceptual flaw is there from the very beginning, how you can "fix" it?
JS had to “look like Java” only less so, be Java’s dumb kid brother or
boy-hostage sidekick. Plus, I had to be done in ten days or something
worse than JS would have happened
Brendan Eich
Except that JavaScript is one of the few common, popular prototype-based object-oriented languages, which it turns out is an amazingly flexible system. I don't think this is random. Maybe rushed is what you mean here.
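For readers unfamiliar with prototype-based object systems, the mechanism Julian is praising can be sketched in a few lines. This is an illustrative Python model of the idea (JavaScript builds it in natively via the prototype chain and `Object.create`); the class and method names here are invented for the sketch.

```python
# Illustrative model of prototype-based delegation (invented names).
# An object with no slot of its own delegates lookup to its prototype,
# so behaviour is shared without classes -- roughly what JavaScript
# provides natively through the prototype chain.

class Proto:
    def __init__(self, proto=None, **slots):
        self.proto = proto       # the object we delegate lookups to
        self.slots = dict(slots)

    def get(self, name):
        obj = self
        while obj is not None:   # walk the prototype chain
            if name in obj.slots:
                return obj.slots[name]
            obj = obj.proto
        raise AttributeError(name)

    def set(self, name, value):
        self.slots[name] = value  # writes always land on the object itself

point = Proto(x=0, y=0)
p2 = Proto(proto=point)  # a "clone" that shares point's slots by delegation
p2.set("x", 5)           # override one slot locally

print(p2.get("x"), p2.get("y"))  # 5 0 -- y is found on the prototype
```

There are no classes here: any object can serve as the prototype for another, and behaviour is shared by delegation rather than instantiation, which is the flexibility being referred to.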

Apart from this, Ruby was DEFINITELY not random, or rushed. It's a delicate balance between form and pragmatic functionality. I'll grant you that the internals of the standard interpreter leave a lot to be desired, but I think this is perhaps less to do with randomness and more to do with the fact that perhaps Matz was entirely out of his depth when it came to "best of breed" for internal language structuring.

I think to say that these languages served as a temporary solution is not really very fair on most of them. PHP was basically designed to be an easy way to build dynamic web pages, and popularity drove it to where it is today.

I guess where you're coming from is you're attempting to say that none of these languages are being used for what they were originally designed for... possibly (I'd put my weight on saying hopefully) with the exception of Ruby, because Ruby was designed to be beautiful to code in, and to make programmers happy. Ruby is a general purpose language. I really don't know why you include Sendmail in this bunch.

I think you're kind of missing the point about the web not being structured properly, though... I think Alan's point is more that the fact that we had to use server-side languages, as well as client-side languages such as VBScript and JavaScript which the browser interprets, is an illustration of the fact that STRUCTURALLY, the web is fairly broken. It has nothing to do with language choice (server- or client-side), really, but rather with the fact that there is no set of conventions and no readily usable standard for programming across the web in such a way that code runs in a protected way on the machines where it needs to run.

I think as computer programmers, we get quite hung up on the specifics of languages and other potentially somewhat irrelevant details when perhaps they're not the most apposite concerns to be interested in.
Post by Igor Stasenko
Apparently a missing systematical approach then strikes back, once it
deployed, became popular and used by millions..
Post by Igor Stasenko
Javascript, PHP, Ruby and Sendmail failing to grow? Seriously? What do you
mean by grow? It can't surely be popularity...
Grow not in popularity of course.
Grow in serving our needs.
Perhaps you miss the point of why things become large and popular...? :) They're driven by people. And not just some people - by *most* people. Everybody wants to share photos and search for things on the web. Everyone wants their content, and purchases, and the things they want.

These people do not care about structural perfection in any way. They care about doing the things they want to do and naught else.

Look at Apple if you want to understand a group of people who "get" this (or maybe only Steve gets this, I don't really know, but I do know someone at Apple fully understands this, and possibly Apple *didn't* understand this when Steve wasn't there). The only way you can drive the future is if you get everyone to come along with you.

The only way you can get everyone to come along with you is to play to their understanding level. You make the general lives of everyone on the planet easier, and you will become popular.

Say, for example, making a telephone that is vastly easier to use than all other telephones on the planet. Now, for tech geeks, it's not really *that* much easier to use... For example, when the iPhone came out, I got one, and the only really useful and different thing, in terms of technical specification and features, that I could do that I previously couldn't do easily was synchronise my contacts... but everything was quite a bit EASIER to do. In the process, Apple are pushing next-gen technologies (next gen for the public is not necessarily next gen for us, mind :)). Mind you, it comes wrapped around their bank account, but it's still coming.

Look at Twitter for an example of what people like... this is a ridiculously clear example... it simply allows people to write small messages to whoever is listening. Brilliantly simple, brilliantly clear. Most people want to do this, and so it is popular. The thing with Twitter is, though, they're not using this popularity at all. They don't really know what to do with it.

Now, what we want to do is make something compelling enough that it "goes off like a rocket". Smalltalk was designed pretty amazingly well, and it had an amazingly large amount of influence, but if you ask most programmers what Smalltalk is, they usually haven't heard of it... contrast this to asking people about Java, and they know what that is. :) You even ask them what Object Oriented programming is, and they know that, but you say "Heard of Alan Kay?" and they give you a blank look. Ask them about Steve Jobs and everyone knows all about him. Hell, what other company has fanboys keeping track of their ADS? ( http://www.macrumors.com/2011/07/24/new-apple-ipad-ad-well-always/ )

What I'm trying to get at here is that I see no reason why something free can't be popular (facebook? twitter?), but for that to take place, it has to provide something that you simply can't get elsewhere. The advantage the web has had is that it has moved quite quickly and continues to move at whatever pace we like to go at. Nothing else has come along that has outpaced or out-innovated it FROM THE POINT OF VIEW OF THE AVERAGE PUNTER. So what is needed is something along the lines of Frank, which when people see what is possible (BY USING IT ONLY, I'd wager), they'll stop using everything else because they simply can't go back to "the old way" because it feels like the past too much. :)

Make something better than all the user or developer experiences out there, and developers like me will evangelise the shit out of it... and other users who care about things will jump on the bandwagon, curators of experience will jump on board, and overnight, a Windows 95-like experience will happen (in terms of market-share effect), or perhaps an iPod effect will happen. Remember, it has to be "just better" than what is possible now, so if you make something "infinitely better" but just show off how it's "just better", and also make it easy to migrate to and easier to use, then you will have already "won" as the new way of doing things before you've started.

Even Apple, our current purveyors of "fine user experience" and curators of style and design, haven't managed to build a device or software user experience that offers convention, ease of use and unclutteredness first, and yet also the total ability to configure things for people who want them to do exactly what they want (i.e. coders, programmers, and advanced users). They hit the "80/20" rule quite well, giving 80 percent of people everything they need while leaving the other 20% somewhat out in the cold.

Julian.
Igor Stasenko
2011-07-26 02:20:09 UTC
Permalink
Post by Julian Leviston
Except that JavaScript is one of the only common popular prototype based object oriented languages, which it turns out is an amazingly flexible system. I don't think this is random. Maybe rushed is what you mean here.
I would say rushed, and then it got into the wrong hands. Thankfully, things
are much better today.
Post by Julian Leviston
Apart from this, Ruby was DEFINITELY not random, or rushed. It's a delicate balance between form and pragmatic functionality. I'll grant you that the internals of the standard interpreter leave a lot to be desired, but I think this is perhaps less to do with randomness and more to do with the fact that perhaps Matz was entirely out of his depth when it came to "best of breed" for internal language structuring.
You lost me here. My attitude to Ruby is the same as to Perl: let's take a
bit from here, a bit from there, mix everything well and voila!, we
have a new programming language.
It may be good for a cooking recipe, but definitely not very good for a
programming language.
I find it strange that the evolution of many of today's mainstream languages
is driven by the same approach: mix and blend things together, rather
than focusing on completeness, conciseness and clarity.

Take the introduction of generics in Java and C#... oh yeah, what an excellent
present for C++ programmers who missed them so badly! Now they can make a mess
with their beloved templates not only in C++ but also in Java and C#.
And now the learning curve for those languages has become even steeper. But
who cares, right? New "features" will likely increase sales :)
Post by Julian Leviston
I think to say that these languages served as a temporary solution is not really very fair on most of them. PHP was basically designed to be an easy way to build dynamic web pages, and popularity drove it to where it is today.
I guess where you're coming from is you're attempting to say that none of these languages are being used for what they were originally designed for... possibly (I'd put my weight on saying hopefully) with the exception of Ruby, because Ruby was designed to be beautiful to code in, and to make programmers happy. Ruby is a general purpose language. I really don't know why you include Sendmail in this bunch.
I think you're kind of missing the point of the web not being structured properly, though... I think Alan's point is more the case that the fact that we had to use server side languages, as well as languages such as VBScript and JavaScript which the interpreter executes, is an illustration of the fact that STRUCTURALLY, the web is fairly broken. It has nothing to do with language choice (server- or client-side), really, but rather the fact that there is no set of conventions and readily usable standard for programming across the web in such a way that code is run in a protected way on machines where code needs to run.
Yes, I find it strange that on the server side we're using one language,
while on the client side another. It means that people have to learn at
least two languages to get started.
I find it really strange that JavaScript found its niche only in web
browsers, and virtually nowhere else. Is it that bad as a general-purpose
programming language?
Because given the situation, it suspiciously looks like we're
forced to use it because it is the only language supported by modern
browsers.
Post by Julian Leviston
I think as computer programmers, we get quite hung up on the specifics of languages and other potentially somewhat irrelevant details when perhaps they're not the most apposite concerns to be interested in.
Perhaps you miss the point of why things become large and popular...? :) They're driven by people. And not just some people - by *most* people. Everybody wants to share photos and search for things on the web. Everyone wants their content, and purchases, and the things they want.
These people do not care about structural perfection in any way. They care about doing the things they want to do and naught else.
Look at Apple if you want to understand a group of people who "get" this (or maybe only Steve gets this, I don't really know, but I do know someone at Apple fully understand this, and possibly Apple *didn't* understand this when Steve wasn't there). The only way you can drive the future is if you get everyone to come along with you.
The only way you can get everyone to come along with you is to play to their understanding level. You make the general lives of everyone on the planet easier, and you will become popular.
Well, I don't like it when programming is turned into pop culture.
However, it is mostly the reality today.

For the things we buy, the things we use (I mean end-user products) it is
perfectly fine: I don't care who made my microwave or how it works, as long
as it does its job well.
But for programming it's a bit different: you are giving people a tool
which they will use to craft their own products. And depending on how
good or bad this tool is, the quality of the end product will vary.

And also, it would be too good to be true: if people had to
choose between Java and Smalltalk based on "easy to use" criteria, I
doubt they would choose Java.
Marketing takes its toll, the worst kind. :)

Maybe the days when scientists invented new languages are gone, I don't know.
If you take a language and start pumping "popular" things into it
because the crowd likes them, how far can you go? Up to the point where it
would take 10 years to learn all the language's aspects?
Up to the point where you cannot state a definitive set of syntax rules
for your language (read: Ruby)?
Post by Julian Leviston
Say, for example, like making a telephone that is vastly more easy to use than all other telephones on the planet. Now, for tech geeks, it's not really *that* much easier to use... For example, when the iPhone came out, I got one, and the only really useful and different thing in terms of technical specification and features that I could do that I previously couldn't do easily was synchronise my contacts... but everything was quite a bit EASIER to do. In the process, Apple are pushing next gen technologies (next gen for the public is not necessarily next gen for us, mind :)). Mind you, it comes wrapped around their bank account, but it's still coming.
Look at Twitter for an example of what people like... this is a ridiculously clear example... it simply allows people to write small messages to whoever is listening. Brilliantly simple, brilliantly clear. Most people want to do this, and so it is popular.  The thing with twitter is, though, they're not using this popularity at all. They don't really know what to do with it.
Now, what we want to do is make something compelling enough such that it "goes off like a rocket". Smalltalk was designed pretty amazingly well, and it had an amazingly large amount of influence, but if you ask most programmers what smalltalk is, they usually haven't heard of it... contrast this to asking people about Java, and they know what that is. :) You even ask them what Object Oriented programming is, and they know that, but you say "Heard of Alan Kay?" and they give you a blank look. Ask them about Steve Jobs and everyone knows all about him. Hell, what other company has fanboys keeping track of their ADS? ( http://www.macrumors.com/2011/07/24/new-apple-ipad-ad-well-always/ )
What I'm trying to get at here, is that I see no reason why something free can't be popular (facebook? twitter?), but for that to take place, it has to provide something that you simply can't get elsewhere. The advantage the web has had is that it has moved quite quickly and continues to move at whatever pace we like to go at. Nothing else has come along that has outpaced or out innovated it FROM THE POINT OF VIEW OF THE AVERAGE PUNTER. So what is needed is something along the lines of Frank, which when people see what is possible (BY USING IT ONLY, I'd wager), they'll stop using everything else because they simply can't go back to "the old way" because it feels like the past too much. :)
Make something better than all the user or developer experiences out there, and developers like me will evangelise the shit out of it... and other users who care about things will jump on the bandwagon, curators of experience will jump on board, and overnight, a Windows 95 like experience will happen (in terms of market share effect), or perhaps an iPod effect will happen. Remember, it has to be "just better" than what is possible now, so if you make something "infinitely better" but just show off how it's "just better", and also make it easy to migrate to and easier to use, then you will have already "won" as the new way of doing things before you've started.
Even Apple, our current purveyors of "fine user experience" and curators of style and design, haven't managed to build a device or user experience in software that allows primarily convention, ease of use and unclutteredness, and yet then the total ability to configure things for people who want things to do exactly what they want them to do (ie coders, programmers, and advanced users). They hit the "80/20" rule quite well in terms of giving 80 percent of people everything they need, while leaving 20% of people sort of out in the cold.
I don't think it's good to draw an analogy between end products and tools.
The main difference between them lies in the fact that tools are made
for professionals, while end products are made for everyone.
You don't have to graduate from college to know how to use a microwave;
you just need to read a short instruction sheet.
Professionals who base their choice on popularity are bad
professionals; the good ones base their choice on the quality of the tools.
Because everyone knows that popularity has a temporary effect.
Something which is popular today will be forgotten tomorrow.

People are jumping on Apple's bandwagon... but what future is there?
None. A sealed platform, proprietary hardware, and ridiculous,
over-protective rules for entering the market.
So it is easy to predict the outcome: the days of the iWhatever are counted.
For those who think I'm soothsaying: see what happened with Sun
and what is happening with Microsoft.
If Apple keeps doing things the same way, there is no other end.

So maybe it is great that they can make a lot of money today. And
then another company will arise and start making money. And again and
again, people will jump
on the wagon once in a while. And repeat the same mistakes. But who cares,
since it brings us money, today and a little bit for tomorrow :)
--
Best regards,
Igor Stasenko AKA sig.
Julian Leviston
2011-07-26 03:30:15 UTC
Permalink
Post by Igor Stasenko
You lost me here. My attitude to Ruby is same as to Perl: lets take
bit from here, bit from there, mix well everything and voila! , we
having new programming language.
It may be good for cooking recipe, but definitely not very good for
programming language.
I find it strange that many today's mainstream languages evolution is
driven by taking same approach: mix & blend things together, rather
than focusing on completeness, conciseness and clarity.
I don't think you understand Ruby very well. Perl and Ruby are quite different.
Sure, Ruby borrowed some stuff from Perl (such as built-in regexps, etc.) but at its heart it's pure objects, and it behaves very much how you'd expect it to. It's also incredibly compact and beautiful-looking, easy to read, and nice. I fell in love with it similarly to how I fell in love with Smalltalk...

Julian.
Julian Leviston
2011-07-26 03:37:43 UTC
Permalink
Post by Igor Stasenko
But for programming its a bit different: you giving to people a tool
which they will use to craft their own products. And depending on how
good/bad this tool are, the end product's quality will vary.
And also, it would be too good to be true: if people would have to
choose between java and smalltalk based on "easy to use" criteria, i
doub't they would choose java.
Marketing takes its toll, the worse one. :)
But they *did* choose Java over Smalltalk precisely because it's easier to use.

You make the mistake of assuming "easier to use" is judged between experts, but that's not how people adopt languages.

One of the reasons Rails became an overnight success for the Ruby and web development communities is a 15-minute screencast... and a bunch of simple evangelizing the creator of Rails did... he basically showed how easy it was to create a blog, with comments, in 15 minutes... Mind you, it wasn't a particularly beautiful blog, but it functioned, nonetheless, and the kicker is...

... it was about twice as easy, twice as fast, and twice as nice to code as in any other comparable programming environment at the time.

People adopted Java because it was readily available to learn and easy to "grok" in comparison with what they knew, and because it had "spunk" in the same way that Rails did - it had an attitude, and was perceived as a funky thing. This has to do with marketing and the way our society works. Smalltalk is incredibly simple, incredibly powerful, but also INCREDIBLY unapproachable for most people not open to abstract thought.

Contrast this: it took me weeks to even vaguely understand Smalltalk when I first saw it, but it only took me days to understand Java, given that I'd programmed in BASIC and C before.

This has to do with the sub-cultural context more than anything.

Julian.
Julian Leviston
2011-07-26 03:59:34 UTC
Permalink
Post by Igor Stasenko
I don't think it's a good idea to draw an analogy between end products and tool(s).
The main difference between them lies in the fact that tools are made
for professionals, while end products are made for everyone.
You don't have to graduate from college to know how to use a microwave; you
just need to read a short instruction manual.
Professionals who base their choice on popularity are bad
professionals; the good ones base their choice on the quality of their tools.
Because everyone knows that popularity has a temporary effect.
Something which is popular today will be forgotten tomorrow.
That's just silly. Products vs Tools? A toaster is a device that I can use to toast bread. A coffee machine is a device I can use to make coffee. Professional people who create coffee or create toasted sandwiches for a living use different ones, but they're still coffee machines and toasters, and mostly they're just based around higher volume, and higher quality in terms of controls.

Popularity doesn't always have a temporary effect. Consider the iPod. It's not forgotten, is it? It's been popular for a decade.

Consider the personal computer! The laptop - this is a very popular device. It's been popular for more than two decades. I think your logic and reasoning there is fairly flawed.

10 years ago, a soldering iron was considered a tool, but these days, I can buy one for around $50 that will be of comparable quality to something that would have cost over $200 back then. Which is a tool and which is a product?

Obscurity vs popularity, along with easy to use vs hard to use should NOT be mapped on to whether or not something is useful. In other words, consider how easy it is these days to take what were considered professional quality photos 8 years ago, but with devices that cost less than $200.

Like it or not, "common" people who do not have degrees have something valuable to input. They have stories to tell and quite good ideas. Children fit into this category of "common" people, and so does anyone who would like to learn.

My imperative is that the things you learn today should be usable today, not put off until years from now. Yes, things should be considered, but this doesn't mean you can't use the knowledge you gain moment by moment. People should be able to learn how to use a tool or product, and then use it to that capacity... and then learn some more, and use that...

THAT is the real driver of education, and you have no idea where "the masses" can take things - who to you, no doubt, appear quite "stupid" because they don't have a degree or whatever. Let's not forget that some of our best minds started out without a degree... :) and some of them never got one at all.

Authorisation of knowledge does not constitute intelligence!
Post by Igor Stasenko
People are jumping on Apple's bandwagon.. but what future is there?
None. A sealed platform, proprietary hardware, ridiculous and
over-protective rules for entering the market.
So, it is easy to predict the outcome: the days of iWhatever are numbered.
For those who think that I'm soothsaying - see what happened to Sun
and what is happening to Microsoft.
If Apple keeps doing things the same way, there is no other end.
This is your take. Let's see where it goes, and for this conversation, really, who cares? I don't think Sun nor Microsoft were as popular as iWhatever, though.
Post by Igor Stasenko
So, maybe it is great that they can make a lot of money today. And
then another company will arise and start making money. And again and
again, people will jump
on the wagon once in a while. And repeat the same mistakes. But who cares,
since it brings us money, today and a little bit for tomorrow :)
I don't think it's about making money primarily. It's mostly about enabling stuff for people. I think we want to live in a world where we can get whatever we like, and get whatever we want done done, both quickly and easily.

The web is great because it lets me "just get it done", whatever it is I want to do... I can go find "that bit of a movie", or find out about "that bit of code" pretty quickly and easily.

Note that there are better ways... ways that involve less frustration... and therefore to someone "in the know" enough there are openings where they could take advantage of this, and leverage "their" solution into place. Regardless of whether they're making a profit on it or not.

I'm essentially a pragmatist in search of perfection... basically I want to get things done as quickly and efficiently as possible, barebones, and then improve from there... I'll generally choose whatever I can to do the simplest thing first to that end, and then improve from there...

Julian.
Igor Stasenko
2011-07-26 04:38:26 UTC
Permalink
(again quotes are broken)
Post by Julian Leviston
Say, for example, like making a telephone that is vastly more easy to use
than all other telephones on the planet. Now, for tech geeks, it's not
really *that* much easier to use... For example, when the iPhone came out, I
got one, and the only really useful and different thing in terms of
technical specification and features that I could do that I previously
couldn't do easily was synchronise my contacts... but everything was quite a
bit EASIER to do. In the process, Apple are pushing next gen technologies
(next gen for the public is not necessarily next gen for us, mind :)). Mind
you, it comes wrapped around their bank account, but it's still coming.
Look at Twitter for an example of what people like... this is a ridiculously
clear example... it simply allows people to write small messages to whoever
is listening. Brilliantly simple, brilliantly clear. Most people want to do
this, and so it is popular.  The thing with twitter is, though, they're not
using this popularity at all. They don't really know what to do with it.
Now, what we want to do is make something compelling enough such that it
"goes off like a rocket". Smalltalk was designed pretty amazingly well, and
it had an amazingly large amount of influence, but if you ask most
programmers what smalltalk is, they usually haven't heard of it... contrast
this to asking people about Java, and they know what that is. :) You even
ask them what Object Oriented programming is, and they know that, but you
say "Heard of Alan Kay?" and they give you a blank look. Ask them about
Steve Jobs and everyone knows all about him. Hell, what other company has
fanboys keeping track of their ADS?
(http://www.macrumors.com/2011/07/24/new-apple-ipad-ad-well-always/ )
What I'm trying to get at here, is that I see no reason why something free
can't be popular (facebook? twitter?), but for that to take place, it has to
provide something that you simply can't get elsewhere. The advantage the web
has had is that it has moved quite quickly and continues to move at whatever
pace we like to go at. Nothing else has come along that has outpaced or out
innovated it FROM THE POINT OF VIEW OF THE AVERAGE PUNTER. So what is needed
is something along the lines of Frank, which when people see what is
possible (BY USING IT ONLY, I'd wager), they'll stop using everything else
because they simply can't go back to "the old way" because it feels like the
past too much. :)
Make something better than all the user or developer experiences out there,
and developers like me will evangelise the shit out of it... and other users
who care about things will jump on the bandwagon, curators of experience
will jump on board, and overnight, a Windows 95 like experience will happen
(in terms of market share effect), or perhaps an iPod effect will happen.
Remember, it has to be "just better" than what is possible now, so if you
make something "infinitely better" but just show off how it's "just better",
and also make it easy to migrate to and easier to use, then you will have
already "won" as the new way of doing things before you've started.
Even Apple, our current purveyors of "fine user experience" and curators of
style and design, haven't managed to build a device or user experience in
software that allows primarily convention, ease of use and unclutteredness,
and yet then the total ability to configure things for people who want
things to do exactly what they want them to do (ie coders, programmers, and
advanced users). They hit the "80/20" rule quite well in terms of giving 80
percent of people everything they need, while leaving 20% of people sort of
out in the cold.
I don't think it's a good idea to draw an analogy between end products and tool(s).
The main difference between them lies in the fact that tools are made
for professionals, while end products are made for everyone.
You don't have to graduate from college to know how to use a microwave; you
just need to read a short instruction manual.
Professionals who base their choice on popularity are bad
professionals; the good ones base their choice on the quality of their tools.
Because everyone knows that popularity has a temporary effect.
Something which is popular today will be forgotten tomorrow.
That's just silly. Products vs Tools? A toaster is a device that I can use
to toast bread. A coffee machine is a device I can use to make coffee.
Professional people who create coffee or create toasted sandwiches for a
living use different ones, but they're still coffee machines and toasters,
and mostly they're just based around higher volume, and higher quality in
terms of controls.
If everything is so easy to handle, then why do we need education at all?
You don't like drawing a separation between professionals and the rest of the people?
Then propose your own.
My point was that a professional's choice is based on knowledge of the field,
while the rest of the people,
not having this knowledge, have to base their choice on something
else (advertisement, an expert's opinion, etc).
Post by Julian Leviston
Popularity doesn't always have a temporary effect. Consider the iPod. It's
not forgotten, is it? It's been popular for a decade.
Consider the personal computer! The laptop - this is a very popular device.
It's been popular for more than two decades. I think your logic
and reasoning there is fairly flawed.
Space flight has also been popular for more than two decades. Luckily, so far
nobody has claimed the name iSpace
to make money out of it :)
Your analogy doesn't hold, because not everything we're using was
invented by Apple.
Yes, they know how to wrap things in nice shiny paper.. but for me
what is inside is more interesting
than what is outside.
Post by Julian Leviston
10 years ago, a soldering iron was considered a tool, but these days, I can
buy one for around $50 that will be of comparable quality to something that
would have cost over $200 back then. Which is a tool and which is a product?
Obscurity vs popularity, along with easy to use vs hard to use should NOT be
mapped on to whether or not something is useful. In other words, consider
how easy it is these days to take what were considered professional quality
photos 8 years ago, but with devices that cost less than $200.
Like it or not, "common" people who do not have degrees have something
valuable to input. They have stories to tell and quite good ideas. Children
fit into this category of "common" people, and so does anyone who would like
to learn.
My imperative is that the things you learn today should be usable today, not
put off until years from now. Yes, things should be considered, but this
doesn't mean you can't use the knowledge you gain moment by moment. People
should be able to learn how to use a tool or product, and then use it to
that capacity... and then learn some more, and use that...
THAT is the real driver of education, and you have no idea where "the
masses" can take things - who to you, no doubt, appear quite "stupid"
because they don't have a degree or whatever. Let's not forget that some of
our best minds started out without a degree... :) and some of them never
got one at all.
Like me. I don't have a degree. I am self-educated.
And what I learned is that you need knowledge to make the right choice!
My credo is to question everything and not follow assumptions.
Because if your choice is to "follow the crowd" without any clue
where it goes and why it goes there,
then smarter people will abuse you sooner or later.
This is what happens every time, if you look at history, and Apple is
far from being an exception. :)

And of course, there's nothing wrong with following everyone, as long as
there is no risk of being abused.
Post by Julian Leviston
Authorisation of knowledge does not constitute intelligence!
Indeed.
Post by Julian Leviston
People are jumping on Apple's bandwagon.. but what future is there?
None. A sealed platform, proprietary hardware, ridiculous and
over-protective rules for entering the market.
So, it is easy to predict the outcome: the days of iWhatever are numbered.
For those who think that I'm soothsaying - see what happened to Sun
and what is happening to Microsoft.
If Apple keeps doing things the same way, there is no other end.
This is your take. Let's see where it goes, and for this conversation,
really, who cares? I don't think Sun nor Microsoft were as popular as
iWhatever, though.
So, maybe it is great that they can make a lot of money today. And
then another company will arise and start making money. And again and
again, people will jump
on the wagon once in a while. And repeat the same mistakes. But who cares,
since it brings us money, today and a little bit for tomorrow :)
I don't think it's about making money primarily. It's mostly about enabling
stuff for people. I think we want to live in a world where we can get
whatever we like, and get whatever we want done done, both quickly and
easily.
The web is great because it lets me "just get it done", whatever it is I
want to do... I can go find "that bit of a movie", or find out about "that
bit of code" pretty quickly and easily.
Note that there are better ways... ways that involve less frustration... and
therefore to someone "in the know" enough there are openings where they
could take advantage of this, and leverage "their" solution into place.
Regardless of whether they're making a profit on it or not.
I'm essentially a pragmatist in search of perfection... basically I want to
get things done as quickly and efficiently as possible, barebones, and then
improve from there... I'll generally choose whatever I can to do the
simplest thing first to that end, and then improve from there...
Julian.
I do the same. But I'm a little poisoned by perfection :)
--
Best regards,
Igor Stasenko AKA sig.
Alan Kay
2011-07-26 05:47:23 UTC
Permalink
The argument about "mass popularity" is good if all you want to do is triumph in
the consumer products business (c.f. many previous raps I've done about the
anthropological "human universals" and how and why technological amplifiers for
them have been and will be very popular).

This is because marketeers are generally interested in what people *want* and
desire to supply those wants and to get those wants to intersect with products.

Educators are more interested in what people *need*, and many of these *needs*
in my opinion are coextensive with human "non-universals" -- inventions (not
genetic built-ins) such as reading and writing, modern scientific thinking and
mathematics, deep historical perspectives, the law, equal rights, and many other
rather difficult to learn and difficult to invent ideas.

One of the most important points here is that becoming fluently skilled in a
hard to learn area produces an odd -- but I think better -- kind of human ...
one who has not just the inborn drives -- for example, revenge and vendetta are
human universals -- but also has an overlay of other kinds of thinking that can
in many cases moderate and sometimes head off impulses that might have been
workable 200,000 years ago but are not good actions now.

As far as can be ascertained, humans had been on the planet for almost 200,000
years before any of these were invented, and modern science was created only
about 400 years ago. We are still trying to invent and teach and learn human
rights. These are not only not obvious to our genetic brains, they are virtually
invisible!

A mass market place will have to be above high thresholds in knowledge before it
can make good choices about these.

Societies have always had to decide how to educate children into adults (though
most have not been self-conscious about this).

If ways could be found to make the learning of the really important stuff
"popular" and "wanted", then things are easier and simpler.


But the dilemma is: what happens if this is the route and the children and
adults reject it for the much more alluring human universals? Even if almost
none of them lead to a stable, thriving, growth inducing and prosperous
civilization?

These are the issues I care about.

If we look in the small at computing, and open it to a popular culture, we will
get a few good things (as we do in pop music), but most of what is rich in most
invented and developed areas will be not even seen, will not be learned, and a
few things will be re-invented in much worse forms ("reinventing the flat
tire").

This is partly because knowledge is generally more powerful than cleverness, and
point of view is more powerful than either.

I think education at the highest possible levels has always been the main issues
for human beings, especially after the difficult to learn powerful inventions
started to appear.

For example, what was most important about writing was not that it could take
down oral discourse, but that it led to new ways of thinking, arguing and
discourse, and was one of the main underpinnings of many other inventions.
Similarly, what is important about computing is not that it can "take down" old
media, useful as that is, or provide other conveniences through simple
scripting, but that it constitutes a new and much more powerful way to think
about, embody, argue and invent powerful ideas that can help us gain perspective
on the dilemmas created by "being humans who are amplified by technologies". If
the legacy of the last several centuries is to "automate the Pleistocene" via
catering to and supplying great power to human universals, then monumental
disaster is not far off. As H.G. Wells pointed out "We are in a race between
Education and Catastrophe". It is hard to see that real education is ahead at
this point.

One of the great dilemmas of "equal rights" and other equalities is how to deal
with the "Tyranny of the Commons". The "American Plan" was to raise the commons
to be able to participate in the same levels of "conversations" as the best
thinkers. I think this is far from the situation at the current time.

Much of this is quite invisible to any culture that is "trying to get by" and
lacks systems and historical consciousness.

The trivial take on computing today by both the consumers and most of the
"professionals" would just be another "pop music" to wince at most of the time,
if it weren't so important for how future thinking should be done.

Best wishes,

Alan
Casey Ransberger
2011-07-26 23:01:23 UTC
Permalink
I want to try using a fluffy pop song to sell a protest album... it worked for others before me:) If you're really lucky, some people will accidentally listen to your other songs.

(metaphorically speaking)

"A spoonful of sugar"
--
Casey
The trivial take on computing today by both the consumers and most of the "professionals" would just be another "pop music" to wince at most of the time, if it weren't so important for how future thinking should be done.
Jecel Assumpcao Jr.
2011-07-26 23:17:32 UTC
Permalink
Post by Casey Ransberger
I want to try using a fluffy pop song to sell a protest album... it worked for
others before me:) If you're really lucky, some people will accidentally listen
to your other songs.
(metaphorically speaking)
"A spoonful of sugar"
http://netjam.org/spoon/

http://www.sugarlabs.org/

Sounds like a plan!
-- Jecel
Julian Leviston
2011-07-26 07:19:40 UTC
Permalink
But the dilemma is: what happens if this is the route and the children and adults reject it for the much more alluring human universals? Even if almost none of them lead to a stable, thriving, growth inducing and prosperous civilization?
These are the issues I care about.
You seem to be seeing these two as orthogonal. I see them as mutually complementing. (ie we drive people to what they need through what they want... )

Julian.
Julian Leviston
2011-07-25 14:26:12 UTC
Permalink
Post by Igor Stasenko
In contrast, as you mentioned, the TCP/IP protocol, which is the backbone of
today's internet, has a much better design.
But I think this is a general problem of software evolution. No matter
how hard you try, you cannot foresee all kinds of interactions,
features and use cases for your system when you are designing it at the
beginning.
Because 20 years ago systems had completely different requirements
compared to today's ones. So, what was good enough 20 years ago
is not very good today.
That makes no sense to me at all. How were the requirements radically different?

I still use my computer to play games, communicate with friends and family, solve problems, author text, make music and write programs. That's what I did with my computer twenty years ago. My requirements are the same. Of course, the sophistication and capacity of the programs has grown considerably... so has the hardware... but the actual requirements haven't changed much at all.
Post by Igor Stasenko
And here is the problem: it is hard to radically change software,
especially core concepts, because everyone using it has gotten used to it,
because it has become a standard.
So you have to maintain compatibility and invent workarounds, patches
and fixes on top of existing things, rather than radically change the
landscape.
I disagree with this entirely. Apple manage to change software radically... by tying it with hardware upgrades (speed/capacity in hardware) and other things people want (new features, ease of use). Connect something people want with shifts in software architecture, or make the shift painless and give some kind of advantage and people will upgrade, so long as the upgrade doesn't somehow detract from the original, that is. Of course, if you don't align something people want with software, people won't generally upgrade.
Igor Stasenko
2011-07-25 15:43:38 UTC
Permalink
(quotes are broken)
Post by Igor Stasenko
In contrast, as you mentioned, the TCP/IP protocol, which is the backbone of
today's internet, has a much better design.
But I think this is a general problem of software evolution. No matter
how hard you try, you cannot foresee all kinds of interactions,
features and use cases for your system when you are designing it at the
beginning.
Because 20 years ago systems had completely different requirements
compared to today's ones. So, what was good enough 20 years ago
is not very good today.
That makes no sense to me at all. How were the requirements radically different?
I still use my computer to play games, communicate with friends and family,
solve problems, author text, make music and write programs. That's what I
did with my computer twenty years ago. My requirements are the same. Of
course, the sophistication and capacity of the programs has grown
considerably... so has the hardware... but the actual requirements haven't
changed much at all.
If the capacity of programs has grown, then there was a reason for it
(read: requirements)?
Because if you are stating that you have the same requirements as 20 years
ago, then why aren't you using those old systems
instead of today's ones?

Speaking of requirements, today's browser (Firefox) running on my
machine takes more than 500MB of system memory.
I have no idea why it consumes that much.. the fact is that you
cannot run it on any 20-year-old personal computer.
Post by Igor Stasenko
And here is the problem: it is hard to radically change software,
especially core concepts, because everyone using it has gotten used to it,
because it has become a standard.
So you have to maintain compatibility and invent workarounds, patches
and fixes on top of existing things, rather than radically change the
landscape.
I disagree with this entirely. Apple manage to change software radically...
by tying it with hardware upgrades (speed/capacity in hardware) and other
things people want (new features, ease of use). Connect something people
want  with shifts in software architecture, or make the shift painless and
give some kind of advantage and people will upgrade, so long as the upgrade
doesn't somehow detract from the original, that is. Of course, if you don't
align something people want with software, people won't generally upgrade.
Apple can do whatever they want with their own proprietary hardware
and software, as long as it's their own.
Now try to repeat the same in the context of the Web.
Even if Apple rewrites Safari 5 times per year, they will
still have to support HTTP, HTML, JavaScript etc.
So, you miss my point.
--
Best regards,
Igor Stasenko AKA sig.
Julian Leviston
2011-07-25 23:40:11 UTC
Permalink
Post by Igor Stasenko
(quotes are broken)
Post by Igor Stasenko
In contrast, as you mentioned, the TCP/IP protocol, which is the backbone of
today's internet, has a much better design.
But I think this is a general problem of software evolution. No matter
how hard you try, you cannot foresee all kinds of interactions,
features and use cases for your system when you are designing it at the
beginning.
Because 20 years ago systems had completely different requirements
compared to today's ones. So, what was good enough 20 years ago
is not very good today.
That makes no sense to me at all. How were the requirements radically different?
I still use my computer to play games, communicate with friends and family,
solve problems, author text, make music and write programs. That's what I
did with my computer twenty years ago. My requirements are the same. Of
course, the sophistication and capacity of the programs has grown
considerably... so has the hardware... but the actual requirements haven't
changed much at all.
If the capacity of programs has grown, then there was a reason for it
(read: requirements)?
Because if you are stating that you have the same requirements as 20 years
ago, then why aren't you using those old systems
instead of today's ones?
Well, Igor, if something more efficient comes along, I will use it, and it will *probably* work just fine on 20-year-old hardware... because *my* requirements haven't changed much. I will grant you that it's probably going to be quite hard to get a Commodore 64 connected to a router, because it's not very compatible, but what I'm trying to say here is that most of the "requirements" you're talking about are actually self-imposed by our computing system. Having something that can do 2.5 million instructions per second is ludicrous if all I want to do is type my document, isn't it? Surely any machine should be able to handle typing a document. ;-) (Note here, I'm obviously ignoring the fact that nowadays, we have unicode).

What I'm getting at is *MY* requirements haven't changed much. I still want to send a communication to my mother every now and then, and I still want to play games. In fact, some of my favourite games, I actually use emulators to play... emulators that run 20 year old hardware emulation so I can play the games which will not run on today's machines ;-)

One of my favourite games is Tetris Attack, which me and my friend play on his XBOX (original, not 360) in a Super Nintendo Emulator...

Do you find that amusing? I sure as hell do. :)

But I digress - my intentions are relatively similar that they were 20 years ago... I like to write programs, and I like to use programs to draw, and I like to listen to music, solve problems, create texts, make music... etc. The IMPLEMENTATIONS of how I went about this are vastly different, and so if you like you can bend "requirements" to a systems-view of requirements... and then I will agree with you... my requirements that I have today of my computer in terms of TECHNICAL requirements are vastly different, but in terms of interpersonal requirements, they're not at all different - maybe slightly...

Making music satisfies a creative impulse in me, and I can make it using my $10,000 computer system that I have today, or I can satisfy it using a synthesizer from the 80's. One of them does a vastly better job for me, but this is a qualitative issue, not a requirements issue ;-)
Post by Igor Stasenko
Speaking of requirements, today's browser (Firefox) running on my
machine takes more than 500MB of system memory.
I have no idea why it consumes that much.. the fact is that you
cannot run it on any 20-year-old personal computer.
Well this is the point of the STEPS project and the like - get rid of the cruft, and we will have an optimized system that will run like lightning on our current day processors with all their amazing amount of memory.
Post by Igor Stasenko
Post by Igor Stasenko
And here is the problem: it is hard to radically change software,
especially core concepts, because everyone using it has gotten used to it,
because it has become a standard.
So you have to maintain compatibility and invent workarounds, patches
and fixes on top of existing things, rather than radically change the
landscape.
I disagree with this entirely. Apple manage to change software radically...
by tying it with hardware upgrades (speed/capacity in hardware) and other
things people want (new features, ease of use). Connect something people
want with shifts in software architecture, or make the shift painless and
give some kind of advantage and people will upgrade, so long as the upgrade
doesn't somehow detract from the original, that is. Of course, if you don't
align something people want with software, people won't generally upgrade.
Apple can do whatever they want with their own proprietary hardware
and software, as long as it's their own.
Now try to repeat the same in the context of the Web.
Even if Apple rewrites Safari 5 times per year, they will
still have to support HTTP, HTML, JavaScript etc.
So, you miss my point.
Yes, Apple can, and to a large degree, ARE doing this. Their iOS platform is their best attempt yet at building an infrastructure of code that runs across the internet but isn't the web, doesn't rely on the web, and yet uses the internet for its communications mechanism (ie not necessarily the web).

I'm not really missing your point. ;-) I turned Adobe Flash off on my main browser a while back, and that's been an interesting experience... seeing how lots of people have put all their "data" into that technology (for example, ordering a pizza with pizza hut is impossible without flash in Australia) ;-) I can do it with an iPhone, though ;-) And yeah, I'm aware they both use HTTP.

I guess my question is... what's stopping an alternative, replacement, backwardly-compatible protocol from taking over where http and https leave off? And what would that protocol do? One of the issues is surely the way our router-system structure is in place... if there was going to be a replacement for the web, it would *have* to end up being properly web based (down to the packet level), surely... because I simply hate the fact that if three people in my house request the front page of the financial times, our computers all have to go get it separately. Why don't the other two get it off the first one, or at the very least, off the router?
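The "fetch it once for the whole house" idea above can be sketched in a few lines of Python. This is purely an illustration: the class, the fetch function and the page contents are all invented here, and a real shared cache (on a router or anywhere else) would also have to honour HTTP Cache-Control headers, expiry and privacy.

```python
# Sketch of a shared cache in front of an origin server: the first
# requester pays the network cost, everyone after shares the stored copy.
# All names here are invented for illustration.

class SharedCache:
    """Serve repeated requests for the same URL from one stored copy."""
    def __init__(self, fetch):
        self._fetch = fetch      # function that actually contacts the origin
        self._store = {}         # url -> cached response body

    def get(self, url):
        if url not in self._store:
            self._store[url] = self._fetch(url)  # first requester fetches
        return self._store[url]                  # later requesters share it

# Simulate three people in the house requesting the same front page:
origin_hits = []

def fetch(url):
    origin_hits.append(url)                      # count trips to the origin
    return b"<html>front page</html>"

cache = SharedCache(fetch)
pages = [cache.get("http://example.com/frontpage") for _ in range(3)]
# All three get the page, but the origin was contacted only once.
```

The design choice being illustrated is exactly the three-people-one-router scenario: repeated identical requests collapse into a single trip to the origin.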

I really think it's necessary to have a disconnect between intention and implementation. That would let us be clear about best practices for implementation in connection with a particular intention. As computer programmers, we rarely focus on the separation between the two, but they're intimately related and yet also quite definitely separate. Separating them allows us to allow for differences and allows us to be accepting of varying methods.

For example... my intention perhaps is to make coffee... I have my way of making it with my espresso machine that I really love. I have a house mate who loves his coffee a certain way. I don't like it when it's made like that - that's his implementation of his intention to enjoy a cup of coffee, but I know how to make it so he loves it, actually possibly better than HE can make it (wow that's an interesting thing isn't it?) because I understand his intention (he loves his coffee with just this certain balance of coffee, sugar, water and milk) and I know this from making him many cups of coffee... but we both have different implementations of making him a cup of coffee. I use an espresso machine, he uses a filter machine. If he asked me to make it using his filter machine, I could easily do that... and yet still have the same intention - to make him a cup of excellent coffee... :)

Do you see? Now, we don't have this in computing. We need it desperately, because computers can actually mostly handle implementations quite well (see LLVM) so long as they have their intentions carefully communicated to them.

We don't even have languages of intention - just languages of implementation. We're left to "abstract out" the intention from reading the implementation.

WHAT A FUCKING JOKE. (Please excuse the swearing - it's there for extreme emphasis, not rudeness),

Here's another interesting thing... If I go to a hotdog vendor, this is one method of satisfying my hunger... I can call this an implementation of satisfying my hunger. Now, I can also have different implementations of going to a hotdog vendor... (do you see where I'm going with this?) Intention and implementation are recursive, overlapping enclosing concerns.

If my intention is to satisfy my hunger and nothing more, then a hotdog vendor would do just as well as a five star restaurant. Perhaps even more so, because I don't have to follow any protocols on HOW I eat... if my intention is to satisfy my hunger AND to give my body a wonderful set of nutrients, then perhaps I can't go to the hotdog vendor anymore, but a different set of options (implementations) arises...

I "throw" this stuff out there, but this is the most important thought work I've done in my entire life. I think it's vastly important, and most humans ignore it entirely.

Julian.
Igor Stasenko
2011-07-26 02:50:48 UTC
Permalink
Post by Julian Leviston
Post by Igor Stasenko
(quotes are broken)
Post by Igor Stasenko
In contrast, as you mentioned, the TCP/IP protocol, which is the backbone
of today's internet, has a much better design.
But I think this is a general problem of software evolution. No matter
how hard you try, you cannot foresee all the kinds of interactions,
features, and use cases for your system when you are designing it from the
beginning.
Twenty years ago, systems had completely different requirements
compared to today's. So what was good enough 20 years ago
is not very good today.
That makes no sense to me at all. How were the requirements radically different?
I still use my computer to play games, communicate with friends and family,
solve problems, author text, make music and write programs. That's what I
did with my computer twenty years ago. My requirements are the same. Of
course, the sophistication and capacity of the programs has grown
considerably... so has the hardware... but the actual requirements haven't
changed much at all.
If the capacity of programs has grown, then there was a reason for it
(read: requirements)?
Because if you are stating that you have the same requirements as 20 years
ago, then why aren't you using those old systems
instead of today's ones?
Well, Igor, if something more efficient comes along, I will use it, and it will *probably* work just fine on 20 year old hardware... because *my* requirements haven't changed much. I will grant you that it's probably going to be quite hard to get a Commodore 64 connected to a router, because it's not very compatible, but what I'm trying to say here is that most of the "requirements" you're talking about are actually self-imposed by our computing system. Having something that can do 2.5 million instructions per second is ludicrous if all I want to do is type my document, isn't it? Surely any machine should be able to handle typing a document. ;-) (Note here, I'm obviously ignoring the fact that nowadays, we have Unicode.)
What I'm getting at is *MY* requirements haven't changed much. I still want to send a communication to my mother every now and then, and I still want to play games. In fact, some of my favourite games, I actually use emulators to play... emulators that run 20 year old hardware emulation so I can play the games which will not run on today's machines ;-)
If you asked me, I'd prefer to use 10-year-old word processors
for authoring documents, simply because they run faster and do
things quite well for my needs. Apart from a revamped
interface (which you need to (re)learn again)
and enormous memory requirements, new versions of them offer
little in addition to what they already had in older versions.
Post by Julian Leviston
One of my favourite games is Tetris Attack, which my friend and I play on his XBOX (original, not 360) in a Super Nintendo Emulator...
Do you find that amusing? I sure as hell do. :)
You're not alone.
Post by Julian Leviston
But I digress - my intentions are relatively similar that they were 20 years ago... I like to write programs, and I like to use programs to draw, and I like to listen to music, solve problems, create texts, make music... etc. The IMPLEMENTATIONS of how I went about this are vastly different, and so if you like you can bend "requirements" to a systems-view of requirements... and then I will agree with you... my requirements that I have today of my computer in terms of TECHNICAL requirements are vastly different, but in terms of interpersonal requirements, they're not at all different - maybe slightly...
Making music satisfies a creative impulse in me, and I can make it using my $10,000 computer system that I have today, or I can satisfy it using a synthesizer from the 80's. One of them does a vastly better job for me, but this is a qualitative issue, not a requirements issue ;-)
That's the main problem with our "progress" in the computing field: it
is mostly quantitative, not qualitative. How many GHz, how
much memory, how many giga-texels per second, etc.
Post by Julian Leviston
Post by Igor Stasenko
Speaking of requirements, today's browser (Firefox) running on my
machine takes more than 500 MB of system memory.
I have no idea why it consumes that much... the fact is that you
cannot run it on any 20-year-old personal computer.
Well this is the point of the STEPS project and the like - get rid of the cruft, and we will have an optimized system that will run like lightning on our current day processors with all their amazing amount of memory.
Post by Igor Stasenko
Post by Igor Stasenko
And here is the problem: it is hard to radically change software,
especially core concepts, because everyone uses it, gets used to it,
and it becomes a standard.
So you have to maintain compatibility and invent workarounds, patches,
and fixes on top of existing things, rather than radically change the
landscape.
I disagree with this entirely. Apple manage to change software radically...
by tying it with hardware upgrades (speed/capacity in hardware) and other
things people want (new features, ease of use). Connect something people
want  with shifts in software architecture, or make the shift painless and
give some kind of advantage and people will upgrade, so long as the upgrade
doesn't somehow detract from the original, that is. Of course, if you don't
align something people want with software, people won't generally upgrade.
Apple can do whatever they want with their own proprietary hardware
and software, as long as it's their own.
Now try to repeat the same in the context of the Web.
Even if Apple rewrote Safari five times per year, they would
still have to support HTTP, HTML, JavaScript, etc.
So, you miss my point.
Yes, Apple can, and to a large degree, ARE doing this. Their iOS platform is their best attempt yet at building an infrastructure of code that runs across the internet but isn't the web, doesn't rely on the web, and yet uses the internet for its communications mechanism (ie not necessarily the web).
I'm not really missing your point. ;-) I turned Adobe Flash off on my main browser a while back, and that's been an interesting experience... seeing how lots of people have put all their "data" into that technology (for example, ordering a pizza with pizza hut is impossible without flash in Australia) ;-) I can do it with an iPhone, though ;-) And yeah, I'm aware they both use HTTP.
Now imagine you lack not only Flash but are also banned from using the
web: there is no browser on iWhatever, because the web has been declared "evil".
Would you still buy their product(s)? :)
So, it is really not in Apple's power to change that. Otherwise
they would do it as easily as banning Flash.
Post by Julian Leviston
I guess my question is... what's stopping an alternative, replacement, backwardly-compatible protocol from taking over where http and https leave off? And what would that protocol do? One of the issues is surely the way our router-system structure is in place... if there was going to be a replacement for the web, it would *have* to end up being properly web based (down to the packet level), surely... because I simply hate the fact that if three people in my house request the front page of the financial times, our computers all have to go get it separately. Why don't the other two get it off the first one, or at the very least, off the router?
I really think it's necessary to have a disconnect between intention and implementation. That would let us be clear about best practices for implementation in connection with a particular intention. As computer programmers, we rarely focus on the separation between the two, but they're intimately related and yet also quite definitely separate. Separating them allows us to allow for differences and allows us to be accepting of varying methods.
For example... my intention perhaps is to make coffee... I have my way of making it with my espresso machine that I really love. I have a house mate who loves his coffee a certain way. I don't like it when it's made like that - that's his implementation of his intention to enjoy a cup of coffee, but I know how to make it so he loves it, actually possibly better than HE can make it (wow that's an interesting thing isn't it?) because I understand his intention (he loves his coffee with just this certain balance of coffee, sugar, water and milk) and I know this from making him many cups of coffee... but we both have different implementations of making him a cup of coffee. I use an espresso machine, he uses a filter machine. If he asked me to make it using his filter machine, I could easily do that... and yet still have the same intention - to make him a cup of excellent coffee... :)
Yes. Really, nobody cares whether you do things well.
But that is temporary, and only if you limit the number of criteria for
comparing two different approaches. As life goes on, tomorrow you
might find that your way of making coffee consumes a lot more energy
than the other one.
And since we want to conserve energy (to delay a global oil
crisis), the other approach will be preferable.
Post by Julian Leviston
Do you see? Now, we don't have this in computing. We need it desperately, because computers can actually mostly handle implementations quite well (see LLVM) so long as they have their intentions carefully communicated to them.
Same for computing, I think: at some point you don't care how well
things are implemented, as long as they serve your needs. But in the longer
term, as we demand higher and higher quality standards from the
products we use,
one of them will win, unless you change the old one to meet the same standards.
Post by Julian Leviston
We don't even have languages of intention - just languages of implementation. We're left to "abstract out" the intention from reading the implementation.
WHAT A FUCKING JOKE. (Please excuse the swearing - it's there for extreme emphasis, not rudeness),
Here's another interesting thing... If I go to a hotdog vendor, this is one method of satisfying my hunger... I can call this an implementation of satisfying my hunger. Now, I can also have different implementations of going to a hotdog vendor... (do you see where I'm going with this?) Intention and implementation are recursive, overlapping enclosing concerns.
If my intention is to satisfy my hunger and nothing more, then a hotdog vendor would do just as well as a five star restaurant. Perhaps even more so, because I don't have to follow any protocols on HOW I eat... if my intention is to satisfy my hunger AND to give my body a wonderful set of nutrients, then perhaps I can't go to the hotdog vendor anymore, but a different set of options (implementations) arises...
I "throw" this stuff out there, but this is the most important thought work I've done in my entire life. I think it's vastly important, and most humans ignore it entirely.
Thanks for the philosophical departure in this discussion :)
Post by Julian Leviston
Julian.
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
--
Best regards,
Igor Stasenko AKA sig.
David Barbour
2011-07-26 04:57:26 UTC
Permalink
Post by Julian Leviston
I guess my question is... what's stopping an alternative, replacement,
backwardly-compatible protocol from taking over where http and https

leave off?


HTTP and HTTPS are not very good protocols if your goals relate to
low latency, security, and composition.



And what would that protocol do?


Here's what the protocol I'm working on would do:

* chord/pastry/tapestry distributed replacement for DNS (free us from
ICANN; easier configuration)
* identifiers for hosts = secure hash of RSA public key (easy validation,
ortho. to trust)
* logical connections (easier composition, independent disruption, potential
'restore' and use cache)
* logical objects - flat, usually opaque object identifier (favor object
capability security idioms)
* extensible protocol (just add objects); supports new overlays and network
abstractions.
* efficient orchestration; forward responses multiple steps without
centralized routing
* wait-free idioms, i.e. 'install' a new object then start using it - new
object references created locally
* reactive behaviors: focus on models involving continuous queries or
control.
* batching semantics - send multiple updates then 'apply' all at once.
* temporal semantics - send updates that apply in future.
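The second bullet (host identifiers as a secure hash of the public key) is easy to illustrate. Below is a minimal sketch, not David's actual implementation; `host_id` is a hypothetical name, and SHA-256 is assumed as the hash:

```python
import hashlib

def host_id(public_key_bytes: bytes) -> str:
    """Derive a host identifier as the SHA-256 hash of the host's
    public key, hex-encoded. Validation is then trivial: any peer
    holding the key bytes can recompute the ID and compare."""
    return hashlib.sha256(public_key_bytes).hexdigest()

# Stand-in for DER/PEM-encoded RSA public key bytes (illustrative only).
key = b"-----BEGIN PUBLIC KEY----- ...example bytes... -----END PUBLIC KEY-----"
hid = host_id(key)

# The ID is self-certifying: it changes whenever the key changes,
# so naming is decoupled ("orthogonal") from trusting the key holder.
assert hid == host_id(key)
assert hid != host_id(key + b"tampered")
```

The point of such self-certifying identifiers is that no central authority (ICANN, a CA) is needed to bind a name to a key.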



One of the issues is surely the way our router-system structure is in
Post by Julian Leviston
place... if there was going to be a replacement for the web, it would *have*
to end up being properly web based (down to the packet level)
I think if we replaced the protocols for the web, we'd still call it 'the
web', and it would therefore still be 'web-based'. ;-)

But we don't need to follow the same protocols we currently do.
Post by Julian Leviston
I simply hate the fact that if three people in my house request the front
page of the financial times, our computers all have to go get it separately.
Why don't the other two get it off the first one, or at the very least, off
the router?
You're rather stingy with bandwidth. Maybe you should try a Squid
server. ;-)
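For the household-cache scenario above, a shared Squid proxy on the router (or any local machine) already does this for cacheable pages. A minimal `squid.conf` sketch, with illustrative values:

```
# Listen for HTTP requests from the LAN on the standard Squid port.
http_port 3128

# Keep up to 1000 MB of cached objects on disk.
cache_dir ufs /var/spool/squid 1000 16 256

# Allow only machines on the local network to use the cache.
acl localnet src 192.168.0.0/16
http_access allow localnet
http_access deny all
```

With the three browsers pointed at the proxy, the second and third requests for the same front page can be served from the local cache instead of refetched.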

Support for ad-hoc content distribution networks is designed into my
reactive demand programming model.
Post by Julian Leviston
We don't even have languages of intention - just languages of
implementation. We're left to "abstract out" the intention from reading the
implementation.
Have you ever used an executable specification language (such as Maude or
Coq)?

Regards,

Dave
Yoshiki Ohshima
2011-07-25 15:40:09 UTC
Permalink
Well said, Igor!

At Mon, 25 Jul 2011 16:03:57 +0200,
Post by Igor Stasenko
It is really a pity that we have systems which have no trust from the users' side.
Following that logic, maybe making a provably secure system is not
enough; it still takes some luck?

-- Yoshiki
Igor Stasenko
2011-07-25 16:13:09 UTC
Permalink
 Well said, Igor!
At Mon, 25 Jul 2011 16:03:57 +0200,
Post by Igor Stasenko
It is really a pity that we have systems which have no trust from the users' side.
Following that logic, maybe making a provably secure system is not
enough; it still takes some luck?
As Alan mentioned, there was at least one great example of a system
with this potential: the B5000
(http://en.wikipedia.org/wiki/Burroughs_large_systems).

But today's systems are still not there. Why?

I think the answer is the same as for why today we have to use
strange and flawed technologies like JavaScript or PHP:
a random series of choices made by people over the years, most
of them driven by momentary needs (or, even worse, marketing)
rather than
conscious choices based on serious evaluation and study.
Apparently those technologies have little to do with computer science,
because if you did the science in the first place, you would never end up
with things like PHP :)

So, it is really "And so it goes".
--
Best regards,
Igor Stasenko AKA sig.
Thiago Silva
2011-07-25 20:00:04 UTC
Permalink
Post by Igor Stasenko
But I think this is a general problem of software evolution. No matter
how hard you try, you cannot foresee all the kinds of interactions,
features, and use cases for your system when you are designing it from the
beginning.
Twenty years ago, systems had completely different requirements
compared to today's. So what was good enough 20 years ago
is not very good today.
And here is the problem: it is hard to radically change software,
especially core concepts, because everyone uses it, gets used to it,
and it becomes a standard.
So you have to maintain compatibility and invent workarounds, patches,
and fixes on top of existing things, rather than radically change the
landscape.
Now, why is it hard to radically change the software?

Is it the failure to foresee all kinds of interactions that creates the
problems? Maybe it is not what we are leaving behind in the design of the
solution, but what the design assumes (whether we are aware of it or not): the
hundreds and hundreds of little assumptions that have no relation to the
actual description of the solution...

Take imperative instructions: when writing a solution in an imperative
language, we impose a chronological order on the instructions even when
that particular ordering is not a requirement of the solution.

So, we are not called up to change the software when the solution changes. We
are called up when something, anything, changes and breaks any of the
assumptions carried by the software. We seem to be writing software that
doesn't appear to be so soft...
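The point about accidental ordering can be made concrete. A small sketch (names are illustrative, not from any particular system): two computations that are independent of each other, written once imperatively (which states an order) and once as a table of pure functions (which states none):

```python
# Imperative version: the text imposes a chronological order even though
# the two computations do not depend on each other.
def stats_imperative(xs):
    total = sum(xs)     # step 1
    biggest = max(xs)   # step 2 -- could equally well run first
    return {"total": total, "max": biggest}

# Declarative version: each result is defined as a pure function of the
# input; no ordering between them is stated, so none is assumed.
RULES = {"total": sum, "max": max}

def stats_declarative(xs):
    return {name: f(xs) for name, f in RULES.items()}

assert stats_imperative([3, 1, 2]) == stats_declarative([3, 1, 2])
```

In the second form, adding, removing, or reordering a rule never breaks a hidden sequencing assumption, which is exactly the kind of incidental constraint the paragraph above describes.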



Cheers,
Thiago
Dethe Elza
2011-07-25 15:01:58 UTC
Permalink
For example, some of our next version of Etoys for children could be done in JS, but not all -- e.g. the Kedama massively parallel programmable particle system made by Yoshiki cannot be implemented to run fast enough in JS. It needs something much faster and lower level -- and this something has not existed until the Chrome native client (and this only in Chrome which is only about 11% penetrated).
You don't have to wait for Chrome Native Client to have native levels of performance. Most of the current crop of browsers (i.e. not IE) use tracing JIT compilers to get close to native performance (in this experiment writing a CPU emulator in JS, one emulated instruction took approximately 20 native instructions: http://weblogs.mozillazine.org/roc/archives/2010/11/implementing_a.html). Javascript is fast and getting faster, with array operations coming soon and Web Workers for safe parallelism (purely message-based threads) available now.

You can play 3D shooters, edit video, synthesize audio, and run Linux on an emulated CPU in Javascript. I'm not sure what part of that is not fast enough.

Some of it is cruft and some of it is less than elegant. But having higher level primitives (like what SVG and Canvas provide) isn't all bad.

--Dethe
Igor Stasenko
2011-07-25 16:25:03 UTC
Permalink
Post by Dethe Elza
For example, some of our next version of Etoys for children could be done in JS, but not all -- e.g. the Kedama massively parallel programmable particle system made by Yoshiki cannot be implemented to run fast enough in JS. It needs something much faster and lower level -- and this something has not existed until the Chrome native client (and this only in Chrome which is only about 11% penetrated).
You don't have to wait for Chrome Native Client to have native levels of performance. Most of the current crop of browsers (i.e. not IE) use tracing JIT compilers to get close to native performance (in this experiment writing a CPU emulator in JS, one emulated instruction took approximately 20 native instructions: http://weblogs.mozillazine.org/roc/archives/2010/11/implementing_a.html). Javascript is fast and getting faster, with array operations coming soon and Web Workers for safe parallelism (purely message-based threads) available now.
You can play 3D shooters, edit video, synthesize audio, and run Linux on an emulated CPU in Javascript. I'm not sure what part of that is not fast enough.
Some of it is cruft and some of it is less than elegant. But having higher level primitives (like what SVG and Canvas provide) isn't all bad.
But don't you see a problem:
it is evolving from a simple 'kiddie' scripting language into a
full-fledged system.

It is of course a good direction, and I welcome it. But how different
would our systems be if the guys who started it 20 years back had thought
a bit about the future?
Why are all those "emerging" technologies just reproducing the same
things that were available for desktop apps for years?
Doesn't it ring a bell that something is fundamentally wrong with
this technology?
--
Best regards,
Igor Stasenko AKA sig.
Wesley Smith
2011-07-25 16:29:45 UTC
Permalink
Post by Igor Stasenko
Why are all those "emerging" technologies just reproducing the same
things that were available for desktop apps for years?
Doesn't it ring a bell that something is fundamentally wrong with
this technology?
Which technology? The technical software one or the human
organization social one?
Igor Stasenko
2011-07-25 16:51:34 UTC
Permalink
Post by Igor Stasenko
Why are all those "emerging" technologies just reproducing the same
things that were available for desktop apps for years?
Doesn't it ring a bell that something is fundamentally wrong with
this technology?
Which technology? The technical/software one, or the human
organizational/social one?
I think both.
But since I am a technician, what I can clearly tell is that JavaScript is a failure.

Why, 20 years ago, weren't we able to use a script to draw something
directly on a page?
It took 20 years for this particular software to evolve to do
something ridiculously basic...
What great progress we have made! :)

Why are things like the Lively Kernel (http://www.lively-kernel.org/)
only possible to deliver today?
It really strikes me.
--
Best regards,
Igor Stasenko AKA sig.
Dethe Elza
2011-07-25 17:01:09 UTC
Permalink
Post by Igor Stasenko
it is evolving from a simple 'kiddie' scripting language into a
full-fledged system.
First off, JS was done in a hurry, but by Brendan Eich, who was hired by Netscape because he had implemented languages before and knew something about what he was doing (and could work fast). JS itself had a marketing requirement to have C-like syntax (curly braces), but the language itself was influenced more by Self and Lisp than any of the C lineage.

And the JS we use today has been evolving (what's wrong with evolving?) since 1995. What is in browsers today was not designed in 10 days; it has been put through the wringer of day-to-day use, standardization processes, and deployment in an extremely wide range of environments. That doesn't make it perfect, and I'm not saying it doesn't have its warts (it does), but to disparage it as "kiddie scripting" reeks to me of trolling, not discussion.
Post by Igor Stasenko
It is of course a good direction and i welcome it. But how different
our systems would be, if guys who started it 20 years back would think
a bit about future?
I don't think we would even be having this discussion if they didn't think about the future, and I think they've spent the intervening years continuing to think about (and implement) the future.
Post by Igor Stasenko
Why all those "emerging" technologies is just reproducing the same
which were available for desktop apps for years?
Security, for one. Browsers (and distributed systems generally) are a hostile environment and the ability to run arbitrary code on a user's machine has to be tempered by not allowing rogue code to erase their files or install a virus. In the meantime, desktops have also become distributed systems, and browser technology is migrating into the OS. That's not an accident.
Post by Igor Stasenko
Doesn't it rings a bell that it is something fundamentally wrong with
this technology?
Well, I doubt we could name a technology there isn't something fundamentally wrong with. I've been pushing Javascript as far as I could for more than a decade now. Browsers (and JS) really were crap back then, no doubt about it. But they are starting to become a decent foundation in the past couple of years, with more improvements to come. And there is something to be said for a safe language with first-class functions that is available anywhere a web browser can run (and further).

Anyhow, not going to spend more time defending JS. Just had to put in my $0.02 CAD.

--Dethe
Igor Stasenko
2011-07-25 18:20:49 UTC
Permalink
Post by Dethe Elza
Post by Igor Stasenko
it is evolving from a simple 'kiddie' scripting language into a
full-fledged system.
First off, JS was done in a hurry, but by Brendan Eich, who was hired by Netscape because he had implemented languages before and knew something about what he was doing (and could work fast). JS itself had a marketing requirement to have C-like syntax (curly braces), but the language itself was influenced more by Self and Lisp than any of the C lineage.
And the JS we use today has been evolving (what's wrong with evolving?) since 1995. What is in browsers today was not designed in 10 days; it has been put through the wringer of day-to-day use, standardization processes, and deployment in an extremely wide range of environments. That doesn't make it perfect, and I'm not saying it doesn't have its warts (it does), but to disparage it as "kiddie scripting" reeks to me of trolling, not discussion.
There was no intent of any disrespect or disparagement.
For me, it is a fact that the original implementation started (like
many other popular projects) as a form of kiddie scripting and then
evolved into something bigger/better.

After all, the starting point defines the way you go.
Post by Dethe Elza
Post by Igor Stasenko
It is of course a good direction and i welcome it. But how different
our systems would be, if guys who started it 20 years back would think
a bit about future?
I don't think we would even be having this discussion if they didn't think about the future, and I think they've spent the intervening years continuing to think about (and implement) the future.
Post by Igor Stasenko
Why all those "emerging" technologies is just reproducing the same
which were available for desktop apps for years?
Security, for one. Browsers (and distributed systems generally) are a hostile environment and the ability to run arbitrary code on a user's machine has to be tempered by not allowing rogue code to erase their files or install a virus. In the meantime, desktops have also become distributed systems, and browser technology is migrating into the OS. That's not an accident.
Yeah... And the only difference I see in today's systems is that before
running a downloaded executable, the system asks "are you sure you want
to run something downloaded from the internet?".
So, we're still not there. Our systems are still not as secure as we
want them to be (otherwise, why ask the user such questions?).
:)
Benoît Fleury
2011-07-25 18:47:31 UTC
Permalink
"So, i think it is more a lack of vision, than technical/security issues."

There might not have been a technical vision behind the WWW, but there is, I
think, a political statement: that information must be
open. Papers like "The Rule of Least Power" [1] make it very clear.
This is, in my opinion, the essence of the web, and companies like
Google are built on it.

I think we're moving away today from this vision with technologies
like HTML5/JavaScript to respond to the application model of the
iPhone/iPad (more "business friendly"). I don't know if it will allow
us to keep this open philosophy or not.

- Benoit


[1] http://www.w3.org/2001/tag/doc/leastPower.html
Post by Igor Stasenko
Post by Dethe Elza
Post by Igor Stasenko
it is evolving from a simple 'kiddie' scripting language into a
full-fledged system.
First off, JS was done in a hurry, but by Brendan Eich, who was hired by Netscape because he had implemented languages before and knew something about what he was doing (and could work fast). JS itself had a marketing requirement to have C-like syntax (curly braces), but the language itself was influenced more by Self and Lisp than any of the C lineage.
And the JS we use today has been evolving (what's wrong with evolving?) since 1995. What is in browsers today was not designed in 10 days; it has been put through the wringer of day-to-day use, standardization processes, and deployment in an extremely wide range of environments. That doesn't make it perfect, and I'm not saying it doesn't have its warts (it does), but to disparage it as "kiddie scripting" reeks to me of trolling, not discussion.
There was no intent of any disrespect or disparagement.
For me, it is a fact that the original implementation started (like
many other popular projects) as a form of kiddie scripting and then
evolved into something bigger/better.
After all, the starting point defines the way you go.
Post by Dethe Elza
Post by Igor Stasenko
It is of course a good direction, and I welcome it. But how different
would our systems be if the guys who started it 20 years back had thought
a bit about the future?
I don't think we would even be having this discussion if they didn't think about the future, and I think they've spent the intervening years continuing to think about (and implement) the future.
Post by Igor Stasenko
Why are all those "emerging" technologies just reproducing the same
things that were available for desktop apps for years?
Security, for one. Browsers (and distributed systems generally) are a hostile environment and the ability to run arbitrary code on a user's machine has to be tempered by not allowing rogue code to erase their files or install a virus. In the meantime, desktops have also become distributed systems, and browser technology is migrating into the OS. That's not an accident.
Yeah... And the only difference I see in today's systems is that before
running a downloaded executable, the system asks "are you sure you want
to run something downloaded from the internet?".
So, we're still not there. Our systems are still not as secure as we
want them to be (otherwise, why ask the user such questions?).
:)
David Harris
2011-07-25 17:30:32 UTC
Permalink
"Those who cannot remember the past are condemned to repeat it."

George Santayana, in Reason in Common Sense (The Life of Reason, Vol. 1)


This certainly rings true in computer science. Great things were done in
the 60s and 70s which we seem to ignore. The Burroughs machines B5000 ...,
SketchPad, Smalltalk, Self, ...

I am always amazed that microcomputers revisited all the sins of computers
and minicomputers (segmented memory, ...), and newer software seems to be
just a new layering of jargon.

Good design requires a good knowledge of history, great design needs more.

Fonc appears to be a refreshing, in-depth look at what we are doing.

David
Post by Dethe Elza
Post by Alan Kay
For example, some of our next version of Etoys for children could be
done in JS, but not all -- e.g. the Kedama massively parallel programmable
particle system made by Yoshiki cannot be implemented to run fast enough in
JS. It needs something much faster and lower level -- and this something has
not existed until the Chrome native client (and this only in Chrome which is
only about 11% penetrated).
Post by Dethe Elza
You don't have to wait for Chrome Native Client to have native levels of
performance. Most of the current crop of browsers (i.e. not IE) use tracing
JIT compilers to get close to native performance (in this experiment writing
a CPU emulator in JS, one emulated instruction took approximately 20 native
http://weblogs.mozillazine.org/roc/archives/2010/11/implementing_a.html).
Javascript is fast and getting faster, with array operations coming soon and
Web Workers for safe parallelism (purely message-based threads) available
now.
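The safety Dethe mentions comes from Workers having no shared state: postMessage/onmessage is the only channel between threads. A toy sketch of that discipline, modeled synchronously here with a hypothetical makeWorker helper (a real browser Worker runs in its own thread and is created with `new Worker(url)`):

```javascript
// Toy model of the Worker message protocol: the "worker" only ever sees
// messages, never shared state, which is what makes Workers safe.
function makeWorker(handler) {
  const worker = {
    onmessage: null,
    postMessage(data) {            // main thread -> worker
      const reply = handler(data); // handler runs in isolation
      if (worker.onmessage) worker.onmessage({ data: reply });
    }
  };
  return worker;
}

// A "worker" that squares numbers; it communicates only via messages.
const w = makeWorker(n => n * n);
let result;
w.onmessage = e => { result = e.data; };
w.postMessage(7); // result === 49
```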
Post by Dethe Elza
You can play 3D shooters, edit video, synthesize audio, and run Linux on
an emulated CPU in Javascript. I'm not sure what part of that is not fast
enough.
Post by Dethe Elza
Some of it is cruft and some of it is less than elegant. But having
higher level primitives (like what SVG and Canvas provide) isn't all bad.
it evolving from a simple 'kiddie' scripting language into a
full-fledged system.
It is of course a good direction and I welcome it. But how different
our systems would be if the guys who started it 20 years back had thought
a bit about the future.
Why are all those "emerging" technologies just reproducing the same
things that have been available for desktop apps for years?
Doesn't it ring a bell that something is fundamentally wrong with
this technology?
--
Best regards,
Igor Stasenko AKA sig.
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
David Barbour
2011-07-25 19:59:16 UTC
Permalink
how different our systems would be if the guys who started it 20 years back
had thought a bit about the future?
The guys who spend their time thinking about it lose, just as they always
do. Worse is better wins on the market. Brendan Eich was right to fear
something even worse than his rapidly hacked brainstorm child - i.e. if it
were not JavaScript/ECMAScript, we might be using proprietary VBScript from
Microsoft.

Do you remember those battles between behemoths trying to place proprietary
technologies in our browsers? I do. 'Embrace and extend' was a strategy
discussed and understood even in grade school. I'm a bit curious whether
Google will be facing an EOLAS patent suit for NaCl, or whether that
privilege will go to whomever uses NaCl and WebSockets to connect browsers
together.

It is interesting to see JS evolve in non-backwards-compatible ways to help
eliminate some of the poisons of its original design - eliminating the
global namespace, dropping callee/caller/arguments, development of a true
module system that prevents name shadowing and allows effective caching, and
so on. Mark Miller, who has performed significant work on object capability
security, has also started to shape JavaScript to make it into a moderately
sane programming language... something that could be used as a more
effective compilation target for other languages.
BGB
2011-07-25 22:20:03 UTC
Permalink
how different our systems would be if the guys who started it 20
years back had thought a bit about the future?
The guys who spend their time thinking about it lose, just as they
always do. Worse is better wins on the market. Brendan Eich was right
to fear something even worse than his rapidly hacked brainstorm child
- i.e. if it were not JavaScript/ECMAScript, we might be using
proprietary VBScript from Microsoft.
or, that happens...

then later it ends up disabled by default, as MS can't manage to prevent
the computer from being "pwnt" ("owned") by viruses.

later on someone else recreates web-scripting in a more "secure" form
using a 3rd-party browser plugin which gives a semi-programmatic
interface based on XSLT and regexes.

...
Do you remember those battles between behemoths trying to place
proprietary technologies in our browsers? I do. 'Embrace and extend'
was a strategy discussed and understood even in grade school. I'm a
bit curious whether Google will be facing an EOLAS patent suit for
NaCl, or whether that privilege will go to whomever uses NaCl and
WebSockets to connect browsers together.
although NaCl is an interesting idea, the fact that it was not
originally designed to be binary-compatible between targets is a drawback.

it is sad though that there is no really "good" compromise between
higher level VMs (Flash, .NET, JVM, ...) and sandboxed native code.


and, no one was like: "hell, why don't we just write a VM that allows us
to run sandboxed C and C++ apps in a browser" (probably with some added
metadata and a validation system).

an example would be, say, if the VM didn't allow accessing external
memory via forged pointers, ...
It is interesting to see JS evolve in non-backwards-compatible ways to
help eliminate some of the poisons of its original design -
eliminating the global namespace, dropping callee/caller/arguments,
development of a true module system that prevents name shadowing and
allows effective caching, and so on. Mark Miller, who has performed
significant work on object capability security, has also started to
shape JavaScript to make it into a moderately sane programming
language... something that could be used as a more effective
compilation target for other languages.
fair enough.

too bad there is no standardized bytecode or anything though, but then I
guess it would at this point be more like browser-integrated Flash or
something, as well as be potentially more subject to awkward versioning
issues, or the bytecode ends up being lame/inflexible/awkward/...

or such...
David Barbour
2011-07-25 23:28:20 UTC
Permalink
Post by BGB
too bad there is no standardized bytecode or anything though, but then I
guess it would at this point be more like browser-integrated Flash or
something, as well as be potentially more subject to awkward versioning
issues, or the bytecode ends up being lame/inflexible/awkward/...
Bytecode is a *bad* idea - all they ever do is reduce our ability to reason
about, secure, and optimize code. Bytecodes have not achieved proposed
cross-language benefits - i.e. they tend to be very language specific
anyway, so you might as well compile to an intermediate application
language.

If you want compiled JavaScript, try Google's "Closure" compiler (JavaScript
-> JavaScript).

But I do agree that JavaScript is not an ideal target for compilation!
BGB
2011-07-26 06:16:07 UTC
Permalink
Post by BGB
too bad there is no standardized bytecode or anything though, but
then I guess it would at this point be more like
browser-integrated Flash or something, as well as be potentially
more subject to awkward versioning issues, or the bytecode ends up
being lame/inflexible/awkward/...
Bytecode is a *bad* idea - all they ever do is reduce our ability to
reason about, secure, and optimize code. Bytecodes have not achieved
proposed cross-language benefits - i.e. they tend to be very language
specific anyway, so you might as well compile to an intermediate
application language.
well, there are pros and cons.

pros:
more compact;
better at hiding ones' source code (decompilers are necessary);
can be executed directly if using an interpreter (no parser/... needed);
...

cons:
often less flexible than the source language;
lots of capabilities may require a decent number of opcodes (say, 500 to
1000);
are typically language specific;
are often sensitive to version issues (absent special care, which often
leads to cruft);
are generally VM-specific;
...
Post by BGB
If you want compiled JavaScript, try Google's "Closure" compiler
(JavaScript -> JavaScript).
But I do agree that JavaScript is not an ideal target for compilation!
the main merit of a bytecode format is that it could shorten the path in
getting to native code, potentially allowing it to be faster.

note that having a bytecode does not preclude having 'eval()' and
similar (in fact, most VMs with eval tend to at least internally use
bytecode anyways).
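The point that an eval can compile to bytecode internally can be sketched in a few lines. This is a hypothetical miniEval, invented for illustration: it parses arithmetic into a flat opcode list (PUSH/ADD/MUL, names made up here) and then runs that list on a stack interpreter, so no parser is involved at execution time.

```javascript
// Toy "eval" that compiles to bytecode, then interprets the bytecode.
function compile(src) {
  const toks = src.match(/\d+|[+*()]/g);
  let pos = 0;
  const code = [];
  function factor() {
    if (toks[pos] === "(") { pos++; expr(); pos++; }  // skip "(" ... ")"
    else code.push(["PUSH", Number(toks[pos++])]);
  }
  function term() {
    factor();
    while (toks[pos] === "*") { pos++; factor(); code.push(["MUL"]); }
  }
  function expr() {
    term();
    while (toks[pos] === "+") { pos++; term(); code.push(["ADD"]); }
  }
  expr();
  return code;
}

function run(code) {
  const stack = [];
  for (const [op, arg] of code) {
    if (op === "PUSH") stack.push(arg);
    else if (op === "ADD") stack.push(stack.pop() + stack.pop());
    else if (op === "MUL") stack.push(stack.pop() * stack.pop());
  }
  return stack.pop();
}

const miniEval = src => run(compile(src));
// miniEval("1+2*(3+4)") === 15
```

Once the bytecode exists, it can be cached or re-run without touching the parser again, which is the shortcut being described.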

even my C compiler internally used a bytecode at one stage, albeit for
historical reasons, a textual representation of the IL was used between
the frontend and backend.

reason: initially I created a textual IL mostly to allow me to more
easily test the codegen, but I wrote the frontend and backend
separately, and ran into a bit of a problem: they didn't fit together.
so, I modified the frontend to spit out the textual format instead of
raw bytecode, and problem fixed (in the backend, the textual format was
converted relatively directly into the bytecode format).

however, I have traditionally not had a serialized/canonical bytecode
format (loading things from source has generally been more common). my
current bytecode loading/saving (for BGBScript) has not been well tested
nor is necessarily even a stable format (it is based on using the binary
data serialization mechanism).


not that it all needs to be "one or the other" though.


or such...
David Barbour
2011-07-26 07:29:16 UTC
Permalink
Post by BGB
well, there are pros and cons.
more compact;
better at hiding ones' source code (decompilers are necessary);
can be executed directly if using an interpreter (no parser/... needed);
...
Counters:
* We can minify source, or zip it. There are tools that do this for
JavaScript.
* Hiding code is usually a bad thing. A pretense of security is always a bad
thing. But, if someone were to insist on an equal capability, I'd point them
at an 'obfuscator' (such as http://javascriptobfuscator.com/default.aspx). A
tool dedicated to befuddling your users will do a much better job in this
role than a simple bytecode compiler.
* we rarely execute bytecode directly; there is a lot of non-trivial setup
for linking and making sure we call the right codes.

Besides, the real performance benefits come from compiling the code - even
bytecode is typically JIT'd. Higher-level source can allow more effective
optimizations, especially across library boundaries. We'll want to cache and
reuse the compiled code, in a format suitable for immediate execution.
JavaScript did a poor job here due to its lack of a module system (to
prevent name shadowing and such), but they're fixing that for ES.next.
Post by BGB
the main merit of a bytecode format is that it could shorten the path in
getting to native code, potentially allowing it to be faster.
Well, it is true that one might save a few cycles for a straightforward
conversion.

The use of a private IL by a compiler isn't the same. You aren't forced to
stabilize a private IL the way you need to stabilize the JVM ops.

Regards,

Dave
BGB
2011-07-26 08:50:13 UTC
Permalink
Post by BGB
well, there are pros and cons.
more compact;
better at hiding ones' source code (decompilers are necessary);
can be executed directly if using an interpreter (no parser/... needed);
...
* We can minify source, or zip it. There are tools that do this for
JavaScript.
* Hiding code is usually a bad thing. A pretense of security is always
a bad thing. But, if someone were to insist on an equal capability,
I'd point them at an 'obfuscator' (such as
http://javascriptobfuscator.com/default.aspx). A tool dedicated to
befuddling your users will do a much better job in this role than a
simple bytecode compiler.
* we rarely execute bytecode directly; there is a lot of non-trivial
setup for linking and making sure we call the right codes.
typically, deflate+bytecode works a little better than deflate+source.
either way, yes, code usually compresses down fairly well.

however, whether or not compiling to bytecode is itself an actually
effective security measure, it is the commonly expected security measure.

also, many people just expect to distribute their programs as
precompiled binaries (rather than, say, as a glorified ZIP of
source-files or similar).

a "compiler" may be expected (as part of "the process") even if it could
be technically more correctly called an archiver or similar (people may
not let go of the established process easily).
Post by BGB
Besides, the real performance benefits come from compiling the code -
even bytecode is typically JIT'd. Higher-level source can allow more
effective optimizations, especially across library boundaries. We'll
want to cache and reuse the compiled code, in a format suitable for
immediate execution. JavaScript did a poor job here due to its lack of
a module system (to prevent name shadowing and such), but they're
fixing that for ES.next.
it depends on if/when the JIT is done.

for example, if a given bytecode block needs to execute, say, 1000 or
10000 instructions before the JIT is triggered on it (otherwise, an
interpreter is used), then generally the JIT issue is less of an issue,
as there may be much code (in libraries, ...) which never gets run
through the JIT (or potentially even ever executed).

granted, yes, there are different ways to approach JIT (whether or not
to inline things, blocks vs traces, ...).
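The threshold idea above can be sketched concretely. This is a hypothetical VM fragment, not any real engine's design: it interprets a function's op list until a call-count threshold, then "JITs" it by generating a specialized closure (here via `new Function`, standing in for real machine-code emission; the threshold and op set are illustrative).

```javascript
const HOT = 3; // illustrative threshold: "compile" after 3 calls

function makeFunction(bytecode) {
  let calls = 0, compiled = null;
  function interpret(x) {         // slow path: walk the op list
    let acc = x;
    for (const [op, n] of bytecode) {
      if (op === "ADD") acc += n;
      else if (op === "MUL") acc *= n;
    }
    return acc;
  }
  function jit() {                // fast path: translate ops to JS source
    const body = bytecode.map(([op, n]) =>
      op === "ADD" ? `x += ${n};` : `x *= ${n};`).join("\n");
    return new Function("x", body + "\nreturn x;");
  }
  return function (x) {
    if (compiled) return compiled(x);
    if (++calls >= HOT) compiled = jit();
    return interpret(x);
  };
}

const f = makeFunction([["ADD", 1], ["MUL", 2]]); // f(x) = (x + 1) * 2
// f(3) === 8 whether interpreted (early calls) or compiled (later calls)
```

Cold code (libraries that never get hot) pays only the cheap interpreter cost, which is the tradeoff being described.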
Post by BGB
the main merit of a bytecode format is that it could shorten the
path in getting to native code, potentially allowing it to be faster.
Well, it is true that one might save a few cycles for a
straightforward conversion.
well, it may matter depending some on the amount of work done by the JIT.

for example, a fixed-form single-pass or 2-pass translator (generally
using direct procedural logic to emit machine code) may be comparably
faster than one which uses more elaborate transformations (such as
internally converting to SSA form or using a multi-stage conversion ...).


also, depending on language it may matter:
for example, in my C compiler, the majority of the running time was
actually used up by the preprocessor and parser (mostly the fault of
headers), and so a bytecode-based format would make a good deal more sense.

for ECMAScript-family languages (my own BGBScript language could be
included here) it is less of an issue, since there are no headers and
the syntax is relatively straightforward to parse quickly (even without
micro-optimizing the parser).
Post by BGB
The use of a private IL by a compiler isn't the same. You aren't
forced to stabilize a private IL the way you need to stabilize the JVM
ops.
yes, fair enough...

but, I guess the question that can be made is whether or not the
bytecode is intended to be a stable distribution format (of the same
sort as JBC or CIL), or intended more as a transient format which may
depend somewhat on the currently running VM (and may change from one
version to the next).

there may be room for a VM to partly expose an unstable IL, potentially
with APIs and conventions in place to reduce the risk of code breaking
due to internal changes to the IL.
David Barbour
2011-07-26 16:05:09 UTC
Permalink
whether or not compiling to bytecode is itself an actually effective
security measure, it is the commonly expected security measure.
Is it? I've not heard anyone speak that way in many, many years. I think
people are getting used to JavaScript.
a "compiler" may be expected (as part of "the process") even if it could be
technically more correctly called an archiver or similar (people may not let
go of the established process easily).
We can benefit from 'compilers' even if we distribute source. For example,
JavaScript->JavaScript compilers can optimize code, eliminate dead code,
provide static warnings, and so on. We should also be able to compile other
languages into the distribution language. I don't mind having a compiler be
part of 'the process'. The issue regards the distribution language, not how
you reach it.

That said, it would often be preferable to distribute source, and use a
library or module to parse and compile it. This would allow us to change our
implementation without redistributing our intention. A language with good
support for 'staging' would be nice.
granted, yes, there are different ways to approach JIT (whether or not to
inline things, blocks vs traces, ...).
Hotspot, too. It is possible to mix interpretation with compilation.
Agreed. We certainly should *design* the distribution language with an eye
on distribution, not just pick an arbitrary language.
but, I guess the question that can be made is whether or not the bytecode
is intended to be a stable distribution format (of the same sort as JBC or
CIL), or intended more as a transient format which may depend somewhat on
the currently running VM (and may change from one version to the next).
We should not tie our users to a particular distribution of the VM. If you
distribute bytecode, or any language, it really should be stable, so that
other people can compete with the implementation.

Regards,

David
BGB
2011-07-26 22:28:33 UTC
Permalink
Post by BGB
whether or not compiling to bytecode is itself an actually
effective security measure, it is the commonly expected security measure.
Is it? I've not heard anyone speak that way in many, many years. I
think people are getting used to JavaScript.
for web-apps maybe, but it will likely be a long time before it becomes
adopted by commercial application software (where the source-code is
commonly regarded as a trade-secret).
Post by BGB
a "compiler" may be expected (as part of "the process") even if it
could be technically more correctly called an archiver or similar
(people may not let go of the established process easily).
We can benefit from 'compilers' even if we distribute source. For
example, JavaScript->JavaScript compilers can optimize code, eliminate
dead code, provide static warnings, and so on. We should also be able
to compile other languages into the distribution language. I don't
mind having a compiler be part of 'the process'. The issue regards the
distribution language, not how you reach it.
yes, but why do we need an HLL distribution language, rather than, say,
a low-level distribution language, such as bytecode or a VM-level
ASM-like format, or something resembling Forth or PostScript?...
Post by BGB
That said, it would often be preferable to distribute source, and use
a library or module to parse and compile it. This would allow us to
change our implementation without redistributing our intention. A
language with good support for 'staging' would be nice.
potentially.
Post by BGB
granted, yes, there are different ways to approach JIT (whether or
not to inline things, blocks vs traces, ...).
Hotspot, too. It is possible to mix interpretation with compilation.
yeah. my present strategy assumes mixed compilation and
interpretation.

back with my C compiler, I tried to migrate to a pure-compilation
strategy (there was no interpreter, only the JIT). this ultimately
created far more problems than it solved.

the alternative was its direct ancestor, a prior version of my BGBScript
VM, which at the time had used a combined interpreter+JIT strategy
(sadly, for later versions the JIT has broken, as the VM has been too
much in flux and I haven't kept up on keeping it working).
Post by BGB
Agreed. We certainly should *design* the distribution language with an
eye on distribution, not just pick an arbitrary language.
yeah. such a language should be capable of expressing a wide range of
languages and semantics.


a basic model which has been working acceptably in my case can be
described roughly as:
sort of like PostScript but also with labels and conditional jumps.

pretty much the entire program representation can be in terms of blocks
and a stack machine.
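That model can be sketched in a few lines: a PostScript-like stack machine whose programs are flat op lists with labels and a conditional jump. The op names here are invented for illustration, not taken from any real VM.

```javascript
// Minimal stack machine: flat op list, labels, conditional jump.
function exec(program) {
  const labels = {};                       // pass 1: resolve label positions
  program.forEach(([op, arg], i) => { if (op === "label") labels[arg] = i; });

  const stack = [];
  let pc = 0;
  while (pc < program.length) {
    const [op, arg] = program[pc++];
    switch (op) {
      case "push": stack.push(arg); break;
      case "add":  stack.push(stack.pop() + stack.pop()); break;
      case "swap": { const b = stack.pop(), a = stack.pop();
                     stack.push(b, a); break; }
      case "over": stack.push(stack[stack.length - 2]); break;
      case "lt":   { const b = stack.pop(), a = stack.pop();
                     stack.push(a < b); break; }
      case "jt":   if (stack.pop()) pc = labels[arg]; break; // jump if true
      case "label": break;                                   // no-op marker
    }
  }
  return stack.pop();
}

// sum 1..5 with an explicit loop; stack layout is [i, sum]
const sum = exec([
  ["push", 1], ["push", 0],                 // i = 1, sum = 0
  ["label", "loop"],
  ["over"], ["add"],                        // sum += i
  ["swap"], ["push", 1], ["add"], ["swap"], // i += 1
  ["over"], ["push", 6], ["lt"],            // loop while i < 6
  ["jt", "loop"],
]);
// sum === 15
```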
Post by BGB
but, I guess the question that can be made is whether or not the
bytecode is intended to be a stable distribution format (of the
same sort as JBC or CIL), or intended more as a transient format
which may depend somewhat on the currently running VM (and may
change from one version to the next).
We should not tie our users to a particular distribution of the VM. If
you distribute bytecode, or any language, it really should be stable,
so that other people can compete with the implementation.
what I meant may have been misinterpreted.

it could be restated more as: should the bytecode even be used for
program distribution?

if not, then it can be used more internal to the VM and languages
running on the VM, such as for implementing lightweight eval mechanisms
for other languages, ...

hence "currently running VM", basically in this sense meaning "which VM
are we running on right now?". if done well, a program, such as a
language compiler, can target the underlying VM without getting tied too
much into how the VM's IL works, allowing both some level of portability
for the program, as well as reasonably high performance and flexibility
for the VM to change its IL around as-needed (or potentially bypass the
IL and send the code directly to native code).

most likely though, the above would largely boil down to emitting code
via an API.
granted, yes, there are good and bad points to API-driven code generation.

an analogy would be something "sort of like OpenGL, but for compilers".

(side note:
actually, at the moment the thought of an OpenGL-like codegen interface
seems interesting. but I am thinking more in the context of using it as
a means of emitting native code. however, sadly, most of my prior
attempts at separating the codegen from the higher-level IL mechanics,
... have not gone well. ultimately some "structure" may be necessary. )


as for the alternative case, note ARM:
ARM and Thumb machine code are often used as distribution formats.

however, ARM is also fairly model specific, and so code intended for one
processor model may not work on another, and code for an earlier
processor may not work on a later one, ...

yet, in general, it has been doing fairly well market-wise.

(in general I am left with a bit of mixed feelings WRT ARM in general,
although it does a few things well, I would personally still rather live
in a world based on x86...).

or such...
Casey Ransberger
2011-07-27 00:45:04 UTC
Permalink
whether or not compiling to bytecode is itself an actually effective security measure, it is the commonly expected security measure.
Is it? I've not heard anyone speak that way in many, many years. I think people are getting used to JavaScript.
for web-apps maybe, but it will likely be a long time before it becomes adopted by commercial application software (where the source-code is commonly regarded as a trade-secret).
Worth pointing out that server side JS dodges this "problem." Now that Node is out there, people are actually starting to do stuff with JS that doesn't run on the client, so it's happening... whether or not it's a real qualitative improvement for anyone.
David Goehrig
2011-07-27 13:37:58 UTC
Permalink
Post by Casey Ransberger
Worth pointing out that server side JS dodges this "problem." Now that Node is out there, people are actually starting to do stuff with JS that doesn't run on the client, so it's happening... whether or not it's a real qualitative improvement for anyone.
Well considering that Netscape Enterprise Server 2.0 ca. 1996 (15 years ago btw) did server side JavaScript, we've seen no change at all. I also say this having built several businesses on top of a C10k style server with server side JavaScript for the past 8 years (a contemporary of nginx).

The lesson we collectively fail to learn is that survival is not always of the fittest, but of those who most realized their potential for change.

If I've learned anything from trying to reimplement Self in JavaScript it is that JavaScript is immensely fungible. I have in a few lines of code working variants of lisp and forth and in a few more lines will finish Self. Since I've written x86 assemblers in JS, I'm certain I could turn any of these into a self hosting environment in under 1kloc.
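In that spirit, a toy Forth variant really does fit in a few lines of JS. This is a hypothetical sketch with a word dictionary and colon definitions; the word set is illustrative, and nested definitions and error handling are omitted.

```javascript
// Tiny Forth-style interpreter: whitespace-separated words, a data
// stack, and a dictionary that user code can extend with ": name ... ;".
function forth(src) {
  const stack = [];
  const dict = {
    "+":   () => stack.push(stack.pop() + stack.pop()),
    "*":   () => stack.push(stack.pop() * stack.pop()),
    "dup": () => stack.push(stack[stack.length - 1]),
  };
  function runWord(w) { dict[w] ? dict[w]() : stack.push(Number(w)); }

  const words = src.trim().split(/\s+/);
  for (let i = 0; i < words.length; i++) {
    if (words[i] === ":") {               // define a new word: : name body ;
      const name = words[++i], body = [];
      while (words[++i] !== ";") body.push(words[i]);
      dict[name] = () => body.forEach(runWord);
    } else runWord(words[i]);
  }
  return stack;
}

// "square" is defined in Forth itself, then applied.
const s = forth(": square dup * ; 7 square 1 +");
// s is [50]
```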

Evolving JS is a pragmatic approach to evolving the ecosystem. And while technically nothing prevents people from building entire operating systems in JavaScript, culturally we are averse to it. (And since Linux boots in a JS x86 emulator, I don't buy any argument against the technical aspect.)

The greatest danger we face is the cultural tendency to rarefy programming, math, and computer science. For a simple system to be widely adopted, it must conform to existing cultural constraints to seem familiar, but only so much.

_('tk')
('does:','text:', 's | HTML("element:","span")("contains:", s, 0, s.length)')
('does:','image:', 'u | HTML("element:","img")("src:",u)')
('does:','sound:', 'u | HTML("element:","audio")("src:",u)')
('does:','video:', 'u | HTML("element:","video")("src:",u)')
('does:','box:', '| HTML("element:","div")')
Is straight JavaScript. But it looks a little like lisp, a little like smalltalk, and borrowed from self and forth.
I bet you can guess what it does by reading it. How it does it though would only be obvious to a JS expert. But the GUI that makes use of it allows a child to build web pages procedurally.
I doubt 10% of programmers will ever be able to grasp the concept of building grammars to program. But with the right useful abstractions, the right geometry of objects, they won't have to. We are still a long way away from having a circularly linked directed graph as a fundamental datatype, or an n-dimensional mapping operator that sends messages to past states of programs and returns what the results would have been. Hell people still type linear text in vi.

Dave
BGB
2011-07-27 17:40:48 UTC
Permalink
On Jul 26, 2011, at 8:45 PM, Casey Ransberger
Post by Casey Ransberger
Worth pointing out that server side JS dodges this "problem." Now
that Node is out there, people are actually starting to do stuff with
JS that doesn't run on the client, so it's happening... whether or
not it's a real qualitative improvement for anyone.
Well considering that Netscape Enterprise Server 2.0 ca. 1996 (15
years ago btw) did server side JavaScript. We've seen no change at
all. I also say this having built several businesses on top of a C10k
style server with server side JavaScript for the past 8 years (a
contemporary of nginx).
BTW: I am not against JS, as, hell, my own scripting language is based
on it (and tries for ECMA-262 conformance, albeit it differs on a few
points).

in my case, its use is more intended for standalone desktop-style apps.
The lesson we collectively fail to learn is that survival is not always
of the fittest, but of those who most realized their potential for change.
I think "fitness" and "merit" are often misunderstood ideas.

some people seem to see it as though all of the "good" solutions die and
fade away, but most options which die/fade have problems in other
areas, and people end up overlooking important parts of the problem
space (one person overlooks economic aspects, another flexibility, a 3rd
overlooks performance, another overlooks memory footprint, ...).

the solutions which do best often tend to have the best sets of
tradeoffs and/or to be well suited to a particular niche, albeit typically
not being ideal in any single area.
If I've learned anything from trying to reimplement Self in JavaScript
it is that JavaScript is immensely fungible. I have in a few lines of
code working variants of lisp and forth and in a few more lines will
finish Self. Since I've written x86 assemblers in JS, I'm certain I
could turn any of these into a self hosting environment in under 1kloc.
dunno, it depends.

my guess for a JS->x86 machine-code compiler which can compile itself
and does eval/... would probably be at least around 10-25kloc.

granted, I am not necessarily the master of compact code.

my VM is presently around 450 kloc of C.
this seems fairly typical (compared with JägerMonkey and V8).

much more of the complexity goes into "infrastructure" though,
especially the FFI and dynamic typesystem machinery (numeric tower, OO
facilities, ...), which probably make up the bulk of the VM in its
present form.

the part for parsing/interpreting the HLL is actually a fairly small
part of the whole.

actually, my native codegen seems to follow a similar pattern, with much
of the "bulk" of the code being related to things like the numeric
tower, type-conversions, endless special cases for moving a value from
location A to B (load and store operations, ...), ...

as well as things like having to gloss over the native ABIs (some ABIs,
such as the SysV/AMD64 ABI, are fairly painful to work with...).

using an abstract stack machine essentially means having to partly
virtualize the stack, mapping it internally to temporary variables and
registers, and utilizing "magic" to get everything into the correct
registers and onto the correct place on the stack at the point the call
is made, ...
Evolving JS is a pragmatic approach to evolving the ecosystem. And
while technically nothing prevents people from building entire
operating systems in JavaScript, culturally we are adverse to it. (and
since Linux boots in a js x86 emulator I don't buy any argument
against the technical aspect)
I figure there are a few problem areas...

one would have to implement the typesystem/GC/... in JS, which would
mean needing capabilities a bit more like C at this point:
ability to write constant-memory code (and a defined machine-level
memory model);
pointers, pointer arithmetic, and pointer<->integer conversions;
...

that or:
the ability to interface directly with ASM code and/or inline assembler;
writing much of the GC and dynamic typesystem machinery in ASM.


much of the rest could probably be fairly clean though (apart from
"magic" needed to interface with the CPU or various pieces of hardware).

all of this though would likely mostly mean some language extensions,
and a well-defined ABI at this level.

some of my own stuff is built using partly hacked versions of the C
ABI's, as well as name-mangling (my notation partly derived as a mix of
the JVM/JNI notation and the IA64 C++ ABI).
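For illustration only (this is an invented scheme, not BGB's actual notation), a JNI-flavored mangler pairs the function name with one type code per argument, so overloads get distinct linker-level names:

```javascript
// Hypothetical mangling scheme: name + "__" + one JVM-style code per
// argument type. The table below is illustrative, not a real ABI.
const typeCode = {
  int: "I", float: "F", double: "D",
  string: "Ljava_lang_String_2",   // JNI-style escaping of an object type
};

function mangle(name, argTypes) {
  return name + "__" + argTypes.map(t => typeCode[t]).join("");
}

// mangle("draw", ["int", "float"]) === "draw__IF"
```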


personally, I would prefer a system where JS <-> C calls are fairly
transparent (much like C <-> C++ calls); this way, each language can be
used for what it is good at, without so much of an expectation to commit
to one or the other, and without big piles of nasty boilerplate needed
to make the interfacing work.

granted, a little fudging is needed at the borders, as direct idiomatic
C to idiomatic JS is unlikely to work entirely smoothly even if the FFI
worked "perfectly" (and in the implementation, there ends up being a bit
of typesystem overlap, where a VM may need to deal, dynamically, with a
good portion of the native C typesystem).

direct (full) C++ <-> JS interfacing would likely be a bit more complex,
especially WRT OOP (making a raw C++ class directly usable in JS, and
finding some way to make a JS object look to C++ like a plain
class-based object).

the above is a problem I am currently ignoring in my effort.

in my case I figure I can probably just use some operator overloading on
the dynamic-references and call it "good enough" (otherwise, C++ has to
use C-based interfacing, and I can call it "good enough").
The greatest danger we face is the cultural tendency to rarefy
programming, math, and computer science. For a simple system to be
widely adopted, it must conform to existing cultural constraints to
seem familiar, but only so much.
_('tk')
('does:','text:', 's | HTML("element:","span")("contains:", s, 0, s.length)')
('does:','image:', 'u | HTML("element:","img")("src:",u)')
('does:','sound:', 'u | HTML("element:","audio")("src:",u)')
('does:','video:', 'u | HTML("element:","video")("src:",u)')
('does:','box:', '| HTML("element:","div")')
Is straight JavaScript. But it looks a little like lisp, a little like smalltalk, and borrowed from self and forth.
I bet you can guess what it does by reading it. How it does it though would only be obvious to a JS expert. But the GUI that makes use of it allows a child to build web pages procedurally.
it took a bit of looking, but I guess it probably involves closures or
similar (each call returns a closure, which is then applied).
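the chained-call pattern could be sketched minimally like below. This is only a guess at a possible mechanism, not the original GUI's code: all names here are hypothetical, and method bodies are plain JS functions rather than the string blocks in the original snippet.

```javascript
// Hypothetical sketch: '_' looks up/creates an object which is itself a
// callable dispatcher; calling it with 'does:' installs a method and
// returns the dispatcher again, allowing the chained-call style.
function makeObject(name) {   // 'name' kept for flavor; unused here
  const methods = {};
  function self(selector, ...args) {
    if (selector === 'does:') {
      methods[args[0]] = args[1];  // ('does:', 'sel:', fn) installs a method
      return self;                 // returning self allows chained calls
    }
    return methods[selector](...args);
  }
  return self;
}

const _ = makeObject;
const tk = _('tk');
tk('does:', 'greet:', (s) => 'hello ' + s);
console.log(tk('greet:', 'world')); // prints "hello world"
```

each parenthesized group in the original then reads as one call against the closure returned by the previous call.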

some of this may depend on whatever '_' is defined as.

in my own practice, I have often used '_' as a no-op placeholder though
(dummy arguments, unnamed variables, ...), which could potentially
conflict with using it for something like the above.
I doubt 10% of programmers will ever be able to grasp the concept of building grammars to program. But with the right useful abstractions, the right geometry of objects, they won't have to. We are still a long way from having a circularly linked directed graph as a fundamental datatype, or an n-dimensional mapping operator that sends messages to past states of programs and returns what the results would have been. Hell, people still type linear text in vi.
I use vi sometimes as well, but generally prefer Notepad-style graphical
editors (often gedit when using Linux, currently Notepad2 on Windows).

I guess I could use SciTE if I wanted to use the same editor on both
OS's (both Notepad2 and SciTE are built on Scintilla).

I generally prefer plain editors to the likes of IDE's like Visual
Studio or Eclipse.
David Barbour
2011-07-27 20:52:23 UTC
Permalink
Post by BGB
I think "fitness" and "merit" are some often misunderstood ideas.
People understand just fine that a solution of technical merit can fail due
to market forces, positioning, and fear of change. But they don't need to
like it.
Post by BGB
the solutions which do best often tend to have the best sets of tradeoffs
and/or be well suited to a given niche, albeit typically not being
ideal in any single area.
Be careful! Correlation isn't causation. It is true that solutions that
succeed tend to have 'good' tradeoffs and/or are well suited to a niche. *But
so are a lot of solutions that fail.* Indeed, the failures often have even
better tradeoffs, because they are built with hindsight.

Inertia and circumstance easily trample a 'better' solution, especially for
a technology such as a programming language that grows a whole ecosystem
around it (IDEs, optimizers, integration with applications, et cetera).

*Whether you intend it or not*, saying that the popular solution is the
'best' one without even researching what has been attempted seems ignorant,
insulting, and prejudicial. Chances are, even if BGBScript is an improvement
on JavaScript in every way you or I can imagine, it will fail. How would you
feel if someone, who has never even looked up your language, were to say:
"JavaScript succeeds because it makes the best tradeoffs."
BGB
2011-07-27 21:38:26 UTC
Permalink
Post by BGB
I think "fitness" and "merit" are some often misunderstood ideas.
People understand just fine that a solution of technical merit can
fail due to market forces, positioning, and fear of change. But they
don't need to like it.
the solutions which do best often tend to have the best sets of
tradeoffs and/or being well suited to a various niche, albeit
typically not being ideal in any single area.
Be careful! Correlation isn't causation. It is true that solutions
that succeed tend to have 'good' tradeoffs and/or are well suited to a
niche. *But so are a lot of solutions that fail.* Indeed, the failures
often have even better tradeoffs, because they are built with hindsight.
note that my definition of "fitness" also includes marketing forces and
economics.
for example, something can be more "fit" because it has lots of
money invested into its marketing effort, ...

resistance to change is also an important factor in fitness, and any
ideal solution would also need to include this, as well as other
factors, such as name-recognition, or legal factors such as trademarks
and patent issues, ...
Post by BGB
Inertia and circumstance easily trample a 'better' solution,
especially for a technology such as a programming language that grows
a whole ecosystem around it (IDEs, optimizers, integration with
applications, et cetera).
/Whether you intend it or not/, saying that the popular solution is
the 'best' one without even researching what has been attempted seems
ignorant, insulting, and prejudicial. Chances are, even if BGBScript
is an improvement on JavaScript in every way you or I can imagine, it
will fail. How would you feel if someone, who has never even looked up
your language, were to say: "JavaScript succeeds because it makes the
best tradeoffs."
in general, it shouldn't matter, since for the most part, BGBScript is
mostly a JavaScript superset (much like C++ is a C superset), so
code written in a JS-like subset will work in either one, and they are
not competing for the same domain anyway (BS being intended mostly for
app scripting, rather than web servers or clients).

granted, this doesn't cover the case where the code depends on some of
my extensions...

otherwise, whether or not my language is adopted by other people is
really not of much significance.


many of the core extensions (apart from the FFI), were borrowed from
ActionScript 3, ...

however, with any luck, the ECMAScript Harmony working group will settle
on a design similar to my own (and hopefully not wildly incompatible),
in which case things will be more convenient for me.


now, it is probably an issue if one considers ECMAScript, JavaScript,
ActionScript, ... to be distinct and competing languages.


or such...
David Barbour
2011-07-28 00:47:58 UTC
Permalink
Post by BGB
note that my definition of "fitness" also includes marketing forces and
economics.
for example, something can be more "fit" because it has lots of money
invested into its marketing effort, ...
I objected specifically to your mentions of '*merit*' and '*best sets of
tradeoffs and/or being well suited to a various niche*', not fitness. I
grant that 'fitness' is determined by the environment, but that has nothing
to do with my objection.
BGB
2011-07-28 02:44:27 UTC
Permalink
Post by BGB
note that my definition of "fitness" also includes marketing
forces and economics.
for example, something can have be more "fit" because it has lots
of money invested into its marketing effort, ...
I objected specifically to your mentions of '/merit/' and '/best sets
of tradeoffs and/or being well suited to a various niche/', not
fitness. I grant that 'fitness' is determined by the environment, but
that has nothing to do with my objection.
merit and fitness seem equivalent IMO, and the "best set of tradeoffs"
may happen to include things like marketing, economics, ...

John Nilsson
2011-07-26 13:43:55 UTC
Permalink
Post by BGB
the main merit of a bytecode format is that it could shorten the path in
getting to native code, potentially allowing it to be faster.
It seems to me that there is a lot of derivation of information going on
when interpreting source code. First a one dimensional stream of characters
is transformed into some kind of structured representation, possibly in
David Barbour
2011-07-26 16:22:12 UTC
Permalink
In other words a _lot_ of CPU cycles are spent on deriving the same
information, again and again, each time a program is loaded. Not only is
this a waste of energy it also means that each interpreter of the program
needs to be able to derive all this information on their own, which leads to
very complex programs (expensive to develop).
The right answer here: use a cache. I.e., treat compiled code (or, at
least, code pre-parsed to an AST and annotated) as a cached optimization
of the source. This does require we design our language and distribution
protocol for effective hashing and caching.
Would it not be a big improvement if we could move from representing
programs as text-strings into representing them in some format capable of
representing all this derived information? Does anyone know of attempts in
this direction?
There have been a lot of efforts on proof-carrying code and the like, that
you could look into.

But I do not believe that this makes a good distribution format. The basic
issue is: the desired annotations are typically 'private' to an
implementation of a compiler or interpreter. Attempts to standardize the set
of derived information would always miss something. The optimizations one
can perform are often contextual (especially for space optimizations - e.g.
can you replace this block with code that's already installed?) but
information about context doesn't transfer well.

I do not believe we'd get a 'big improvement' from these efforts. Consider
these alternatives:
* modularity and caching
* obtain source through a trusted 'intermediate' service that can proxy,
cache, and compile on your behalf. (This is how I envision embedded systems
working with source - ship the URI off to a trusted compiler in the cloud,
and use the signed feedback).
John Zabroski
2011-07-27 00:57:37 UTC
Permalink
Post by David Barbour
In other words a _lot_ of CPU cycles are spend on deriving the same
information, again and again, each time a program is loaded. Not only is
this a waste of energy it also means that each interpreter of the program
needs to be able to derive all this information on their own, which leads to
very complex programs (expensive to develop).
The right answer here: use a cache. I.e. treat compiled code (or, at least,
pre-parsed to AST and annotated code) as a cached optimization of source.
This does require we design our language and distribution protocol for
effective hashing and caching.
Would it not be a big improvement if we could move from representing
programs as text-strings into representing them in some format capable of
representing all this derived information? Does any one know of attempts in
this direction?
There have been a lot of efforts on proof-carrying code and the like, that
you could look into.
But I do not believe that this makes a good distribution format. The basic
issue is: the desired annotations are typically 'private' to an
implementation of a compiler or interpreter. Attempts to standardize the set
of derived information would always miss something. The optimizations one
can perform are often contextual (especially for space optimizations - e.g.
can you replace this block with code that's already installed?) but
information about context doesn't transfer well.
I do not believe we'd get a 'big improvement' from these efforts. Consider
* modularity and caching
* obtain source through a trusted 'intermediate' service that can proxy,
cache, and compile on your behalf. (This is how I envision embedded systems
working with source - ship the URI off to a trusted compiler in the cloud,
and use the signed feedback).
This is basically right, of course. Didn't we have a conversation about
this on Lambda the Ultimate a year or so ago? It may've been in the context
of discussing Wikis as IDEs. ;-) Or it could have been the thread about how
just-in-time compilation works. The basic idea is that you have an
"auditing machine" transcribe dynamic optimizations. This basically saves
the dynamic profile of your code and shares it on the network as a resource
for compilers-as-a-service to use to produce better binaries.



To John,

Content-based networking can also be used in the way David is suggesting to
distribute compiler optimizations via content-delivery networks. The major
feature that needs to work correctly is a key ring. This isn't very
different from having secure package management and verification of package
contents before pushing a Redhat or Debian package out to all computers on
your network. It's not a hard problem. It just isn't done right now
because that is not how most people are trained to think about computer
systems.
BGB
2011-07-26 21:17:38 UTC
Permalink
Post by BGB
the main merit of a bytecode format is that it could shorten the
path in getting to native code, potentially allowing it to be faster.
It seems to me that there is a lot of derivation of information going
on when interpreting source code. First a one dimensional stream of
characters is transformed into some kind of structured representation,
Casey Ransberger
2011-07-27 00:41:38 UTC
Permalink
I doubt this is what you're thinking -- not sure I read this clearly -- but I caught an interview with John McCarthy on the 'tubes wherein he seemed to indicate that he was interested in a universal intermediate representation.

I thought the idea was cool.
the main merit of a bytecode format is that it could shorten the path in getting to native code, potentially allowing it to be faster.
It seems to me that there is a lot of derivation of information going on when interpreting source code. First a one dimensional stream of characters is transformed into some kind of structured representation, possibly in several steps. From the structural representation a lot of inference about the program happens to deduce types and other properties of the structure. Once inside a VM even more information is gathered such as determining if call sites are typically monomorphic or not, and so on.
In other words a _lot_ of CPU cycles are spent on deriving the same information, again and again, each time a program is loaded. Not only is this a waste of energy, it also means that each interpreter of the program needs to be able to derive all this information on its own, which leads to very complex programs (expensive to develop).
Would it not be a big improvement if we could move from representing programs as text strings to representing them in some format capable of carrying all this derived information? Does anyone know of attempts in this direction?
BR,
John
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Julian Leviston
2011-07-25 23:59:42 UTC
Permalink
Do you remember those battles between behemoths trying to place proprietary technologies in our browsers? I do. 'Embrace and extend' was a strategy discussed and understood even in grade school. I'm a bit curious whether Google will be facing an EOLAS patent suit for NaCl, or whether that privilege will go to whomever uses NaCl and WebSockets to connect browsers together.
Yeah, I think the truly smart man releases something crappy FIRST, before anyone else gets out of the gate, but has a plan to improve it. So the "crappy" thing is actually very smart, because it doesn't "hem in" the future (i.e. the design decisions made in it allow step-by-step shifting toward wherever the vision is directed). So long as there's a vision... And by crappy, I really mean basic, as in it doesn't do much... or, put another way, it "does just enough" to get the job done.

"Release early, release often".

Julian.
Alan Kay
2011-07-26 03:21:00 UTC
Permalink
Again good points.

Java itself could have been fixed if it were not for the Sun marketing people
who rushed "the electronic toaster language" out where it was not fit to go. Sun
was filled with computerists who knew what they were doing, but it was quickly
too late.

And you are right about Mark Miller.

My complaint is not about JS per se, but about whether it is possible to get all
the cycles the computer has for certain purposes. One of the main unnecessary
violations of the spirit of computing in the web is that it wasn't set up to
allow safe access to the whole machine -- despite this being the goal of good OS
design since the mid-60s.

Cheers,

Alan




________________________________
From: David Barbour <***@gmail.com>
To: Fundamentals of New Computing <***@vpri.org>
Sent: Mon, July 25, 2011 12:59:16 PM
Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam


On Mon, Jul 25, 2011 at 9:25 AM, Igor Stasenko <***@gmail.com> wrote:

how different our systems would be, if guys who started it 20 years back would
think a bit about future?

The guys who spend their time thinking about it lose, just as they always do.
Worse is better wins on the market. Brendan Eich was right to fear something
even worse than his rapidly hacked brainstorm child - i.e. if it were not
JavaScript/EcmaScript, we might be using proprietary VBScript from Microsoft.

Do you remember those battles between behemoths trying to place proprietary
technologies in our browsers? I do. 'Embrace and extend' was a strategy
discussed and understood even in grade school. I'm a bit curious whether Google
will be facing an EOLAS patent suit for NaCl, or whether that privilege will go
to whomever uses NaCl and WebSockets to connect browsers together.

It is interesting to see JS evolve in non-backwards-compatible ways to help
eliminate some of the poisons of its original design - eliminating the global
namespace, dropping callee/caller/arguments, development of a true module system
that prevents name shadowing and allows effective caching, and so on. Mark
Miller, who has performed significant work on object capability security, has
also started to shape JavaScript to make it into a moderately sane programming
language... something that could be used as a more effective compilation target
for other languages.
Igor Stasenko
2011-07-26 12:34:25 UTC
Permalink
Post by Alan Kay
Again good points.
Java itself could have been fixed if it were not for the Sun marketing
people who rushed "the electronic toaster language" out where it was not fit
to go. Sun was filled with computerists who knew what they were doing, but
it was quickly too late.
And you are right about Mark Miller.
My complaint is not about JS per se, but about whether it is possible to get
all the cycles the computer has for certain purposes. One of the main
unnecessary violations of the spirit of computing in the web is that it
wasn't set up to allow safe access to the whole machine -- despite this
being the goal of good OS design since the mid-60s.
Indeed. And the only lucky players in the field who can access raw
machine power are plugins like Flash,
and only because they gained enough trust as being "well safe".
For the rest of developers (if they are not using existing
mechanisms in the browser), the computer's resources remain closed
behind a solid fence.

Another interesting fact: while we have hardware which can do
virtualization (never mind software),
the only widely adopted application of it is running one
operating system inside another, host one.

But hey, things could be much more lightweight!
For instance, look at the SqueakNOS project. Its boot time is like 10-15
seconds (and most of that belongs not to SqueakNOS itself but to the
BIOS and boot loader).

So, it remains a mystery to me why we cannot use web+virtualization.
It seems like a good balance between accessing raw machine power and
being safe at the same time.

I hope that NaCl partly uses it, but then I wonder why they spend
the effort to validate native code, because if something went wrong
you could just kill it or freeze it (or do anything else which
virtualization allows you to do),
without any chance of putting the host system in danger.
Post by Alan Kay
Cheers,
Alan
--
Best regards,
Igor Stasenko AKA sig.
BGB
2011-07-26 21:04:48 UTC
Permalink
Post by Igor Stasenko
Post by Alan Kay
Again good points.
Java itself could have been fixed if it were not for the Sun marketing
people who rushed "the electronic toaster language" out where it was not fit
to go. Sun was filled with computerists who knew what they were doing, but
it was quickly too late.
And you are right about Mark Miller.
My complaint is not about JS per se, but about whether it is possible to get
all the cycles the computer has for certain purposes. One of the main
unnecessary violations of the spirit of computing in the web is that it
wasn't set up to allow safe access to the whole machine -- despite this
being the goal of good OS design since the mid-60s.
Indeed. And the only lucky players in the field who can access raw
machine power is plugins like Flash.
And only because they gained enough trust as being "well safe".
As for the rest of developers (if they are not using existing
mechanisms in browser) the computer's resources still closed behind
solid fence.
Another interesting fact that while we having a hardware which can do
virtualization (not saying about software),
the only application of it which adopted widely is to run one
operating system inside another, host one.
But hey, things could be much more lightweight!
For instance , look at SqueakNOS project. Its boot time is like 10-15
seconds (and most of it not belongs to SqueakNOS itself but to bios
and boot loader).
So, it remains a mystery to me, why we cannot use web+virtualization.
It seems like a good balance between accessing raw machine power and
being safe at the same time.
I hope that NaCl partly using it , but then i wonder why they spending
an effort to validate native code, because if something vent wrong,
you can just kill it or freeze it (or do anything which virtualization
allow you to do),
without any chance of putting host system in danger.
because NaCl was not built on virtualization... from what I heard, it
was built using segmented memory.

the problem then is that unverified code could potentially far-pointer
its way right out of the sandbox.
Igor Stasenko
2011-07-27 01:04:15 UTC
Permalink
Post by BGB
Post by Igor Stasenko
Post by Alan Kay
Again good points.
Java itself could have been fixed if it were not for the Sun marketing
people who rushed "the electronic toaster language" out where it was not fit
to go. Sun was filled with computerists who knew what they were doing, but
it was quickly too late.
And you are right about Mark Miller.
My complaint is not about JS per se, but about whether it is possible to get
all the cycles the computer has for certain purposes. One of the main
unnecessary violations of the spirit of computing in the web is that it
wasn't set up to allow safe access to the whole machine -- despite this
being the goal of good OS design since the mid-60s.
Indeed. And the only lucky players in the field who can access raw
machine power is plugins like Flash.
And only because they gained enough trust as being "well safe".
As for the rest of developers (if they are not using existing
mechanisms in browser) the computer's resources still closed behind
solid fence.
Another interesting fact that while we having a hardware which can do
virtualization (not saying about software),
the only application of it which adopted widely is to run one
operating system inside another, host one.
But hey, things could be much more lightweight!
For instance , look at SqueakNOS project. Its boot time is like 10-15
seconds (and most of it not belongs to SqueakNOS itself but to bios
and boot loader).
So, it remains a mystery to me, why we cannot use web+virtualization.
It seems like a good balance between accessing raw machine power and
being safe at the same time.
I hope that NaCl partly using it , but then i wonder why they spending
an effort to validate native code, because if something vent wrong,
you can just kill it or freeze it (or do anything which virtualization
allow you to do),
without any chance of putting host system in danger.
because NaCl was not built on virtualization... from what I heard it was
built on using segmented memory.
the problem then is that unverified code could potentially far-pointer its
way right out of the sandbox.
Hmm. As they mention on this page:
------
http://code.google.com/games/technology-nacl.html

Native Client is a sandboxing system. It runs code in a virtual
environment where all OS calls are intercepted by the NaCl runtime.
This has two benefits. First, it enhances security by preventing
untrusted code from making dangerous use of the operating system.
Second, because OS calls are virtualized, NaCl code is OS-independent.
You can run the same binary executable on MacOS, Linux, and Windows.

But syscall virtualization by itself wouldn't be as secure as
Javascript, because clever hackers can always find ways to exit the
sandbox. NaCl's real contribution is a software verification system
that scans each executable module before it runs. The verifier imposes
a set of constraints on the program that prevent the code from exiting
the sandbox. This security comes at a relatively small performance
price, with NaCl code generally running at about 95% the speed of
equivalent compiled code.
------

so, it leaves me clueless how it can escape the sandbox if you are
intercepting all system calls.
And what level of safety will static analysis give you, if your NaCl
module could be a virtual machine with a JIT -- so potentially it could
generate and run arbitrary native code, or download code from the
internet and then execute it?
--
Best regards,
Igor Stasenko AKA sig.
John Zabroski
2011-07-27 01:23:54 UTC
Permalink
Post by Alan Kay
Post by BGB
Post by Igor Stasenko
Post by Alan Kay
Again good points.
Java itself could have been fixed if it were not for the Sun marketing
people who rushed "the electronic toaster language" out where it was
not
Post by BGB
Post by Igor Stasenko
Post by Alan Kay
fit
to go. Sun was filled with computerists who knew what they were doing, but
it was quickly too late.
And you are right about Mark Miller.
My complaint is not about JS per se, but about whether it is possible
to
Post by BGB
Post by Igor Stasenko
Post by Alan Kay
get
all the cycles the computer has for certain purposes. One of the main
unnecessary violations of the spirit of computing in the web is that it
wasn't set up to allow safe access to the whole machine -- despite this
being the goal of good OS design since the mid-60s.
Indeed. And the only lucky players in the field who can access raw
machine power is plugins like Flash.
And only because they gained enough trust as being "well safe".
As for the rest of developers (if they are not using existing
mechanisms in browser) the computer's resources still closed behind
solid fence.
Another interesting fact that while we having a hardware which can do
virtualization (not saying about software),
the only application of it which adopted widely is to run one
operating system inside another, host one.
But hey, things could be much more lightweight!
For instance , look at SqueakNOS project. Its boot time is like 10-15
seconds (and most of it not belongs to SqueakNOS itself but to bios
and boot loader).
So, it remains a mystery to me, why we cannot use web+virtualization.
It seems like a good balance between accessing raw machine power and
being safe at the same time.
I hope that NaCl partly using it , but then i wonder why they spending
an effort to validate native code, because if something vent wrong,
you can just kill it or freeze it (or do anything which virtualization
allow you to do),
without any chance of putting host system in danger.
because NaCl was not built on virtualization... from what I heard it was
built on using segmented memory.
the problem then is that unverified code could potentially far-pointer
its
Post by BGB
way right out of the sandbox.
------
http://code.google.com/games/technology-nacl.html
Native Client is a sandboxing system. It runs code in a virtual
environment where all OS calls are intercepted by the NaCl runtime.
This has two benefits. First, it enhances security by preventing
untrusted code from making dangerous use of the operating system.
Second, because OS calls are virtualized, NaCl code is OS-independent.
You can run the same binary executable on MacOS, Linux, and Windows.
But syscall virtualization by itself wouldn't be as secure as
Javascript, because clever hackers can always find ways to exit the
sandbox. NaCl's real contribution is a software verification system
that scans each executable module before it runs. The verifier imposes
a set of constraints on the program that prevent the code from exiting
the sandbox. This security comes at a relatively small performance
price, with NaCl code generally running at about 95% the speed of
equivalent compiled code.
------
so, it leaves me clueless, how it can escape the sandbox if you are
intercepting all system calls.
And what level of safety will give you a static analyzis, if your NaCl
could be a Virtual Machine with JIT - so potentially it could generate
and run arbitrary native code. Or can download the code from internet
and then execute it.
Try [1] instead of a marketing summary. What they do is severely constrain
the x86 execution model and limit interaction with OS interfaces.

Is it 100% safe? Probably not.

[1] http://www.chromium.org/nativeclient/reference/research-papers
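One of the constraints the verifier imposes is that control transfers may only land on aligned "bundle" starts, so code cannot jump into the middle of an already-checked instruction sequence. A toy illustration of that idea follows, over a made-up instruction list rather than real x86 (the 32-byte bundle size matches NaCl's, but everything else here is hypothetical):

```javascript
// Toy NaCl-style check: every jump target must be a bundle boundary,
// so a scanned instruction stream can't be re-entered mid-instruction.
const BUNDLE = 32;

function validate(instrs) {
  return instrs.every(i => i.op !== 'jmp' || i.target % BUNDLE === 0);
}

console.log(validate([{ op: 'add' }, { op: 'jmp', target: 64 }])); // true
console.log(validate([{ op: 'jmp', target: 5 }]));                 // false
```

the real validator also forbids instructions that could subvert the checks (e.g. writes to segment state), which is why JITs need special support inside the sandbox.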
Jakob Praher
2011-07-25 17:12:43 UTC
Permalink
Dear Alan,
Dear List,

the following very recent announcement might be of interest to this
discussion:
http://groups.google.com/group/mozilla.dev.platform/browse_thread/thread/7668a9d46a43e482

To quote Andreas et al.:

"Mozilla believes that the web can displace proprietary,
single-vendor stacks for application development. To make open web
technologies a better basis for future applications on mobile and
desktop alike, we need to keep pushing the envelope of the web to
include --- and in places exceed --- the capabilities of the
competing stacks in question. "

Though there is not much there yet (just a kind of manifesto and a
readme file on github) https://github.com/andreasgal/B2G, I think this
is an encouraging development. As the web becomes more and more a
walled garden of giants, I think we desperately need open APIs. Strong,
open client APIs will hopefully bring more power to individuals. What
do you think?

Cheers,
-- Jakob
Post by Alan Kay
Hi Marcel
I think I've already said a bit about the Web on this list -- mostly
about the complete misunderstanding of the situation the web and
browser designers had.
All the systems principles needed for a good design were already
extant, but I don't think they were known to the designers, even
though many of them were embedded in the actual computers and
operating systems they used.
The simplest way to see what I'm talking about is to notice the
many-many things that could be done on a personal computer/workstation
that couldn't be done in the web & browser running on the very same
personal computer/workstation. There was never any good reason for
these differences.
Another way to look at this is from the point of view of "separation
of concerns". A big question in any system is "how much does 'Part A'
have to know about 'Part B' (and vice versa) in order to make things
happen?" The web and browser designs fail on this really badly, and
have forced set after set of weak conventions into larger and larger,
but still weak browsers and, worse, onto zillions of web pages on the
net.
Basically, one of the main parts of good systems design is to try to
find ways to finesse safe actions without having to know much. So --
for example -- Squeak runs everywhere because it can carry all of its
own resources with it, and the OS processes/address-spaces allow it to
run safely, but do not have to know anything about Squeak to run it.
Similarly Squeak does not have to know much to run on every machine -
just how to get events, a display buffer, and to map its file
conventions onto the local ones. On a bare machine, Squeak *is* the
OS, etc. So much for old ideas from the 70s!
The main idea here is that a windowing 2.5 D UI can compose views from
many sources into a "page". The sources can be opaque because they can
even do their own rendering if needed. Since the sources can run in
protected address-spaces their actions can be confined, and "we" the
mini-OS running all this do not have to know anything about them. This
is how apps work on personal computers, and there is no reason why
things shouldn't work this way when the address-spaces come from other
parts of the net. There would then be no difference between "local"
and "global" apps.
Since parts of the address spaces can be externalized, indexing as
rich (and richer) to what we have now still can be done.
And so forth.
The Native Client part of Chrome finally allows what should have been
done in the first place (we are now about 20+ years after the first
web proposals by Berners-Lee). However, this approach will need to be
adopted by most of the already existing multiple browsers before it
can really be used in a practical way in the world of personal
computing -- and there are signs that there is not a lot of agreement
or understanding why this would be a good thing.
The sad and odd thing is that so many people in the computer field
were so lacking in "systems consciousness" that they couldn't see
this, and failed to complain mightily as the web was being set up and
a really painful genie was being let out of the bottle.
As Kurt Vonnegut used to say "And so it goes".
Cheers,
Alan
------------------------------------------------------------------------
*Sent:* Sun, July 24, 2011 5:39:26 AM
*Subject:* Re: [fonc] Alan Kay talk at HPI in Potsdam
Hi Alan,
as usual, it was inspiring talking to your colleagues and hearing you
speak at Potsdam. I think I finally got the Model-T image, which
resonated with my fondness for Objective-C: a language that a 17 year
old with no experience with compilers or runtimes can implement and
that manages to boil down dynamic OO/messaging to a single special
function can't be all bad :-)
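That "single special function" can be sketched in a dozen lines (a toy in Python with dictionary-based classes; Objective-C's actual objc_msgSend is hand-tuned assembly, but the dispatch logic is essentially this):

```python
# Toy sketch: all of dynamic OO boils down to one dispatch function --
# look up the selector in the receiver's class, walking up superclasses,
# then apply the found method to the receiver and arguments.

def send(receiver, selector, *args):
    cls = receiver["isa"]
    while cls is not None:
        method = cls["methods"].get(selector)
        if method is not None:
            return method(receiver, *args)
        cls = cls["super"]
    raise AttributeError("does not respond to " + selector)

# Classes and objects are just dictionaries here.
Object = {"super": None, "methods": {"description": lambda self: "an object"}}
Point = {
    "super": Object,
    "methods": {"magnitude": lambda self: (self["x"] ** 2 + self["y"] ** 2) ** 0.5},
}
p = {"isa": Point, "x": 3, "y": 4}

send(p, "magnitude")     # 5.0
send(p, "description")   # inherited from Object: "an object"
```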
There was one question I had on the scaling issue that would not have
fitted in the Q&A: while praising the design of the Internet, you
spoke less well of the World Wide Web, which surprised me a bit. Can
you elaborate?
Thanks,
Marcel
Post by Alan Kay
To All,
This wound up being a talk to several hundred students, so most of
the content is about "ways to think about things", with just a little
about scaling and STEPS at the end.
Cheers,
Alan
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Jakob Praher
2011-07-25 17:13:08 UTC
Permalink
Dear Alan,
Dear List,

the following very recent announcement might be of interest to this
discussion:
http://groups.google.com/group/mozilla.dev.platform/browse_thread/thread/7668a9d46a43e482

To quote Andreas et al.:

"Mozilla believes that the web can displace proprietary,
single-vendor stacks for application development. To make open web
technologies a better basis for future applications on mobile and
desktop alike, we need to keep pushing the envelope of the web to
include --- and in places exceed --- the capabilities of the
competing stacks in question. "

Though there is not much there yet (just a kind of manifesto and a
readme file on GitHub, https://github.com/andreasgal/B2G), I think this
is an encouraging development. As the web becomes more and more a walled
garden of giants, we desperately need open APIs. Strong open client APIs
hopefully bring more power to individuals. What do you think?

Cheers,
-- Jakob
Bert Freudenberg
2011-07-25 19:35:33 UTC
Permalink
I did ask in that thread about exposing the CPU, a la NativeClient. (It's a usenet group, so you can post without subscribing -- nice.)
Short answer is that they don't see a need for it.
- Bert -
Jakob Praher
2011-07-25 20:59:03 UTC
Permalink
Post by Bert Freudenberg
I did ask in that thread about exposing the CPU, a la NativeClient. (It's a usenet group so you can post without subscribing, nice)
Short answer is that they don't see a need for it.
I somehow have mixed feelings about NaCl. I think that safe execution of
native code is a great achievement. Yet the current implementation
still feels a bit like a safer reincarnation of the ActiveX
technology. It defines a kind of abstract toolkit (much as ActiveX used
the WIN32 API) that lets you interact with the user in a fixed set of
ways (graphics, audio, events).
I think it fails to achieve a common low-level representation of data
that can be safely used to compose powerful applications.
Alan Kay
2011-07-26 03:23:15 UTC
Permalink
I agree there are better ways to do things than NaCl, but Yoshiki was able to
get Squeak running in it, and that was a milestone benchmark that points the way
for better systems than Squeak.

Cheers,

Alan




Alan Kay
2011-07-26 03:15:47 UTC
Permalink
I think this is the big problem -- the various "theys" over the years "don't see the need for it".




________________________________
From: Bert Freudenberg <***@freudenbergs.de>
To: Fundamentals of New Computing <***@vpri.org>
Sent: Mon, July 25, 2011 12:35:33 PM
Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam
Post by Jakob Praher
Dear Alan,
Dear List,
http://groups.google.com/group/mozilla.dev.platform/browse_thread/thread/7668a9d46a43e482
"Mozilla believes that the web can displace proprietary, single-vendor stacks
for application development. To make open web technologies a better basis for
future applications on mobile and desktop alike, we need to keep pushing the
envelope of the web to include --- and in places exceed --- the capabilities of
the competing stacks in question. "
Though there is not much there yet (just a kind of manifesto and a readme file
on github) https://github.com/andreasgal/B2G, I think this is a encouragning
development, as the web becomes more and more a walled garden of giants, I think
we desperately need to have open APIs. Strong open client APIs hopefully bring
more power to individuals. What do you think?
Cheers,
-- Jakob
I did ask in that thread about exposing the CPU, a la NativeClient. (It's a
usenet group so you can post without subscribing, nice)

Short answer is that they don't see a need for it.

- Bert -
Marcel Weiher
2011-07-25 17:49:26 UTC
Permalink
Hi Alan,
thanks for elaborating. I guess I had seen some of those criticisms before but had not connected them with actual design flaws.
While there is certainly room for criticism, it seems hard to dispute that the WWW is a remarkably successful artifact, a vast and unprecedented global distributed computing and document engine that actually works at a scale of 1,000,000,000 : 1. Just in terms of scaling, that seems pretty good to me and worthy of study. And it manages to work where previous efforts by Really Smart People™ utterly failed, which is also interesting. Last but not least, the fact that it contradicts our theories, which say that it is put together "wrong", makes it not less but more interesting, at least to the scientist/empiricist in me.


More specifically, some of your criticisms seem a bit unfair to me. For example, TBL was trying to build a document distribution system for scientists, not the ultimate distributed computing platform. That part happened sort of by accident. While a document is a special case of an app, the fact is that this view does make a lot of things much more complicated (I am currently adopting that approach for tablet educational content), and limiting the design in the way he did seems entirely appropriate for the desired goal. Being based on the NeXT text system, the original WWW app also naturally included authoring as well as viewing. The fact that later browsers omitted this feature can't really be laid on the doorstep of the original design, and in fact seems to be more a reflection of a human condition than a technical one: our wishes notwithstanding, people consume much more than they author. Even in forums where "authoring" is as trivial as typing text and hitting return the ratio is in the range of 10:1 to 100:1 in favor of consumption. And even those consumption/creation ratios typically have an unfavorable signal-to-noise ratio. Creation being harder and rarer than consumption is not (no longer?) primarily a technical problem.

In terms of dynamic content in a browser, we already have tons of Web 2.0 apps, Fabrice Bellard has shown that we can run Linux on JavaScript ( http://bellard.org/jslinux/index.html ), and another site runs Win XP in a Java sandbox ( http://jpc2.com/ ). Google Native Client may give us a bit of a performance boost, but I don't see it bringing anything fundamentally new to the table that will drastically change the overall situation.


I am also not sure wether taking a single-address-space model of computing and extending it to internet-scale is the right direction to take. Apart from the well-documented pitfalls of this approach, one of the big lessons I *thought* I had learned from you (i.e. your writings) was to take something large (computers in a network exchanging messages) and scale it down (objects and messages in a single computer), rather than take something small (CPU, instructions) and attempt to scale it up (ADTs, RPCs, …). Of course, it is likely that I misunderstood. Before the WWW, we really didn't have a large (global-scale) distributed system to scale down, just our ideas and analogies (cells from biology being one example) of what such a system might look like. Well, now that we actually have a 1E9 : 1 scale system to look at and scale down, and it turns out that it looks a bit different than we thought it would. This seems like a good thing to me (see above), because it means we have an opportunity for learning.

So maybe it is true that there should be no difference between our "local" and our "global" apps, but instead of making our global apps look like our local ones, a more fruitful approach could be to make our local apps look like our global ones.

There are a bunch of features that appear worthwhile to me, the first one being the fact that we can hide computation behind a "static document" interface. So when I type in a URI: http://www.vpri.org/ the interface I am using treats the resource as just a thing that I have referenced using a single name. Underneath, a lot of messages are exchanged in various forms to make this happen, but this is hidden. Furthermore, the endpoint that the name is resolved to may be a static resource or a program that generates the resource dynamically, I have no way of finding out. That's pretty decoupled! If you believe that the "Rule of Least Expressiveness" stated by Roy/Haridi ("When programming a component, the right computation model for the component is the least expressive model that results in a natural program") is a good thing, which I do, then this is a powerful feature.
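That decoupling is easy to demonstrate. In the sketch below (a hypothetical toy, not vpri.org's actual setup; handler names and the routing table are invented), a "static" resource and a computed one answer the same GET identically, and the client has no way to tell them apart:

```python
# Sketch: behind one URI interface, a static resource and a
# dynamically generated one are observably indistinguishable.

STATIC_PAGE = "<h1>Welcome</h1>"

def static_handler(request):
    return STATIC_PAGE                            # read from "disk"

def dynamic_handler(request):
    return "<h1>" + "Welcome".strip() + "</h1>"   # computed on the fly

def serve(routes, uri):
    # The client sees only (uri -> representation); everything
    # behind the arrow is hidden by the uniform interface.
    return routes[uri]({"uri": uri})

routes = {"/a": static_handler, "/b": dynamic_handler}
serve(routes, "/a") == serve(routes, "/b")   # True: identical to the client
```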

It also has many powerful and serendipitous consequences. For example, when I helped build a CMS in a largely RESTful style (we weren't aware of the term or the architecture, just like we weren't aware of eXtreme Programming, we just thought it was a good idea to build it that way), not only were we almost totally resilient against crashes (system updates between mouse-clicks!), our users were able to configure their UI themselves by saving bookmarks to parts of the program they needed to access frequently. Documentation could easily become active by embedding links to the live-system right in the help-files describing the functionality. With the adoption of a "cooler" dynamic JavaScript / Web 2.0 interface that is more like a traditional app (Squeak or otherwise), these capabilities were lost. I personally find dynamic sites (dynamic on the client, so JavaScript, Flash, Java) less usable/useful than static ones.


Back to useful features of the REST model, having the "what can I do next" information embedded in the answer to my request ("Hypertext as the carrier of application state") also seems to be a powerful way of really, really, really late binding APIs. Pushing content negotiation into the infrastructure makes things less brittle by allowing multiple users to have different views of the same resource without having to pollute the model. Clearly separating simple/idempotent GET and PUT requests from rarer/more complex POSTs not only enables caching and scalability on a global scale, but also seems like a good way of separating basic CRUD tasks, which just won't go away, from the semantically richer, intensional messages that should be at the heart of good OO design. No more accessor messages, let the URIs take care of that and make the messaging interface intensional :)
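The "hypertext as the carrier of application state" idea can be made concrete in a few lines (a sketch; the resource names and link relations below are invented for illustration). Each response embeds the links the client may follow next, so the client hard-codes nothing beyond the entry point and the *names* of transitions:

```python
# Sketch of hypermedia as the carrier of application state:
# each representation embeds its own follow-up links, so the
# API is bound as late as possible.

RESOURCES = {
    "/orders/1": {
        "state": "unpaid",
        "links": {"pay": "/orders/1/payment", "cancel": "/orders/1/cancel"},
    },
    "/orders/1/payment": {"state": "paid", "links": {}},
}

def get(uri):
    return RESOURCES[uri]

# The client knows the entry point and the transition name "pay",
# never the URI behind it.
order = get("/orders/1")
next_uri = order["links"]["pay"]      # discovered, not hard-coded
receipt = get(next_uri)
receipt["state"]                      # "paid"
```

If the server later moves the payment resource, only the embedded link changes; the client code above keeps working.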


Anyway: while thinking and working on software architecture and what it might mean for the next steps in programming (http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/024942.html) the web just sort of happened, and the fact that most of the components are cobbled together in Perl ( http://xkcd.com/224/ ) and thus rather ungainly caused me to fail to realize that the way the pieces fit together, the interstitial aspects ("ma"?) were actually rather profound, often not for what they put in but rather for what they left out to make things work on a global scale.

That's why your slightly off-the-cuff remark startled me and made me ask. Thanks again for clarifying, and I hope my response is a positive contribution.

Just my 2 €-¢ and worth every single one of the two :-)

Marcel
David Barbour
2011-07-25 19:29:06 UTC
Permalink
Post by Alan Kay
The main idea here is that a windowing 2.5 D UI can compose views from many
sources into a "page". The sources can be opaque because they can even do
their own rendering if needed. Since the sources can run in protected
address-spaces their actions can be confined, and "we" the mini-OS running
all this do not have to know anything about them.
This idea of 'opaque' applications in a sandbox may result in a flexible UI,
but not an especially composable or accessible one.

Consider the following desiderata:
* Accessibility - for screen-readers, search engines, language translators.
* Zoomability - we should be able to constrain client-side resource
consumption (CPU, bandwidth, memory) such that it is commensurate with user
attention (as measured in terms of screen real-estate for visuals, or volume
for audibles).
* Service mashups and customization - Greasemonkey scripts and extensions
that modify an app require it to have a clear structure.
* Occasionally connected computing - links fail, power isn't always up, we
might persist an app we aren't zoomed on at the moment.
* Mobility - an app should be able to follow users from one computer to
another.
* Bookmarking, Sharing, CSCW - users should be able to share access to
specific elements of their applications.
* Optimization - the later we perform optimizations, the more we can
achieve, especially if the language is designed for it. Access to the
underlying code can allow us to achieve higher levels of specialization.



Post by Alan Kay
This is how apps work on personal computers, and there is no reason why
things shouldn't work this way when the address-spaces come from other parts
of the net.
Except, being opaque is also part of how apps consistently *fail their users* on personal computers. I.e. we should not be striving for apps as they exist on personal computers. Something better is needed.

I agree that NaCl for Chromium is a promising technology. Though, I wonder
if hardware virtualization might be more widely accepted.

But the promise I see for NaCl isn't just rendering flexible apps to screen;
rather, I see potential for upgrading the basic web abstractions, breaking
away from HTTP+HTML+DOM+JS+CSS for something more robust and consistent (I'm
especially interested in a better DOM and communications protocol for live
documents). I.e. it would be easy to create 'portal' sites that effectively
grant access to a new Internet. This offers a 'gradual transition' strategy
that is not accessible today.
Post by Alan Kay
this [NaCl] approach will need to be adopted by most of the already
existing multiple browsers before it can really be used in a practical way
in the world of personal computing
I agree that this is a problem for apps that can be easily achieved in JS.
(And the domain of such applications will grow, given WebGL and WebCL and
improved quality for SVG.)

But for the applications where JS and WebGL are insufficient, mandating
Chrome as a platform/VM for an application should rarely be 'worse' than
providing your own executable and installer.
Post by Alan Kay
The sad and odd thing is that so many people in the computer field were so
lacking in "systems consciousness" that they couldn't see this
What's with the past tense?

Regards,

Dave
Alan Kay
2011-07-26 03:13:52 UTC
Permalink
There are several good points here ... and of course I don't mean that the
"software machines" that can use arbitrary hardware as caches have to or should
be totally opaque -- or that the poor integration on personal computers today
should be followed slavishly.

For example, we at PARC (or now in STEPS) did not have applications per se, but
what would be called today "mashups of useful objects" -- (this is what *real
objects* allow ...)

I think you can see just how your desiderata can be served by sending and
receiving *real objects* rather than just data structures or (now) adding
programs in one chosen language.

I hope my main point is not being lost here -- which is that good OS and
language design is partly about being able to recognize that there are too many
degrees of freedom and too many ideas and implementers to be able to serve all
their needs. Long ago in my thesis I took the point of view that if one were
going to make a computer system for the Princeton Institute for Advanced Study,
one should make an extensible system that the Institute members could shape in
the directions they needed (this because it would be effectively impossible for
the computer scientists on the outside to meet those needs -- so personal
computing would require widespread higher-level interdisciplinary system making,
and it was the job of the computerists to provide the extensible structures that
the domain experts could make use of).
In the case of the web, we have the irony that it is the deep computerists (who
can make their own systems from scratch) who are being shut out. JavaScript
could act as "an Alto" if some of the techniques used in Squeak (i.e. something
like SLANG) were added. But this is not the case at this point.
Best wishes,
Alan




________________________________
From: David Barbour <***@gmail.com>
To: Fundamentals of New Computing <***@vpri.org>
Sent: Mon, July 25, 2011 12:29:06 PM
Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam


On Sun, Jul 24, 2011 at 10:24 AM, Alan Kay <***@yahoo.com> wrote:

The main idea here is that a windowing 2.5 D UI can compose views from many
sources into a "page". The sources can be opaque because they can even do their
own rendering if needed. Since the sources can run in protected address-spaces
their actions can be confined, and "we" the mini-OS running all this do not have
to know anything about them.


This idea of 'opaque' applications in a sandbox may result in a flexible UI, but
not an especially composable or accessible one.

Consider the following desiderata:
* Accessibility - for screen-readers, search engines, language translators.
* Zoomability - we should be able to constrain client-side resource consumption
(CPU, bandwidth, memory) such that it is commensurate with user attention (as
measured in terms of screen real-estate for visuals, or volume for audibles).
* Service mashups and customization - grease-monkey scripts and extensions that
modify an app require it have a clear structure.
* Occasionally connected computing - links fail, power isn't always up, we might
persist an app we aren't zoomed on at the moment.
* Mobility - an app should be able to follow users from one computer to another.

* Bookmarking, Sharing, CSCW - users should be able to share access to specific
elements of their applications.
* Optimization - the later we perform optimizations, the more we can achieve,
especially if the language is designed for it. Access to the underlying code can
allow us to achieve higher levels of specialization.


This is how apps work on personal computers, and there is no reason why things
shouldn't work this way when the address-spaces come from other parts of the
net.

Except, being opaque is also part of how apps consistently fail their users on
personal computers. I.e. we should not be striving for apps as they exist on
personal computers. Something better is needed.

I agree that NaCl for Chromium is a promising technology. Though, I wonder if
hardware virtualization might be more widely accepted.

But the promise I see for NaCl isn't just rendering flexible apps to screen;
rather, I see potential for upgrading the basic web abstractions, breaking away
from HTTP+HTML+DOM+JS+CSS for something more robust and consistent (I'm
especially interested in a better DOM and communications protocol for live
documents). I.e. it would be easy to create 'portal' sites that effectively
grant access to a new Internet. This offers a 'gradual transition' strategy that
is not accessible today.
this [NaCl] approach will need to be adopted by most of the already existing
multiple browsers before it can really be used in a practical way in the world
of personal computing
I agree that this is a problem for apps that can be easily achieved in JS. (And
the domain of such applications will grow, given WebGL and WebCL and improved
quality for SVG.)

But for the applications where JS and WebGL are insufficient, mandating Chrome
as a platform/VM for an application should rarely be 'worse' than providing your
own executable and installer.
The sad and odd thing is that so many people in the computer field were so
lacking in "systems consciousness" that they couldn't see this
What's with the past tense?

Regards,

Dave
John Nilsson
2011-07-26 10:45:52 UTC
Permalink
Regarding languages, it is refreshing to see a well-researched language
like Scala gain so much popularity. I would say that Scala is in a very good
position to maybe even replace Java as the language of choice.

BR,
John

P.S. Scala's parser combinator library provides a language very similar to OMeta.
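The combinator idea itself fits in a few lines. This sketch is in Python rather than Scala and mimics only the spirit of such libraries (and of OMeta), not their actual APIs: a parser is a function from input text to a (value, rest) pair, or None on failure, and combinators build bigger parsers from smaller ones.

```python
# Minimal parser combinators: lit matches a literal string,
# seq runs two parsers in sequence, alt tries alternatives.

def lit(s):
    def p(text):
        return (s, text[len(s):]) if text.startswith(s) else None
    return p

def seq(p1, p2):
    def p(text):
        r1 = p1(text)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = p2(rest)
        if r2 is None:
            return None
        v2, rest = r2
        return ((v1, v2), rest)
    return p

def alt(p1, p2):
    def p(text):
        return p1(text) or p2(text)
    return p

# greeting ::= ("hello" | "hi") " world"
greeting = seq(alt(lit("hello"), lit("hi")), lit(" world"))

greeting("hi world")     # (('hi', ' world'), '')
greeting("bye world")    # None
```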

Sent from my phone
On 26 Jul 2011 04:23, Igor Stasenko <***@gmail.com> wrote: