Discussion:
[fonc] Alternative Web programming models?
Cornelius Toole
2011-05-26 18:38:56 UTC
Permalink
All,
A criticism by Dr. Kay has really stuck with me. I can't remember the
specific criticism or where it's from, but I recall it being about how
wrong the web programming model is. I imagine he was referring to how
disjointed and resource-inefficient it is, and how it exposes only a fraction
of the power and capability inherent in the average personal computer.

So Alan, anyone else,
what's wrong with the web programming model and application architecture?
What programming model would work for a global-scale hypermedia system? What
prior research or commercial systems have any of these properties?

The web is about the closest we've seen to a ubiquitous deployment platform
for software, but the confluence of market forces and technical realities
endangers that ubiquity, because users want the full power of their devices
plus the availability of Internet connectivity.

-Cornelius
--
cornelius toole, jr. | ***@tigers.lsu.edu | mobile: 601.212.3045
Michael Forster
2011-05-26 19:24:08 UTC
Permalink
On Thu, May 26, 2011 at 1:38 PM, Cornelius Toole
Post by Cornelius Toole
[...]


I, too, was thinking of his comments on the structure and development
of the Internet vs. more recent things like the web, while reading
this interview with Erlang inventors Joe Armstrong and Robert Virding.

http://www.infoq.com/interviews/armstrong-virding-erlang-future

I wonder if Dr. Kay agrees with Joe's sentiments about the web first
ignoring one of the basic aspects of TCP/IP only to reinvent it
(poorly) with more crud piled on top.

Mike
Merik Voswinkel
2011-05-26 21:53:21 UTC
Permalink
Dr Alan Kay addressed the html design a number of times in his
lectures and keynotes. Here are two:

[1] Alan Kay, How Complex is "Personal Computing"? / "Normal"
Considered Harmful. October 22, 2009, Computer Science department at
UIUC.
http://media.cs.uiuc.edu/seminars/StateFarm-Kay-2009-10-22b.asx
(also see http://www.smalltalk.org.br/movies/ )

[2] Alan Kay, "The Computer Revolution Hasn't Happened Yet", October
7, 1997, OOPSLA'97 Keynote.
Transcript http://blog.moryton.net/2007/12/computer-revolution-hasnt-happened-yet.html
Video http://ftp.squeak.org/Media/AlanKay/Alan%20Kay%20at%20OOPSLA%201997%20-%20The%20computer%20revolution%20hasnt%20happened%20yet.avi
(also see http://www.smalltalk.org.br/movies/ )

Merik
Post by Cornelius Toole
[...]
--
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Cornelius Toole
2011-05-31 14:16:20 UTC
Permalink
Thanks Merik,

I've read/watched the OOPSLA'97 keynote before, but hadn't seen the first
video.
I'm having problems with the first one (the talk at UIUC). Has anyone been
able to watch past the first hour? I get to the point where Alex speaks
and it freezes.

I've just recently read Roy Fielding's dissertation on the architecture of
the Web. Two prominent features of web architecture are (1) the
client-server hierarchical style and (2) the layering abstraction style. My
takeaway from that is how all of the abstraction layers of the web software
stack get in the way of applications that want to use the machine. Style
1 is counter to the notion of the 'no centers' principle and is very
limiting when you consider different classes of applications that might
involve many entities with ill-defined relationships. Style 2 provides for
separation of concerns and supports integration with legacy systems, but
incurs a great deal of overhead in terms of structural complexity and
performance. I think the stuff about WebSockets, and what was discussed in
the Erlang interview that Michael linked to in the first reply, is relevant
here. The web was designed for large-grain interaction between entities, but
many application domain problems don't map to that. Some people just want
pipes or channels to exchange messages for fine-grained interactions, but
the layer cake doesn't allow it. This is where you get the feeling that the
architecture for rich web apps is no architecture, just piling big stones
atop one another.
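To make the contrast concrete, here's a small sketch (Python, purely
illustrative; none of these names come from any web API) of the pipe/channel
style of fine-grained interaction that Erlang gives you natively: two
lightweight processes exchanging many small messages over one long-lived
channel, with no per-message connection setup or request/response framing.

```python
import threading
import queue

def ping(out_ch, in_ch, n):
    # Fine-grained interaction: n small messages over one long-lived
    # channel; no per-message connection setup, headers, or polling.
    for i in range(n):
        out_ch.put(("ping", i))
        assert in_ch.get() == ("pong", i)
    out_ch.put(("done", n))

def pong(in_ch, out_ch):
    # The peer replies to each message as it arrives.
    while True:
        tag, i = in_ch.get()
        if tag == "done":
            return i
        out_ch.put(("pong", i))

a_to_b, b_to_a = queue.Queue(), queue.Queue()
t = threading.Thread(target=ping, args=(a_to_b, b_to_a, 1000))
t.start()
total = pong(a_to_b, b_to_a)
t.join()
print(total)  # 1000 round trips over a single channel
```

Mapping this onto HTTP would mean a request/response cycle (or a hack like
long polling) for every one of those thousand messages, which is roughly the
overhead complaint above.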

I think it would be very interesting for someone to take the same approach
to network-based applications as Gezira did with graphics (or the STEPS
project in general), as far as assessing what's needed in a modern
Internet-scale hypermedia architecture.
Post by Merik Voswinkel
[...]
--
cornelius toole, jr. | ***@tigers.lsu.edu | mobile: 601.212.3045
Alan Kay
2011-05-31 14:30:11 UTC
Permalink
Hi Cornelius

There are lots of egregiously wrong things in the web design. Perhaps one of
the simplest is that the browser folks have lacked the perspective to see
that the browser is not like an application, but like an OS; i.e., what it
really needs to do is to take in and run foreign code (including low-level
code) safely and coordinate outputs to the screen. (Google is just starting
to realize this with NaCl, after much prodding and beating.)

I think everyone can see the implications of these two perspectives and what
they enable or block.

Cheers,

Alan




Post by Cornelius Toole
[...]
David Harris
2011-05-31 15:47:39 UTC
Permalink
Didn't this debate happen with windowing systems (e.g., X vs. NeWS, dumb vs.
smart window servers)?

David
Post by Alan Kay
[...]
Alan Kay
2011-05-31 16:29:35 UTC
Permalink
Sure, and much earlier too ... it perhaps goes all the way back to Licklider's
1963 memo about "The Intergalactic Network", where he meant not only "big" but
"(inter)communicating with aliens" (in this case, alien code).

Once you have a network of heterogeneous machines, one POV leads to the idea of
using them as caches for computations made from protected processes that are
loosely coupled via some form of messaging. The software in each machine handles
just a few things having to do with resource allocation/sharing and network
connections, etc. Everything else is done by the "floating processes".
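One toy way to picture those "floating processes" (an illustrative sketch
only, not how LOCUS actually worked): represent a computation as explicit,
self-contained state, so that any machine holding the message can execute
the next step, and the process can migrate mid-computation simply by being
forwarded.

```python
import queue

def sum_step(state):
    # One unit of work. The state dict is self-contained, so the
    # process can "float": whichever machine holds it runs the step.
    i, total, limit = state["i"], state["total"], state["limit"]
    if i > limit:
        return ("done", total)
    return ("more", {"i": i + 1, "total": total + i, "limit": limit})

# Two "machines" modeled as inboxes; each acts as a cache for the
# computation, which hops between them via loosely coupled messaging.
machines = [queue.Queue(), queue.Queue()]
machines[0].put({"i": 1, "total": 0, "limit": 100})

hop, result = 0, None
while result is None:
    state = machines[hop % 2].get()
    tag, out = sum_step(state)
    if tag == "done":
        result = out
    else:
        machines[(hop + 1) % 2].put(out)  # migrate mid-computation
    hop += 1

print(result)  # 5050 -- the answer is independent of where each step ran
```

The point of the sketch is that the machines supply only scheduling and
transport; everything specific to the computation travels with the process.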


X is less of a good example than perhaps Gerry Popek's LOCUS system in the
early 80s, which was a kind of distributed, networked, heterogeneous Unix with
floating processes that could dynamically migrate while computing. There is a
good book by Popek from MIT Press ...

Basic idea here is to allow both vanilla programmers and trailblazers to be able
to do what they do best on the same system.

Cheers,

Alan




Post by David Harris
[...]
Cornelius Toole
2011-05-31 21:00:08 UTC
Permalink
Thanks for the pointers, Alan. The LOCUS stuff looks interesting.

WRT the web browser as OS, not application: you'd think Google would've
pushed ChromeOS further in that direction. I will say that modern browsers
consume memory like they're full-blown OSes (Chrome is using about 922MB of
memory on my machine). Maybe that's on their roadmap, but it won't be
plausible until NaCl is more mature. ChromeOS won't be interesting to me as
a platform unless it can enable web media to shine in a way that it couldn't
as just something in the browser on traditional PC OSes (e.g., WebGL
performance on ChromeOS should approach the accelerated graphics performance
of a native OpenGL program).

I think what we're facing is people trying to make the web model subsume
all (or too much) of computing, which then limits the applications that can
be built on that model. Is it possible to design and deploy an architecture
that supersets the web architecture? Suppose, for instance, that
client-server or point-to-point communication were a special case supported
by some peer-to-peer architecture. Maybe the DOM could be an instance of a
more expressive, compact presentation model and API.
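A minimal sketch of that special-case idea (Python; all names here are
invented for illustration, not from any real protocol): if every node is a
symmetric peer that can both serve and request, then "client-server" falls
out as the degenerate configuration where one node registers no handlers and
the other never initiates.

```python
class Peer:
    """Symmetric node: every peer can both serve and request.
    Client-server is the special case where one peer registers no
    handlers and the other never initiates requests."""

    def __init__(self, name):
        self.name = name
        self.handlers = {}

    def handle(self, topic, fn):
        # Register a service this peer offers.
        self.handlers[topic] = fn

    def request(self, other, topic, payload):
        # Point-to-point message; in a real system this would be a
        # network hop rather than a direct call.
        return other.handlers[topic](payload)

# Peer-to-peer: both nodes serve and both request.
a, b = Peer("a"), Peer("b")
a.handle("double", lambda x: 2 * x)
b.handle("square", lambda x: x * x)
print(a.request(b, "square", 3))   # 9
print(b.request(a, "double", 5))   # 10

# Client-server as the degenerate case: one serves, one only requests.
c, d = Peer("server"), Peer("client")
c.handle("greet", lambda who: f"hello, {who}")
print(d.request(c, "greet", "web"))  # hello, web
```

Nothing in the symmetric model had to change to recover the web's shape; the
asymmetry is a policy choice, not an architectural one.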

I'm just wondering how to get from where we are to someplace better, but
want to consult those who may know of good maps already.

-Cornelius
Post by Alan Kay
[...]
--
cornelius toole, jr. | ***@tigers.lsu.edu | mobile: 601.212.3045
Frederick Grose
2011-06-01 01:40:18 UTC
Permalink
Post by Alan Kay
Sure, and much earlier too ... perhaps goes all the way back to Licklider's
1963 memo about "The Intergalactic Network", where he not only meant "big",
but "(inter) communicating with aliens" (in this case alien code).
We've reserved a release name for this in Sugar:
http://wiki.sugarlabs.org/go/Taxonomy#Galactose:_a_future_Sugar_base_designed_for_alternate_computing_forms
:) --Fred

Once you have a network of heterogeneous machines, one POV leads to the idea
Post by Alan Kay
of using them as caches for computations made from protected processes that
are loosely coupled via some form of messaging. The software in each machine
handles just a few things having to do with resource allocation/sharing and
network connections, etc. Everything else is done by the "floating
processes".
X is less of a good example than perhaps Gerry Popek's LOCUS system in the
early 80s, which was a kind of distributed networked heterogeneous Unix with
floating processes which could dynamically migrate while computing. There is
a good book by Popek from MIT Press ...
Basic idea here is to allow both vanilla programmers and trailblazers to be
able to do what they do best on the same system.
Cheers,
Alan
------------------------------
*Sent:* Tue, May 31, 2011 8:47:39 AM
*Subject:* Re: [fonc] Alternative Web programming models?
Didn't this debate happen with windowing systems (eg X vs NeWS, dumb vs
smart windows-server).
David
Post by Alan Kay
Hi Cornelius
There are lots of egregiously wrong things in the web design. Perhaps one
of the simplest is that the browser folks have lacked the perspective to see
that the browser is not like an application, but like an OS. i.e. what it
really needs to do is to take in and run foreign code (including low level
code) safely and coordinate outputs to the screen (Google is just starting
to realize this with NaCl after much prodding and beating.)
I think everyone can see the implications of these two perspectives and
what they enable or block
Cheers,
Alan
------------------------------
*Sent:* Tue, May 31, 2011 7:16:20 AM
*Subject:* Re: [fonc] Alternative Web programming models?
Thanks Merik,
I've read/watch the OOPSLA'97 keynote before, but hadn't seen the first video.
I'm having problems with the first one(the talk at UIUC). Has anyone been
able to watch past the first hour. I get up to the point where Alex speaks
and it freezes.
I've just recently read Roy Fielding's dissertation on the architecture of
the Web. Two prominent features of web architecture are (1) the
client-server hierarchical style and (2) the layering abstraction style. My
takeaway from that is how all of the abstraction layers of the web software
stack get in the way of the applications that want to use the machine. Style
1 is counter to the notion of the 'no centers' principle and is very
limiting when you consider different classes of applications that might
involve many entities with ill-defined relationships. Style 2 provides for
separation of concerns and supports integration with legacy systems, but
incurs so much overhead in terms of structural complexity and performance. I
think the stuff about web sockets and what was discussed in the Erlang
interview that Michael linked to in the 1st reply is relevant here. The web
was designed for large-grained interaction between entities, but many
application domain problems don't map to that. Some people just want pipes
or channels to exchange messages for fine-grained interactions, but the
layer cake doesn't allow it. This is where you get the feeling that the
architecture for rich web apps is no architecture, just piling big stones
atop one another.
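A rough illustration of the fine-grained-messaging point (toy numbers of my own, not a benchmark): compare the bytes on the wire for a 4-byte payload sent as a minimal HTTP POST versus as a simple length-prefixed frame on an open channel.

```python
# Per-message overhead: minimal HTTP request vs. a length-prefixed frame.
import struct

payload = b"ping"

# A close-to-minimal HTTP/1.1 request carrying the payload.
http_request = (
    b"POST /msg HTTP/1.1\r\n"
    b"Host: example.org\r\n"
    b"Content-Type: application/octet-stream\r\n"
    b"Content-Length: 4\r\n"
    b"\r\n" + payload
)

# The same payload on a persistent channel: 4-byte big-endian length prefix.
framed = struct.pack("!I", len(payload)) + payload

overhead_http = len(http_request) - len(payload)
overhead_frame = len(framed) - len(payload)
print(overhead_http, overhead_frame)
```

Even before counting TCP setup, headers dominate the payload many times over at this message size; the layer cake is priced for large-grained exchanges.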
I think it would be very interesting for someone to take the same approach
to network-based applications as Gezira did with graphics (or the STEP
project in general) as far as assessing what's needed in a modern
Internet-scale hypermedia architecture.
Dr Alan Kay addressed the HTML design a number of times in his lectures
and keynotes. Here are two:
[1] Alan Kay, How Complex is "Personal Computing"? "Normal" Considered
Harmful. October 22, 2009, Computer Science department at UIUC.
http://media.cs.uiuc.edu/seminars/StateFarm-Kay-2009-10-22b.asx
(also see http://www.smalltalk.org.br/movies/ )
[2] Alan Kay, "The Computer Revolution Hasn't Happened Yet", October 7,
1997, OOPSLA'97 Keynote.
Transcript
http://blog.moryton.net/2007/12/computer-revolution-hasnt-happened-yet.html
Video
http://ftp.squeak.org/Media/AlanKay/Alan%20Kay%20at%20OOPSLA%201997%20-%20The%20computer%20revolution%20hasnt%20happened%20yet.avi
(also see http://www.smalltalk.org.br/movies/ )
Merik
All,
A criticism by Dr. Kay, has really stuck with me. I can't remember the
specific criticism and where it's from, but I recall it being about the how
wrong the web programming model is. I imagine he was referring to how
disjointed, resource inefficient it is and how it only exposes a fraction of
the power and capability inherent in the average personal computer.
So Alan, anyone else,
what's wrong with the web programming mode and application architecture?
What programming model would work for a global-scale hypermedia system? What
prior research or commercial systems have any of these properties?
The web is about the closest we've seen to a ubiquitous deployment
platform for software, but the confluence of market forces and technical
realities endanger that ubiquity because users want full power of their
devices plus the availability of Internet connectivity.
-Cornelius
--
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Jonas Pfenniger (zimbatm)
2011-06-03 17:20:01 UTC
Permalink
Relatedly, what struck me while following the HTML5 spec development
is how they decided to specify existing browser behavior and add more
to the plate. Instead, they could have tried to decompose existing
elements to a smaller subset that would be more easily documented.
Maybe not down to assembly, but CSS, HTML and JavaScript could be made
more manageable by re-defining features in terms of smaller ones.
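A toy rendering of that idea (the element names and primitives below are invented for illustration, not a real proposal): describe the "big" elements as macros over a tiny kernel, so the large surface language becomes documentation over a small core.

```python
# Kernel primitives that everything else is rewritten into.
KERNEL = {"box", "text", "style"}

# Higher-level elements defined as expansions over the kernel.
MACROS = {
    # <b>...</b> is roughly styled text.
    "b": lambda children: ("style", {"font-weight": "bold"}, children),
    # <p>...</p> is roughly a block box with vertical margins.
    "p": lambda children: ("box", {"display": "block", "margin": "1em 0"}, children),
}

def expand(node):
    """Rewrite macro elements until only kernel primitives remain."""
    if isinstance(node, tuple):
        tag, attrs, children = node
        children = [expand(c) for c in children]
        if tag in MACROS:
            return expand(MACROS[tag](children))
        return (tag, attrs, children)
    return node  # plain text is already primitive

doc = ("b", {}, ["hello"])
print(expand(doc))
```

The spec for `<b>` then shrinks to its one-line expansion, and only the kernel needs careful operational definition.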
C. Scott Ananian
2011-06-03 17:58:22 UTC
Permalink
Post by Alan Kay
There are lots of egregiously wrong things in the web design. Perhaps one of
the simplest is that the browser folks have lacked the perspective to see
that the browser is not like an application, but like an OS. i.e. what it
really needs to do is to take in and run foreign code (including low level
code) safely and coordinate outputs to the screen (Google is just starting
to realize this with NaCl after much prodding and beating.)
The web is not *only* an OS. It also provides the backing data for a
very large unstructured database. Google of course realize this, as
their company rests on a search engine. The semantic web folks have
tried in vain to get people to add more structure to the database.
What the "web OS" must do is allow the efficient export of additional
*unstructured and ad hoc* data. HTML+CSS web applications today are
moderately good at this -- the images stored in flickr (say) are still
in standard crawlable formats, and they show up in search results.
Google's Native Client (as well as prior sandbox technologies, such as
Java, etc) is *not* good at this (yet?). The graphics rendered in
NativeClient are completely invisible to search engines -- and thus
resources created in these apps are impossible to index. You can
build a web app *alongside* Native Client in order to export data
created in the sandboxed app -- but now you're just doubling the
effort.

Like it or not, the messy stew of HTML+CSS is the closest we have to a
universal GUI canvas, loaded with equally-messy semantics -- but
enough that I can take the source code for a (say) flickr or youtube
page and extract the comment text and photos/video. No rival "web
application" or "web as OS" framework (which is not itself built on
HTML+CSS) can do that.
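To make that concrete, a few lines of stdlib parsing recover such resources from a page (the markup below is a made-up stand-in for a photo-sharing page, not real flickr output):

```python
# Extract image URLs and comment text from an HTML page.
from html.parser import HTMLParser

page = """
<html><body>
  <img src="/photos/cat.jpg" alt="a cat">
  <p class="comment">Nice photo!</p>
</body></html>
"""

class Extractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = []
        self.comments = []
        self._in_comment = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img":
            self.images.append(attrs.get("src"))
        if tag == "p" and attrs.get("class") == "comment":
            self._in_comment = True

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_comment = False

    def handle_data(self, data):
        if self._in_comment and data.strip():
            self.comments.append(data.strip())

ex = Extractor()
ex.feed(page)
print(ex.images, ex.comments)
```

Nothing equivalent is possible against pixels rendered inside a NativeClient sandbox.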
--scott

ps. the closest rival to HTML is RSS/Atom -- a more limited format,
but it had its own search engines and tools (see
http://www.opensearch.org). A "web OS" which could still export its
underlying data structures/files as an indexable/sharable/browsable RSS
feed would be more competitive than a pure sandbox. Another possible
avenue is exporting from your "web OS" something close to a
"filesystem" -- ie, a list of downloadable "documents", nothing more
-- and letting the search engine "peek inside" the standard format of
the documents to construct an index, as Google can currently do with
PDF, XLS, and other common file formats. But this gives up the idea
of the 'hyperlink' -- now I know there's a binary blob somewhere with
information relevant to my search, but it comes without any code to
edit/view/collaborate. (For a little more detail on the "export as
RSS" aspect of this, you might check out "The Journal, Reloaded" at
http://cscott.net/Publications/)
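A sketch of the "export as feed" idea: any app, however opaque its UI, could emit something like this minimal RSS 2.0 document for its underlying objects (titles and URLs here are invented for illustration).

```python
# Build a minimal RSS 2.0 feed from an app's internal object list.
import xml.etree.ElementTree as ET

def to_rss(title, link, items):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for it in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = it["title"]
        ET.SubElement(item, "link").text = it["link"]
    return ET.tostring(rss, encoding="unicode")

feed = to_rss("My app's objects", "http://example.org/",
              [{"title": "doc-1", "link": "http://example.org/doc-1"}])
print(feed)
```

A crawler that knows nothing about the app itself can still index every item it publishes this way.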
--
      ( http://cscott.net )
Michael Forster
2011-06-03 18:19:17 UTC
Permalink
On Fri, Jun 3, 2011 at 12:58 PM, C. Scott Ananian <***@laptop.org> wrote:
[...]
The web is not *only* an OS.  It also provides the backing data for a
very large unstructured database.  Google of course realize this, as
their company rests on a search engine.  The semantic web folks have
tried in vain to get people to add more structure to the database.
What the "web OS" must do is allow the efficient export of additional
*unstructured and ad hoc* data.  HTML+CSS web applications today are
[...]

Sorry for the tangent, but there is no such thing as an "unstructured
database." Whether talking about the logical or physical level, a
database is a specification of data structure (and constraints upon
that data). Dr. Kay once characterised computing as a "pop culture,"
and statements such as the above reflect that.

Regards,

Mike
C. Scott Ananian
2011-06-03 20:35:45 UTC
Permalink
Post by Michael Forster
The web is not *only* an OS.  It also provides the backing data for a
very large unstructured database.  Google of course realize this, as
their company rests on a search engine.  The semantic web folks have
Sorry for the tangent, but there is no such thing as an "unstructured
database."  Whether talking about the logical or physical level, a
database is a specification of data structure (and constraints upon
that data).  Dr. Kay once characterised computing as a "pop culture,"
and statements such as the above reflect that.
I'm assuming you didn't mean to be insulting. Yes, "unstructured
database" is a bit of an oxymoron, and I intentionally used the words
in this clever way, which humans can not only interpret with ease but
often find amusing.

Of course, the web *does* have a lot of structure, but it is human
structure, not well defined with formal semantics. Like the phrase
"unstructured database", humans have no trouble understanding pages on
the web, even if the formal semantics gets hairy or even
contradictory. "There are multiple <h1> tags on this page! What
happened to my organizational hierarchy?"

By your characterization, the web also belongs to "pop culture"
computing. The web is also the most useful computing environment of
our time, and I think it unwise to dismiss it.
--scott
--
      ( http://cscott.net )
Michael Forster
2011-06-03 20:59:11 UTC
Permalink
On Fri, Jun 3, 2011 at 3:35 PM, C. Scott Ananian <***@laptop.org> wrote:
[...]
I'm assuming you didn't mean to be insulting.  Yes, "unstructured
database" is a bit of an oxymoron, and I intentionally used the words
in this clever way, which humans can not only interpret with ease but
often find amusing.
... or conflate with ease, making for easy confusion among those who
don't know better. Biologists choose, carefully, to speak of
"phenotype" or "genotype." Category theorists speak of a "monad" and
its specific laws rather than, loosely, of a "container thingy."
Neurosurgeons are immensely careful to say "postganglionic" or
"preganglionic" fiber rather than just "nerve" since the difference
can lead to paralysis. And, everyday, people in computing industry
and academia speak and reason with a sloppiness that would get a
medical intern kicked off a rotation. That's "pop culture."

Regards,

Mike
Julian Leviston
2011-06-04 17:24:21 UTC
Permalink
Post by Michael Forster
[...]
Post by C. Scott Ananian
I'm assuming you didn't mean to be insulting. Yes, "unstructured
database" is a bit of an oxymoron, and I intentionally used the words
in this clever way, which humans can not only interpret with ease but
often find amusing.
... or conflate with ease, making for easy confusion among those who
don't know better. Biologists choose, carefully, to speak of
"phenotype" or "genotype." Category theorists speak of a "monad" and
its specific laws rather than, loosely, of a "container thingy."
Neurosurgeons are immensely careful to say "postganglionic" or
"preganglionic" fiber rather than just "nerve" since the difference
can lead to paralysis. And, everyday, people in computing industry
and academia speak and reason with a sloppiness that would get a
medical intern kicked off a rotation. That's "pop culture."
Regards,
Mike
A database can be more or less RIGID, and it's fairly understandable that "structured" means this in this context. The fact that a database IS a database automatically implies that it has SOME structure. If it wasn't a database, it'd be a piece of paper, which in turn actually has a structure, too.

You're being a bit obstructionist, I fear, and I'm not entirely sure why you are being this way.

I think computing is nowadays the domain of the populace (popular), so it's inevitable that a culture that evolves around computing will gain traction as being fairly popular. Yes, it sucks from one point of view. This is what we have to deal with, but it's also quite good, because we get to deal with more of the middle of the bell curve of the populace (and therefore the general level of the consciousness-awareness of humanity) than most. Our domain has so many cross-cutting concerns that it has more chance than other fields of becoming a truly multidisciplinary science. More so, perhaps, than psychology. This forces us to be clear, even when we're not necessarily using the most precise words, and also at the same time to appreciate an ability to listen more attentively to others.

Hopefully ;-)

Yours sincerely,
Half Troll Julian.
david hussman
2011-06-06 05:02:05 UTC
Permalink
They have their payment story together. They will contact you with all the
arrangements. I think they cover flights and hotel and pay different fees
for different services (e.g. teaching a course versus giving a talk). I may
be off base if you are only giving a talk because I have always done the
combo deal.

I suggest you ping Lee directly. Unlike other conference organizers we know,
he will respond.

david hussman
2011-06-06 05:17:34 UTC
Permalink
Please forgive this email.

Benoît Fleury
2011-06-03 21:36:38 UTC
Permalink
Hi Scott,

I tend to agree with you. The uniform interface of the web (reduced
set of HTTP verbs, links...) is what makes all these applications
possible. We know what to do when we have the URL to the flickr image.
But we could do so much more.

A simple multi-media document definition language with a protocol to
manipulate these documents (similar to AtomPub but at a lower level,
maybe like MIME) would allow us to create much more powerful
applications. In particular, it would automatically give us addressing
of any part of a document. The different applications would not have
to define their own addressing scheme, as is the case today with
DOM, CSS, URL fragments...

Regarding the "graphics rendered in NativeClient", I don't think it
would necessarily require having a web service alongside the
application to make the data available to other services. Your
application (UI part) can be built on top of the structured document
model and protocol mentioned above. A set of "view rules" would render
the output of a process (structured data) to the user and transform
user input into a set of commands in the underlying uniform protocol.
Of course, these view rules would also be documents that can be
managed using this same protocol. I imagine this architecture as a
gigantic mesh of independent processes exchanging structured data
using a uniform protocol. As a user, I can change the state of the
mesh by observing and changing data at some points of the mesh.
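A toy rendering of the uniform addressing part (paths and verbs invented here; similar in spirit to JSON Pointer): one /-separated scheme reaching into any document, with generic get/put verbs instead of per-application APIs.

```python
# Uniform addressing into nested document structure.
def resolve(doc, path):
    """Walk a /-separated path into nested dicts/lists."""
    node = doc
    for part in filter(None, path.split("/")):
        node = node[int(part)] if isinstance(node, list) else node[part]
    return node

def put(doc, path, value):
    """Replace the fragment at `path` with `value`."""
    parent, leaf = path.rsplit("/", 1)
    container = resolve(doc, parent)
    container[int(leaf) if isinstance(container, list) else leaf] = value

document = {"title": "Photo page",
            "comments": [{"text": "Nice photo!"}]}

print(resolve(document, "/comments/0/text"))  # address a fragment directly
put(document, "/comments/0/text", "edited")   # same verb for any document
print(resolve(document, "/comments/0/text"))
```

Every fragment of every document gets an address for free, with no DOM-vs-CSS-vs-URL-fragment split.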

-- benoit
C. Scott Ananian
2011-06-03 23:48:37 UTC
Permalink
Post by Benoît Fleury
I tend to agree with you. The uniform interface of the web (reduced
set of HTTP verbs, links...) is what make all these applications
possible. We know what to do when we have the URL to the flickr image.
But we could do so much more.
I agree with you there. We can certainly continue to make things better!

But (as the HTML folk learned w/ the success of HTML5) the way to go
forward is to *build on* the past, not throw it out and start over.
If we can come up with new technologies that interoperate, that
provide backward-compatibility hooks, and that provide transition
paths, then they have a chance to improve the platform that everyone
is using.

The future has turned out to be much messier than past architects
imagined. We have neither solved "strong AI" nor have we convinced
humans to generate content which adheres to rigid semantics. Instead
we have a strange new world where "unstructured databases" can exist,
often can be searched, and "most likely" will generate useful results.
Probabilistic models have triumphed over formal correctness.

And if you think that's horrifying, you should see some of the work my
former advisor is/was doing on "failure-oblivious computing"
(http://people.csail.mit.edu/rinard/acceptability_oriented_computing/).
Rather shockingly, the best thing to do is often to "muddle
through"...

But now we've drifted far off topic; all of this is rather orthogonal
to the goal of making systems which people can more broadly
understand, use, and modify. Having a beautiful small kernel is a
good way to make an understandable system; throwing in everything but
the kitchen sink is a good way to make an unobjectionable system which
gets adopted easily by lots of different people with strong
preexisting ideas about the way things ought to work. I would imagine
the fonc project would want to steer a middle course. (Alan has
previously spoken of the need for extensibility, which is one way to
build a popular featureful system out of a small kernel.)
--scott
--
      ( http://cscott.net )
Reuben Thomas
2011-06-03 18:28:16 UTC
Permalink
Post by Jonas Pfenniger (zimbatm)
Relatedly, what struck me while following the HTML5 spec development
is how they decided to specify existing browser behavior and add more
to the plate. Instead, they could have tried to decompose existing
elements to a smaller subset that would be more easily documented.
It's worth remembering that this was intentional with HTML5: it was a
rebellion against the changes of XHTML by those who wanted backwards
compatibility which, in the context of the web, is hard to deny.
--
http://rrt.sc3d.org
Josh Gargus
2011-06-09 07:56:29 UTC
Permalink
Post by Alan Kay
Hi Cornelius
There are lots of egregiously wrong things in the web design. Perhaps one of the simplest is that the browser folks have lacked the perspective to see that the browser is not like an application, but like an OS. i.e. what it really needs to do is to take in and run foreign code (including low level code) safely and coordinate outputs to the screen (Google is just starting to realize this with NaCl after much prodding and beating.)
I think everyone can see the implications of these two perspectives and what they enable or block
Some of the implications, anyway. The benefits of the OS-perspective are clear. Once it hits its stride, there will be no (technical) barriers to deploying the sorts of systems that we talk about here (Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool things, and there will be much creativity and innovation.

However, elsewhere in this thread it is noted that the HTML-web is structured-enough to be indexable, mashupable, and so forth. It makes me wonder: is there a risk that the searchability, etc. of the web will be degraded by the appearance of a number of mutually-incompatible better-than-HTML web technologies? Probably not... in the worst case, someone who wants to be searchable can also publish in the "legacy" format.

However, can we do better than that? I guess the answer depends on which aspect of the status quo we're trying to improve on (searchability, mashups, etc). For search, there must be plenty of technologies that can improve on HTML by decoupling search-metadata from presentation/interaction (such as OpenSearch, mentioned elsewhere in this thread). Mashups seem harder... maybe it needs to happen organically as some of the newly-possible systems find themselves converging in some areas.
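One way to read "decoupling search-metadata from presentation": a rich, even wholly opaque, app publishes a plain machine-readable sidecar next to each resource. The schema below is invented for illustration, loosely in the spirit of JSON-LD.

```python
# Emit an indexable metadata sidecar for a resource in a rich web app.
import json

def sidecar(resource_url, title, kind, tags):
    return json.dumps({
        "@id": resource_url,  # where the human-facing page lives
        "title": title,       # what a crawler should index
        "kind": kind,
        "tags": tags,
    }, indent=2)

meta = sidecar("http://example.org/worlds/42", "Croquet demo world",
               "3d-world", ["croquet", "demo"])
print(meta)
```

A search engine never needs to understand the app's rendering; it only reads sidecars.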

But I'm not writing because I know the answers, but rather the opposite. What do you think?

Cheers,
Josh
Post by Alan Kay
Cheers,
Alan
Sent: Tue, May 31, 2011 7:16:20 AM
Subject: Re: [fonc] Alternative Web programming models?
Thanks Merik,
I've read/watch the OOPSLA'97 keynote before, but hadn't seen the first video.
I'm having problems with the first one(the talk at UIUC). Has anyone been able to watch past the first hour. I get up to the point where Alex speaks and it freezes.
I've just recently read Roy Fielding's dissertation on the architecture of the Web. Two prominent features of web architecture are the (1) client-server hierarchical style and (2) the layering abstraction style. My take away from that is how all of abstraction layers of the web software stack get in the way of the applications that want to use the machine. Style 1 is counter to the notion of the 'no centers' principle and is very limiting when you consider different classes of applications that might involve many entities with ill-defined relationships. Style 2, provides for separation of concerns and supports integration with legacy systems, but incurs so much overhead in terms of structural complexity and performance. I think the stuff about web sockets and what was discussed in the Erlang interview that Micheal linked to in the 1st reply is relevant here. The web was designed for large grain interaction between entities, but many application domain problems don't map to that. Some people just want pipes or channels to exchange messages for fine-grained interactions, but the layer cake doesn't allow it. This is where you get the feeling that the architecture for rich web apps is no-architecture, just piling big stones atop one another.
I think it would be very interesting for someone to take the same approach to networked-based application as Gezira did with graphics (or the STEP project in general) as far assessing what's needed in a modern Internet-scale hypermedia architecture.
[1] Alan Kay, How Complex is "Personal Computing"?". Normal" Considered Harmful. October 22, 2009, Computer Science department at UIUC.
http://media.cs.uiuc.edu/seminars/StateFarm-Kay-2009-10-22b.asx
(also see http://www.smalltalk.org.br/movies/ )
[2] Alan Kay, "The Computer Revolution Hasn't Happened Yet", October 7, 1997, OOPSLA'97 Keynote.
Transcript http://blog.moryton.net/2007/12/computer-revolution-hasnt-happened-yet.html
Video http://ftp.squeak.org/Media/AlanKay/Alan%20Kay%20at%20OOPSLA%201997%20-%20The%20computer%20revolution%20hasnt%20happened%20yet.avi
(also see http://www.smalltalk.org.br/movies/ )
Merik
All,
A criticism by Dr. Kay, has really stuck with me. I can't remember the specific criticism and where it's from, but I recall it being about the how wrong the web programming model is. I imagine he was referring to how disjointed, resource inefficient it is and how it only exposes a fraction of the power and capability inherent in the average personal computer.
So Alan, anyone else,
what's wrong with the web programming mode and application architecture? What programming model would work for a global-scale hypermedia system? What prior research or commercial systems have any of these properties?
The web is about the closest we've seen to a ubiquitous deployment platform for software, but the confluence of market forces and technical realities endanger that ubiquity because users want full power of their devices plus the availability of Internet connectivity.
-Cornelius
--
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
BGB
2011-06-09 09:04:42 UTC
Permalink
Post by Josh Gargus
Post by Alan Kay
Hi Cornelius
There are lots of egregiously wrong things in the web design. Perhaps
one of the simplest is that the browser folks have lacked the
perspective to see that the browser is not like an application, but
like an OS. i.e. what it really needs to do is to take in and run
foreign code (including low level code) safely and coordinate outputs
to the screen (Google is just starting to realize this with NaCl
after much prodding and beating.)
I think everyone can see the implications of these two perspectives
and what they enable or block
Some of the implications, anyway. The benefits of the OS-perspective
are clear. Once it hits its stride, there will be no (technical)
barriers to deploying the sorts of systems that we talk about here
(Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own
cool things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is
structured-enough to be indexable, mashupable, and so forth. It makes
me wonder: is there a risk that the searchability, etc. of the web
will be degraded by the appearance of a number of
mutually-incompatible better-than-HTML web technologies? Probably
not... in the worst case, someone who wants to be searchable can also
publish in the "legacy" format.
However, can we do better than that? I guess the answer depends on
which aspect of the status quo we're trying to improve on
(searchability, mashups, etc). For search, there must be plenty of
technologies that can improve on HTML by decoupling search-metadata
from presentation/interaction (such as OpenSearch, mentioned elsewhere
in this thread). Mashups seem harder... maybe it needs to happen
organically as some of the newly-possible systems find themselves
converging in some areas.
But I'm not writing because I know the answers, but rather the
opposite. What do you think?
hmm... it is a mystery....

actually, possibly a relevant question here, would be why Java applets
largely fell on their face, but Flash largely took off (in all its uses
from YouTube to "Punch The Monkey"...).

but, yeah, there is another downside to deploying one's technology in a
browser:
writing browser plug-ins...


and, for browser-as-OS, what exactly will this mean, technically?...
network-based filesystem and client-side binaries and virtual processes?...
like, say, if one runs a tiny sand-boxed Unix-like system inside the
browser, then push or pull binary files, which are executed, and may
perform tasks?...

could be interesting though, as then a "tab" can be either an open page
or document, or a running application. hopefully, these could be nicer
to target, and more capable, than either Flash or Java Applets, although
probably would require some sort of VM.

"NaCl" is not a perfect solution, if anything, because, say, x86 NaCl
apps don't work on x86-64 or ARM. nicer would be able to be able to run
it natively, if possible, or JIT it to the native ISA if not.

I did my own x86-based VM before, which basically just ran x86 in an
interpreted (via translation to threaded code) environment. technically,
I just sort of did a basic POSIX-like architecture, albeit I used
PE/COFF for binaries and libraries (compiled via MinGW...).
it was written in such a way that likely it won't care what the host
architecture is (it was all plain C, with no real ASM or
architecture-specific hacks...).

so, I guess, if something like this existed inside a browser, and was
isolated from the host OS?...

in my case, I wrote the VM and soon realized I personally had no
particular use for it...


and, meanwhile, for my own site, it is generally plain HTML...
I did basic CGI-scripts before as a test, but couldn't think up much to
use them for (I don't personally really do much of anything that really
needs CGI-scripts).

about the most I would likely do with it would be to perform simple
surveys, say, a form that is like:
"favorite programming language?", "MBTI type?", ...
then I could analyze the results to conclude which types of personality
are more associated with being a programmer, and which prefer which
sorts of programming languages, ...

for example... how common are other xSTP (ISTP or ESTP) programmers, and
how many like C?...


in general though, I use HTML for much of my documentation, but
generally because it is currently one of the least-effort ways to
provide structured and formatted documentation and have it be readily
accessible (online or offline).

at least, currently I use SeaMonkey Composer, which is not that much
more effort than using a word-processor, and IMO a little less silly in
terms of how it behaves (vs Word or OpenOffice Writer which seem at
times brain-damaged...). not that Composer is perfect either though.

for editing documentation, a WYSIWYG editor works fine, since one's goal
is more to just produce generally formatted and structured text, rather
than needing much fine-grained control.

although, if possible, something more Wiki-like could be nicer still,
but WikiML lacks any real WYSIWYG editors AFAICT, and would need to be
converted to HTML prior to passing off to a client or web-browser,
creating a problem for local offline display (one either needing a
specialized viewer, or a local webserver daemon, with the browser as the
UI).

so, flat HTML would seem to be the least effort strategy.


I have before considered the possibility of using a WikiML variant in
documentation comments, as an alternative to Javadoc/Doxygen style
documentation comments (or XML documentation comments...). sadly, I
never got around to this...

I have an older documentation system, but it sucked and has long since
fallen into disuse (too much information needed to be present in the
comments, ...).


or such...
Julian Leviston
2011-06-09 09:58:12 UTC
Permalink
actually, possibly a relevant question here, would be why Java applets largely fell on their face, but Flash largely took off (in all its uses from YouTube to "Punch The Monkey"...).
My own opinion of this is the same reason that the iPad feels faster than a modern desktop operating system: it was quicker to get to the user interaction point.

Julian.
C. Scott Ananian
2011-06-10 17:24:51 UTC
Permalink
Post by Julian Leviston
actually, possibly a relevant question here, would be why Java applets largely fell on their face, but Flash largely took off (in all its uses from YouTube to "Punch The Monkey"...).
My own opinion of this is the same reason that the iPad feels faster than a modern desktop operating system: it was quicker to get to the user interaction point.
Julian.
The slow startup time is indeed why Sun's own team felt that Java
failed to get traction.  Unfortunately some basic architecture choices
in the standard library made it extremely difficult to fix the
problem.  (Broadly, how class initialization is handled and used.) The
latest Java plugins contain some impressive engineering to mitigate
the problem, but Java is still sluggish to start in my experience.

Library design matters!
 --scott
--
      ( http://cscott.net )
BGB
2011-06-10 18:45:30 UTC
Permalink
Post by C. Scott Ananian
Post by Julian Leviston
actually, possibly a relevant question here, would be why Java applets largely fell on their face, but Flash largely took off (in all its uses from YouTube to "Punch The Monkey"...).
My own opinion of this is the same reason that the iPad feels faster than a modern desktop operating system: it was quicker to get to the user interaction point.
Julian.
The slow startup time is indeed why Sun's own team felt that Java
failed to get traction. Unfortunately some basic architecture choices
in the standard library made it extremely difficult to fix the
problem. (Broadly, how class initialization is handled and used.) The
latest Java plugins contain some impressive engineering to mitigate
the problem, but Java is still sluggish to start in my experience.
Library design matters!
--scott
yep... Java does a lot of things in the library which could generally be
considered bad for performance.

one thing that took my attention before was noting just how many cases
there are of operations taking the form:
"new Something(...).someMethod(...);".

in places which could potentially impact performance if one is not
careful (at least presuming the VM doesn't infer that the object is
one-off and micro-optimize it some).


just had an idle thought of a JVM starting up as a prebuilt "image"
(say, methods are pre-JIT'ed and pre-linked, static fields are
pre-initialized, ...).

unless of course, they already started doing this (sadly, I am not much
an expert on the JVM).

AFAIK, the main strategy generally used is to demand-load classes
one-at-a-time from the JAR files, potentially validate them, JIT their
methods, and call the "<clinit>" static initializer method. this process
then may partly play out recursively, each class causing demand-loading
of more classes, ... until all relevant classes are loaded.

or, maybe it is that, within the class library, most classes are tangled
up with many other classes, essentially turning the class library into
some large bulk-loaded glob?...


well, not that my VM can say that much... sadly, in the present
architecture, the toplevel is executed, which may in-turn load more code
(the toplevel is itself a giant static initializer, and any
functions/classes/... are built when their code is executed).

a sad result of this is that, using executable statements/expressions at
the toplevel, it is possible to fetch 'null' from slots before they are
initialized, creating ordering issues in some cases.

say:
var foo=[bar, "bar"];
function bar() { ... }
foo[0] will currently hold null...

this is not an issue for normal calls though, since the call will
generally happen following any needed functions having been assigned to
their slots.

addressing the issue at present would likely require ugly hacks.


or such...
C. Scott Ananian
2011-06-10 19:17:27 UTC
Permalink
just had an idle thought of a JVM starting up as a prebuilt "image" (say,
methods are pre-JIT'ed and pre-linked, static fields are pre-initialized,
...).
unless of course, they already started doing this (sadly, I am not much an
expert on the JVM).
Yes, they do this already; but this was part of the startup-time
optimizations which weren't initially present in the JVM.
or, maybe it is that, within the class library, most classes are tangled up
with many other classes, essentially turning the class library into some
large bulk-loaded glob?...
Yes, that was a large part of the initial problem. A particular
culprit was the String class, which wanted to have lots of convenient
hooks for different features (including locale-specific stuff like
uppercase/lowercase & sorting, regexs, etc) but that tangled it up
with everything else. The basic JVM bytecode demanded intern'ed
strings, which required String class initialization, which then
required on-the-fly locale loading based on environment variables
(removing the ability to precompile an image), and things went
rapidly downhill from there.
a sad result of this is that using executable statements/expressions in the
toplevel, it is possible to fetch 'null' from slots prior to them being
initialized, creating ordering issues in some cases.
Yes, this was another fundamental problem with the JVM. You needed
intimate knowledge of the library implementation in order to do
initial class initialization in the proper order to avoid crashes.
--scott
--
      ( http://cscott.net )
BGB
2011-06-10 20:00:48 UTC
Permalink
Post by C. Scott Ananian
just had an idle thought of a JVM starting up as a prebuilt "image" (say,
methods are pre-JIT'ed and pre-linked, static fields are pre-initialized,
...).
unless of course, they already started doing this (sadly, I am not much an
expert on the JVM).
Yes, they do this already; but this was part of the startup-time
optimizations which weren't initially present in the JVM.
yes, ok.
Post by C. Scott Ananian
or, maybe it is that, within the class library, most classes are tangled up
with many other classes, essentially turning the class library into some
large bulk-loaded glob?...
Yes, that was a large part of the initial problem. A particular
culprit was the String class, which wanted to have lots of convenient
hooks for different features (including locale-specific stuff like
uppercase/lowercase & sorting, regexs, etc) but that tangled it up
with everything else. The basic JVM bytecode demanded intern'ed
strings, which required String class initialization, which then
required on-the-fly locale loading based on environment variables
(removing the ability to precompile an image), and things went
rapidly downhill from there.
I noted this before...

in contrast, my VM has strings as a built-in type, partly due to some
of the nastiness I had observed in the JVM, and partly because having
them as a built-in type allowed a more efficient implementation (I
store strings directly into in-memory globs of characters, rather than
needing instances and arrays, which would eat a fair number of
additional bytes per string...).
Post by C. Scott Ananian
a sad result of this is that using executable statements/expressions in the
toplevel, it is possible to fetch 'null' from slots prior to them being
initialized, creating ordering issues in some cases.
Yes, this was another fundamental problem with the JVM. You needed
intimate knowledge of the library implementation in order to do
initial class initialization in the proper order to avoid crashes.
yeah, sadly, it is also an issue in my VM, with no real good/obvious fix.


granted... one could just be like "ok, don't use any executable
expressions outside function/method scope" (possibly making the compiler
complain, basically, like in C and C++) but this is overly limiting.

the least effort route has been to allow this, at the cost that code
will have to pay attention to any initialization-order dependencies.
Ian Piumarta
2011-06-11 00:21:11 UTC
Permalink
Post by Julian Leviston
reason that the iPad feels faster than a modern desktop operating system: it was quicker to get to the user interaction point.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.131.1805

"The responsiveness of exploratory programming environments (such as the Smalltalk programming environment) allows the programmer to concentrate on the task at hand rather than being distracted by long pauses caused by compilation or linking."
BGB
2011-06-11 05:40:12 UTC
Permalink
Post by Ian Piumarta
Post by Julian Leviston
reason that the iPad feels faster than a modern desktop operating system: it was quicker to get to the user interaction point.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.131.1805
"The responsiveness of exploratory programming environments (such as the Smalltalk programming environment) allows the programmer to concentrate on the task at hand rather than being distracted by long pauses caused by compilation or linking."
this is also partly where dynamic script loading and eval can be nifty...


say, one is using an app, and then in the console they type in a
command, say:
;load("scripts/myscript.bs");

and can quickly edit the file, hit the uparrow in the console to
re-enter the prior command, and observe the results.

or, the ability to directly type script commands into the console to
observe results, ...

for example, in the above, the ";load(...);" line actually is a string
passed to eval (the initial ';' basically being an "eval this" marker,
mostly as it was shorter to type than "eval", and also less awkward than
having to change console modes between "eval mode" and "shell mode").
the evaluated fragment then loads and compiles/executes the indicated
script file.

all of these things can be convenient.

although, sadly, the need to regularly rebuild one's codebase is not an
easy thing to escape (fragment evaluation, sadly, ends up mostly
just being used for testing and controlling things, rather than as a
programming strategy in its own right).


but, I was also left recently thinking some about the possible
"strangeness" of me basically creating a vaguely Lisp-like programming
environment within C.

sadly, one can forget how alien this may seem to some people, and wonder
why "a pointer which remembers its value's data type" is apparently such
a difficult concept to explain (nevermind all the various things one can
do with cons-cells and lists...).

after around a decade of programming this way, it starts to seem just as
natural as using static types, and the relative pain of "malloc()" and
"free()" gradually starts to become a distant memory...

or such...
C. Scott Ananian
2011-06-12 01:30:20 UTC
Permalink
Post by BGB
Post by Ian Piumarta
"The responsiveness of exploratory programming environments (such as the
Smalltalk programming environment) allows the programmer to concentrate on
the task at hand rather than being distracted by long pauses caused by
compilation or linking."
this is also partly where dynamic script loading and eval can be nifty...
say, one is using an app, and then in the console they type in a command,
;load("scripts/myscript.bs");
and can quickly edit the file, hit the uparrow in the console to re-enter
the prior command, and observe the results.
or, the ability to directly type script commands into the console to observe
results, ...
You should spend some time playing around with the Web Inspector in
Chrome or other Webkit browser. Console, live code editing, lots of
other good stuff. The only big drawback is the complexity of the
system: HTML+CSS+JS is quite a hairy beast.
Post by BGB
but, I was also left recently thinking some about the possible "strangeness"
of me basically creating a vaguely Lisp-like programming environment within
C.
http://en.wikipedia.org/wiki/Greenspun's_Tenth_Rule
--scott
--
      ( http://cscott.net )
BGB
2011-06-12 03:11:39 UTC
Permalink
Post by C. Scott Ananian
Post by BGB
Post by Ian Piumarta
"The responsiveness of exploratory programming environments (such as the
Smalltalk programming environment) allows the programmer to concentrate on
the task at hand rather than being distracted by long pauses caused by
compilation or linking."
this is also partly where dynamic script loading and eval can be nifty...
say, one is using an app, and then in the console they type in a command,
;load("scripts/myscript.bs");
and can quickly edit the file, hit the uparrow in the console to re-enter
the prior command, and observe the results.
or, the ability to directly type script commands into the console to observe
results, ...
You should spend some time playing around with the Web Inspector in
Chrome or other Webkit browser. Console, live code editing, lots of
other good stuff. The only big drawback is the complexity of the
system: HTML+CSS+JS is quite a hairy beast.
yeah...

my current strategy already involves some amount of typing commands into
the console...
as noted in the other post, the main role that "load(...)" serves is to
work around being limited to about 100 characters at a time when typing
into the console.

edit/reload/go is more of a compromise...
Post by C. Scott Ananian
Post by BGB
but, I was also left recently thinking some about the possible "strangeness"
of me basically creating a vaguely Lisp-like programming environment within
C.
http://en.wikipedia.org/wiki/Greenspun's_Tenth_Rule
--scott
except, in my case, there is a more direct reason:
long ago, I had messed around with Scheme, and implemented a Scheme VM;
many Scheme-like facilities and practices have managed to somewhat
outlive the original VM, but were just sort of kludged back onto C, and
became a part of the baseline coding practice.

or such...
Julian Leviston
2011-06-12 01:59:26 UTC
Permalink
Post by BGB
this is also partly where dynamic script loading and eval can be nifty...
;load("scripts/myscript.bs");
and can quickly edit the file, hit the uparrow in the console to re-enter the prior command, and observe the results.
Yes, but this method of programming sucks, severely. This is why I am wholeheartedly happy that FoNC are on the case. Unfortunately I don't know of any other group of people who have both the insight and capability to do this research.

Decades later and the Smalltalk and Self systems are STILL some of the easiest environments to discover the intention of programmers, and to create new models and expressions in code. Almost needless to say is that even though this is the case, these systems are severely lacking. Who am I to judge, that I have produced nothing that "doesn't suck"? I'm at present simply someone who is calling it how I see it. All development begins with the awareness of a "lack". ;-) A real need in the beholder.
BGB
2011-06-12 03:00:38 UTC
Permalink
Post by Julian Leviston
Post by BGB
this is also partly where dynamic script loading and eval can be nifty...
;load("scripts/myscript.bs");
and can quickly edit the file, hit the uparrow in the console to re-enter the prior command, and observe the results.
Yes, but this method of programming sucks, severely. This is why I am wholeheartedly happy that FoNC are on the case. Unfortunately I don't know of any other group of people who have both the insight and capability to do this research.
yes, but reloading as the app runs, is still better convenience-wise
than its main alternative:
go over to Bash or CMD;
hit up-arrow, which re-summons a command like, say, "make -f Makefile.msvc";
wait for several minutes (as the C or C++ compiler rebuilds everything,
doing a big recursive makefile walk), maybe go and use bathroom or get
coffee;
start up app, and test whatever changes were made;
edit source files some more;
repeat...

the main advantage is that, often, the in-program "load" command may
only take maybe a matter of milliseconds, or a few seconds for a large
mess of scripts, and so is a good deal faster, and doesn't require
exiting/restarting the app, but as a drawback, does not cover
statically-built parts of the app.

the analogy would be doing a page-load in Firefox, vs exiting and
rebuilding Firefox to update the page...


also, one can type commands interactively into the console, but the
downside of this is that it is not nearly as effective (the console
isn't very good for entering large/complex fragments), as well as
generally anything entered into the console is forgotten when the app
exits/reloads.


"auto-reload on save" could also make sense...

for example, in my 3D engine, it is possible to dynamically alter the
map geometry as the program is being run (this is how my mapper works),
however, the "trick" is that the world representation is not itself
nearly so dynamic, and in fact, most of the "dynamic" aspects can be
attributed to me using a very "quick and dirty" strategy to rebuild the
BSP tree (some years back, I had observed that QuickSort could be used
as the basis of an O(n log2 n) BSP-rebuild algorithm, vs a more
traditional O(n^3) or so BSP algorithm...).


a sufficiently fast dynamic compiler could recompile and relink the code
whenever the user makes changes, and then saves them...

I actually originally intended something like this with my "dynamic C
compiler" project, but discovered that my C compiler was far too slow
and buggy for this, and often attempting something like this was more
liable to blow up in one's face than work correctly (there are many ugly
issues with hot-patching running C code).

I later concluded that C was not really the ideal sort of language for
this sort of thing.

a partial compromise could be a "fast reload" key, where one hits a key
to cause the VM to automatically reload any loaded scripts (without
having to re-enter a console command). the downside is that the VM would
have to keep track of any loaded modules so as to force-reload them.

hmm...


either way, it is better than a full rebuild with "make", since this
generally requires fully exiting and restarting the program as well, in
addition to any delays related to rebuilding.
Post by Julian Leviston
Decades later and the Smalltalk and Self systems are STILL some of the easiest environments to discover the intention of programmers, and to create new models and expressions in code. Almost needless to say is that even though this is the case, these systems are severely lacking. Who am I to judge, that I have produced nothing that "doesn't suck"? I'm at present simply someone who is calling it how I see it. All development begins with the awareness of a "lack". ;-) A real need in the beholder.
image-based systems have their own sets of drawbacks though...

dynamic reload could be a "good enough" compromise IMO, if done well...


or such...
Julian Leviston
2011-06-13 08:33:29 UTC
Permalink
Post by BGB
image-based systems have their own sets of drawbacks though...
dynamic reload could be a "good enough" compromise IMO, if done well...
I don't follow this train of thought. Everything runs in "an image". That's to say, the source code directly relates to some piece of running code in the system at some point. Smalltalk, Self and the like simply let you interact with the running code in the same place as the artefacts that create the running code. It's akin to programming in a debugger that saves the contents of memory constantly as "the source".

As it's 2011, surely we can come to a point where we can synthesise these two "apparently" orthogonal concerns?

I think the main issue with smalltalk-like "image" systems is that the system doesn't as easily let you "start from blank" like text-file source-code style coding does... that's to say, yes, it's possible to start new worlds, but it's not very easy to reference "bits" of your worlds from each other...

and essentially, that's what text-file coding (ie editing "offline" code) does for us... because things are in files, it's easy to "include" a file as one packaged unit, or a group of file, or a "package"... and then that "package" can be referred to... separately, and even maintained by someone else, and it's not a COPY of the package, it's a reference to it... you know? This is incredibly powerful.

The equivalent in a smalltalk system would need to be some kind of amazing version control system that can version worlds at certain points, and package code in a recursive encapsulation process. Having a "global namespace" is kind of retarded... because context is everything...

... that's to say, when and as context yields meaning (as I believe it does from my fairly deep ponderings), no "token" that yields meaning in a given context holds its meaning when decontextualised in the same way, therefore names (as these "tokens") are deeply important IN CONTEXT. What kind of relevance, therefore, has a global namespace got?

Julian.
BGB
2011-06-13 09:50:29 UTC
Permalink
Post by Julian Leviston
Post by BGB
image-based systems have their own sets of drawbacks though...
dynamic reload could be a "good enough" compromise IMO, if done well...
I don't follow this train of thought. Everything runs in "an image". That's to say, the source code directly relates to some piece of running code in the system at some point. Smalltalk, Self and the like simply let you interact with the running code in the same place as the artefacts that create the running code. It's akin to programming in a debugger that saves the contents of memory constantly as "the source".
except that traditional source files have a "concrete" representation
as so many files, and, beyond these files, there is nothing really of
relevance (at least conceptually, a person could print a program to
paper, re-type it somewhere else, and expect the result to work).

does it rebuild from source? does the rebuilt program work on the target
systems of interest? if so, then everything is good.


an image based system, OTOH, often means having to drag around the image
instead, which may include a bunch of "other stuff" beyond just the raw
text of the program, and may couple the program and the particular
development environment used to create it.

this coupling may be somewhat undesirable, and it is preferable to have
the source as files, so then one can "rebuild from source" whenever this
is needed.

also, another risk is of the development image becoming "polluted" as a
result of prior actions or code, which may risk compromising a project.


granted, the one major drawback of traditional files-based development
is its traditional dependency on the "edit/compile/run" cycle. scripting
languages can often make this cycle a little faster, but don't
necessarily eliminate it.


however, "dynamic reload" can stay with using plain text files (thus
allowing restarting clean if/when needed), and preserving familiar
aspects of "the coding process" (namely, having ones' code organized
into a tree of source files), but allows some of the merits of
live-system coding, such as the ability to quickly load their changes
back into the app, without having to exit and restart the program...

in this case, the currently running program image is partially
malleable, and so is subject to hot-patching (within sane limits), so
changes to the source can be reflected more immediately (no app restart).

however, unlike full image-based development, the app will generally
"forget" everything that was going on once it is exited and restarted.


by analogy, it is like running programs in Windows:
one can open/close/run programs, edit things, ... in Windows, and so
long as it is running, it will remember all this;
but, if/when Windows is rebooted, it will forget, and one starts again
with a "clean slate" of sorts (an empty desktop with only their icons
and start-up programs to greet them...).

but, the merit of rebooting Windows is that it keeps the system "clean",
as running Windows continuously is prone to cause it to degrade over
time, and without an occasional reboot will begin to manifest lots of
buggy behavior and eventually crash (IME, much worse in Vista and Win7
than in XP...).
Post by Julian Leviston
As it's 2011, surely we can come to a point where we can synthesise these two "apparently" orthogonal concerns?
I think the main issue with smalltalk-like "image" systems is that the system doesn't as easily let you "start from blank" like text-file source-code style coding does... thats to say, yes, it's possible to start new worlds, but it's not very easy to reference "bits" of your worlds from each other...
yes. for many types of project though, this is a potential deal-breaker.


another part of the matter may be that of dealing with different
libraries being updated independently by different developers.

I am not certain how well image-based development would map to
traditional team-based development practices (say, where 5 or 10 people
are assigned to work on a particular component, and 5 or 10 others
working mostly independently are assigned to an adjacent component, ...).

granted, I may be wrong here, as I haven't done a whole lot of
development with image-based systems.
Post by Julian Leviston
and essentially, that's what text-file coding (ie editing "offline" code) does for us... because things are in files, it's easy to "include" a file as one packaged unit, or a group of file, or a "package"... and then that "package" can be referred to... separately, and even maintained by someone else, and it's not a COPY of the package, it's a reference to it... you know? This is incredibly powerful.
yep.

I am generally mostly in favor of using files.
Post by Julian Leviston
The equivalent in a smalltalk system would need to be some kind of amazing version control system that can version worlds at certain points, and package code in a recursive encapsulation process. Having a "global namespace" is kind of retarded... because context is everything...
... that's to say, when and as context yields meaning (as I believe it does from my fairly deep ponderings), no "token" that yields meaning in a given context holds its meaning when decontextualised in the same way, therefore names (as these "tokens") are deeply important IN CONTEXT. What kind of relevance, therefore, has a global namespace got?
well, it is a tradeoff...

global namespaces make a lot of sense for file-based development though,
as then everything can be identified as belonging in a certain place and
having a certain scope, so namespaces can serve as an organization system.


the downside though is that not always does one want all code to have
access to the same stuff.

for example, one may want to run code in a sandbox where only certain
namespaces are visible, but sadly there is no "ideal" way to do this
that I have found...

my VM does support multiple root/toplevel objects (all
packages/namespaces are relative to this root), where a given toplevel
may itself contain only a certain subset of the namespaces.

however, this is not generally used, and creates some awkward issues.

another partial compromise would be to support, essentially, symbolic
linking and "chroot" in one's namespace system (sandboxed code
essentially running with a "chroot" in effect, so it only sees packages
aliased into its local subset namespace).
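This "chroot"-style aliasing can be sketched roughly as follows; the `Namespace` class and the package names are hypothetical illustrations, not the actual VM described above:

```python
# Sketch of "chroot"-style namespace sandboxing: a sandbox root only
# sees packages explicitly aliased into it (shared, not copied).

class Namespace:
    def __init__(self):
        self.entries = {}  # name -> child Namespace or value

    def resolve(self, dotted):
        """Walk a dotted path; raises KeyError if any part is not visible."""
        node = self
        for part in dotted.split("."):
            node = node.entries[part]
        return node

# Full root namespace, as the host environment sees it.
root = Namespace()
math_ns = Namespace()
math_ns.entries["sqrt"] = lambda x: x ** 0.5
io_ns = Namespace()
io_ns.entries["delete_file"] = lambda path: None  # too dangerous for a sandbox
root.entries["math"] = math_ns
root.entries["io"] = io_ns

# Sandbox root: only "math" is aliased in; "io" simply does not exist here.
sandbox = Namespace()
sandbox.entries["math"] = root.entries["math"]

assert sandbox.resolve("math.sqrt")(16.0) == 4.0
try:
    sandbox.resolve("io.delete_file")
except KeyError:
    pass  # invisible from inside the sandbox
```

Because the alias shares the same `Namespace` object, updates made through the root are seen by the sandbox, which is the "reference, not copy" property the symbolic-link analogy suggests.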

the main alternative though would be introducing security-checking
and/or an ACL-like mechanism (scope-based ACL checks...).


or such...
Julian Leviston
2011-06-13 10:19:51 UTC
Permalink
Post by Julian Leviston
Post by BGB
image-based systems have their own sets of drawbacks though...
dynamic reload could be a "good enough" compromise IMO, if done well...
I don't follow this train of thought. Everything runs in "an image". That's to say, the source code directly relates to some piece of running code in the system at some point. Smalltalk, Self and the like simply let you interact with the running code in the same place as the artefacts that create the running code. It's akin to programming in a debugger that saves the contents of memory constantly as "the source".
except that traditional source-files have a "concrete" representation as so many files, and, beyond these files, there is nothing really of relevance (at least, conceptually, a person could print a program to paper, re-type it somewhere else, and expect the result to work).
does it rebuild from source? does the rebuilt program work on the target systems of interest? if so, then everything is good.
an image based system, OTOH, often means having to drag around the image instead, which may include a bunch of "other stuff" beyond just the raw text of the program, and may couple the program and the particular development environment used to create it.
[SNIP]
or such...
This brings up an interesting point for me.

"Source" is an interesting word, isn't it? :) Source of what, exactly? Intention, right? The "real code" is surely the electricity inside the computer in its various configurations which represent numbers in binary. This is not textual streams, it's binary numbers. The representation is the interesting thing.... as are the abstractions that we derive from them.

I don't think computer programs being represented as text is very appropriate, useful or even interesting. in fact, suffice it to say that it's a definite hate/love relationship. I *love* typography, text and typing, but this has little or naught to do with programming. Programming is simply "done" in this way by me at the moment, begrudgingly because I have nothing better yet.

Consider what it'd be like if we didn't represent code as text... and represented it maybe as series of ideograms or icons (TileScript nod). Syntax errors don't really crop up any more, do they? Given a slightly nicer User Interface than tilescript, you could still type your code, (ie use the keyboard to fast-select tokens), but the computer won't "validate" any input that isn't in its "dictionary" of known possible syntactically correct items given whatever context you're in.

By the way, SmallTalk and Self are perfectly representable in textual forms... ("file out" nod) just like JVM bytecode is perfectly representable in textual form, or assembler... but text probably isn't the most useful way to interact with these things... just as to "edit your text" you most likely use some form of IDE (and yes, I'd class VIM or EMACS as an IDE).

Do I need to represent here just how idiotic I think compilation is as a process? It's a series of text stream processors that aim at building an artefact that has little or nothing to do with a world that exists entirely in text. TEXT!!! It's a bad way to represent the internal world of computers, in my opinion. It'd be nice to use a system which represents things a few layers closer to "what's actually going on", and surely the FoNC project is aimed at a pedagogical direction intending to strip away layers of cruft between the image inside the head of a "user" ( or programmer) that they have representing how it works, and how it actually works...

Mind you, I think human language is fairly silly, too... we communicate using mental "bubbles" of non-language based patterns, rendered into language, formed into text. It's well retarded... but this might be considered a little "out there", so I'll end here.

If I'm providing too much "noise" for the list, please anyone, let me know, and I'll be quiet.

Julian.
BGB
2011-06-13 20:02:01 UTC
Permalink
Post by Julian Leviston
Post by BGB
Post by Julian Leviston
Post by BGB
image-based systems have their own sets of drawbacks though...
dynamic reload could be a "good enough" compromise IMO, if done well...
I don't follow this train of thought. Everything runs in "an image".
That's to say, the source code directly relates to some piece of
running code in the system at some point. Smalltalk, Self and the
like simply let you interact with the running code in the same place
as the artefacts that create the running code. It's akin to
programming in a debugger that saves the contents of memory
constantly as "the source".
except, that traditional source-files have a "concrete"
representation as so many files, and, beyond these files, there is
nothing really of relevance (at least, conceptually, a person could
print a program to paper, re-type it somewhere else, and expect the
result to work).
does it rebuild from source? does the rebuilt program work on the
target systems of interest? if so, then everything is good.
an image based system, OTOH, often means having to drag around the
image instead, which may include a bunch of "other stuff" beyond just
the raw text of the program, and may couple the program and the
particular development environment used to create it.
[SNIP]
Post by BGB
or such...
This brings up an interesting point for me.
"Source" is an interesting word, isn't it? :) Source of what, exactly?
Intention, right? The "real code" is surely the electricity inside the
computer in its various configurations which represent numbers in
binary. This is not textual streams, it's binary numbers. The
representation is the interesting thing.... as are the abstractions
that we derive from them.
yes, but as a general rule, this is irrelevant...
the OS is responsible for keeping the filesystem intact, and generally
does a good enough job, and one can keep backups and hard-copies
in case things don't work out (say, a good hard crash, and the OS goes
and mince-meats the filesystem...).

as far as the user/developer can be concerned, it is all text.
more so, it is all ASCII text, given some of the inherent drawbacks of
using non-ASCII characters in one's code...
Post by Julian Leviston
I don't think computer programs being represented as text is very
appropriate, useful or even interesting. in fact, I'd suffice to say
that it's a definite hate/love relationship. I *love* typography, text
and typing, but this has little or naught to do with programming.
Programming is simply "done" in this way by me at the moment,
begrudgingly because I have nothing better yet.
well, the issue is of course, that there is nothing obviously better.
Post by Julian Leviston
Consider what it'd be like if we didn't represent code as text... and
represented it maybe as series of ideograms or icons (TileScript nod).
Syntax errors don't really crop up any more, do they? Given a slightly
nicer User Interface than tilescript, you could still type your code,
(ie use the keyboard to fast-select tokens), but the computer won't
"validate" any input that isn't in its "dictionary" of known possible
syntactically correct items given whatever context you're in.
but, what would be the gain?... the major issue with most possible
graphical representations, is that they are far less compact. hence, the
common use of graphical presentations to represent a small amount in
information in a "compelling" way (say, a bar-chart or line-graph which
represents only a small number of data-points).

apparently, even despite this, some people believe in things like UML
diagrams, but given the time and effort required to produce them,
combined with their exceedingly low informational density, I don't
really see the point.

also, for most programming tasks, graphical presentation would not offer
any real notable advantage over a textual representation.

at best, one has a pictographic system with a person new to the system
trying to figure out just what the hell all of these "intuitive" icons
mean and do. at that rate, one may almost as well just go make a
programming language based on the Chinese writing system.

given that most non-Chinese can't read Chinese writing, despite the
fact that many of these characters do actually resemble crude line-art
drawings of various things and ideas.

and meanwhile, many Asian countries either have shifted to, or are in
the process of shifting to, the use of phonetic writing systems (Koreans
created Hangul, Kanji gradually erodes in favor of Hiragana, ...). even
in some places in China (such as Canton) the traditional writing system
is degrading, with many elements of their spoken dialect being
incorporated into the written language.

this could be taken as an indication that there may be some fundamental
flaw with pictographic or ideographic systems.

or, more directly:
many people new to icons-only GUI designs spend some time making use of
"tool tips" to decipher the meaning of the icon...
Post by Julian Leviston
By the way, SmallTalk and Self are perfectly representable in textual
forms... ("file out" nod) just like JVM bytecode is perfectly
representable in textual form, or assembler... but text probably isn't
the most useful way to interact with these things... just as to "edit
your text" you most likely use some form of IDE (and yes, I'd class
VIM or EMACS as an IDE).
JBC is not usually manipulated as text, since it is generally compiler
output.
however, textual JBC becomes far more useful if one needs to work
directly with the JBC, which is why an ASM syntax for JBC was later
created (initially by 3rd parties), despite Sun's original intention for
none to exist.

things like compiler output, and the bulk of mechanically
generated/processed information, generally fall into a "don't know,
don't care" category for the most part, and hence textual
representations are generally a lower priority.
Post by Julian Leviston
Do I need to represent here just how idiotic I think compilation is as
a process? It's a series of text stream processors that aim at
building an artefact that has little or nothing to do with a world
that exists entirely in text. TEXT!!! It's a bad way to represent the
internal world of computers, in my opinion. It'd be nice to use a
system which represents things a few layers closer to "what's actually
going on", and surely the FoNC project is aimed at a pedagogical
direction intending to strip away layers of cruft between the image
inside the head of a "user" ( or programmer) that they have
representing how it works, and how it actually works...
I really don't see the merit though of opposing it...


so long as the process is sufficiently fast, it shouldn't matter that
things are represented one way "over here", and some way very
differently "over there".


actually, this is a common practice in the use of "black box"
development methodologies, where often ones' data or program state is
represented in a number of different ways within different components,
as each component provides both an internal representation for the data,
and a set of external interfaces.

an example would be an object or character in a 3D scene:
the user sees a unified entity, with all of its physics, its graphical
representation, its sound effects, its AI, ...

but, internally, there may be:
a split between the client and server, with only a small number of
data-points shared between them (say, where it is at, what model it is
using, which animation frame it is using, ...).

then, the server may delegate all the physics off to a physics engine,
which itself represents all of the physical properties of the object
(these are then shuffled back and forth over an API). so, the physics
engine has its own representation of an object, with things like an
inertia-tensor, lists of contact constraints, ...

the server doesn't know or care, it just gets a stream of data-points:
where the object is, how quickly it is moving, ...

meanwhile, the server doesn't know or care how it is handled on the
client (to the server, things like which 3D model it is using, ... are
represented as ASCII strings...)

off in the client end, the entity is again broken into multiple parts:
information about the model is passed to the renderer, and any
light-sources/... may themselves be separately split off and passed off
to the renderer;
also, any sound-effects are converted into commands, and passed off to
the sound mixer.

then the renderer sees its representation of the object:
as a 3D model sitting in its scene-graph in a specific pose at this
moment in time, which it may then perform a number of operations on
(figuring out which light sources are applicable, drawing lights and
shadows, ...).

and, in the sound mixer, there is just a moving point in space
working as a sound emitter, as it calculates (from where it is and how
quickly it is moving relative to the camera) its effective attenuation
(volume, pan, ...), Doppler-shifts, post-processing effects (such as
echos or dampening), ...

...

but, the point is that there is no single or unified view of the object,
but rather all knowledge of the object is broken down into a large
number of subsystems, each knowing small pieces of the problem, and
nearly everything else is shuffling information back and forth to keep
everything synchronized.
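The "no single unified view" idea above can be sketched in miniature; each subsystem below keeps its own representation of the same entity, and only a few data-points are shuffled between them (all class and field names are invented for illustration):

```python
# Each subsystem has its own private representation of the entity;
# synchronization is just copying a few shared data-points across.

class PhysicsBody:              # the physics engine's view
    def __init__(self, pos, vel):
        self.pos, self.vel = pos, vel

    def step(self, dt):
        # advance position by velocity; the engine alone owns this state
        self.pos = tuple(p + v * dt for p, v in zip(self.pos, self.vel))

class ServerEntity:             # the server's view: id, model name, position
    def __init__(self, eid, model):
        self.eid, self.model, self.pos = eid, model, (0.0, 0.0, 0.0)

class ClientSprite:             # the client/renderer's view
    def __init__(self, model):
        self.model, self.pos = model, (0.0, 0.0, 0.0)

body = PhysicsBody((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
server = ServerEntity(1, "crate.mdl")   # model name is just an ASCII string here
client = ClientSprite(server.model)

body.step(0.5)              # physics advances its own internal state
server.pos = body.pos       # server pulls back only the position data-point
client.pos = server.pos     # client receives the shared data-point

assert client.pos == (0.5, 0.0, 0.0)
```

The point carries over: no component holds the whole object, yet the user-visible result is a single coherent entity.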

nevermind all the stuff going on, independently, in the video-card,
monitor, mouse and keyboard, ...


generally, compiler and VM technology happens to work roughly the same
way...

but, why should the user need to know or care, as they work with the
"unified" perception: the box as it falls to the ground, and 3D NPCs
which walk around, do things, and say so many words of dialogue to the
player.


so, the main issue should be, IMO, not of eliminating text, rather,
maybe trying to reduce the inconveniences of rebuilding (ideally, so
that the "rebuild" is itself nearly invisible), and getting performance
fast enough that the programmer doesn't feel the need to wander off and
get coffee or similar every time they have to rebuild their program...

for example, if the environment could be like "well, only this file was
changed" and quickly recompile and hot-patch it into a live program,
who cares that a compiler and linker were involved, if they add at most
a few milliseconds?...
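The "only this file was changed" workflow can be sketched with Python's stock reload machinery; the module name and its contents are throwaway examples:

```python
# Sketch of hot-patching a changed file into a live program:
# rewrite the source on disk, then reload the module in place.

import importlib
import os
import sys
import tempfile
import time

# create a throwaway module on disk
d = tempfile.mkdtemp()
path = os.path.join(d, "live_mod.py")
with open(path, "w") as f:
    f.write("def answer():\n    return 1\n")
sys.path.insert(0, d)

import live_mod
assert live_mod.answer() == 1

# "only this file was changed": edit the source...
with open(path, "w") as f:
    f.write("def answer():\n    return 2\n")
# ...bump the mtime so any cached bytecode is seen as stale...
os.utime(path, (time.time() + 5, time.time() + 5))
importlib.invalidate_caches()
# ...and hot-patch it into the running program (milliseconds)
importlib.reload(live_mod)
assert live_mod.answer() == 2
```

A real system would watch file timestamps and patch references held by live objects as well, but the recompile-and-relink step itself is indeed cheap enough to be nearly invisible.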
Post by Julian Leviston
Mind you, I think human language is fairly silly, too... we
communicate using mental "bubbles" of non-language based patterns,
rendered into language, formed into text. It's well retarded... but
this might be considered a little "out there", so I'll end here.
If I'm providing too much "noise" for the list, please anyone, let me
know, and I'll be quiet.
well, I somewhat disagree here...
C. Scott Ananian
2011-06-13 21:16:10 UTC
Permalink
Post by Julian Leviston
Consider what it'd be like if we didn't represent code as text... and
represented it maybe as series of ideograms or icons (TileScript nod).
Syntax errors don't really crop up any more, do they? Given a slightly nicer
User Interface than tilescript, you could still type your code, (ie use the
keyboard to fast-select tokens), but the computer won't "validate" any input
that isn't in its "dictionary" of known possible syntactically correct items
given whatever context you're in.
I think "Tiles prevent syntax errors" is a red herring. Sure, you can
prevent stupid typos by offering only tiles with correctly spelled
keywords, but that's not really a major problem in ordinary
experience. The more pernicious errors aren't especially affected one
way or the other by tile-based systems. (You could just as accurately
say that strongly-typed systems prevent errors.)
Post by BGB
given that most non-Chinese can't read Chinese writing, despite that many of
these characters do actually resemble crude line-art drawings of various
things and ideas.
It is a common linguistic misperception that there is some one-to-one
correspondence between an ideogram and the idea it represents. The
English letter "A" was originally a drawing of an ox-head.
(http://en.wikipedia.org/wiki/A). It is as accurate to say that
English letters resemble "crude line-art drawings" as to say that
Chinese ideograms do.
Post by BGB
and meanwhile, many Asian countries either have shifted to, or are in the
process of shifting to, the use of phonetic writing systems (Koreans created
Hangul, Kanji gradually erodes in favor of Hiragana, ...). even in some
places in China (such as Canton) the traditional writing system is
degrading, with many elements of their spoken dialect being incorporated
into the written language.
This is also playing fast and loose with linguistics. Let's be wary
of drawing analogies to fields where we are not expert.
--scott
--
      ( http://cscott.net )
Casey Ransberger
2011-06-13 21:33:57 UTC
Permalink
Below.
Post by C. Scott Ananian
Post by Julian Leviston
Consider what it'd be like if we didn't represent code as text... and
represented it maybe as series of ideograms or icons (TileScript nod).
Syntax errors don't really crop up any more, do they? Given a slightly nicer
User Interface than tilescript, you could still type your code, (ie use the
keyboard to fast-select tokens), but the computer won't "validate" any input
that isn't in its "dictionary" of known possible syntactically correct items
given whatever context you're in.
I think "Tiles prevent syntax errors" is a red herring. Sure, you can
prevent stupid typos by offering only tiles with correctly spelled
keywords, but that's not really a major problem in ordinary
experience. The more pernicious errors aren't especially affected one
way or the other by tile-based systems. (You could just as accurately
say that strongly-typed systems prevent errors.)
Agreed, when we're talking about adults, and especially ones who've already learned to code. When it comes to kids and non-programming adults though, I do think that e.g. Scratch is really powerful.

I don't have the cognitive science to back up the statement that I'm about to make, so I'm hoping folks will try to shoot some holes in it.

Kids may not have the linguistic development out of the way that one needs to do "serious" programming. Adults who don't already code may find themselves short on some of the core concepts that conventional programming languages expect of the user. In both cases, I think visual systems can get useless syntactic hurdles out of the way, so that users can focus on developing a command of the core concepts at work.

Inviting criticism! Fire away, ladies and gentlemen.
Julian Leviston
2011-06-14 03:09:04 UTC
Permalink
Post by Casey Ransberger
Kids may not have the linguistic development out of the way that one needs to do "serious" programming. Adults who don't already code may find themselves short on some of the core concepts that conventional programming languages expect of the user. In both cases, I think visual systems can get useless syntactic hurdles out of the way, so that users can focus on developing a command of the core concepts at work.
In most parts of the world, Monks used to be the only people who could read and write, you know. ;-)

Julian.
BGB
2011-06-14 19:33:36 UTC
Permalink
Post by Julian Leviston
Post by Casey Ransberger
Kids may not have the linguistic development out of the way that one needs to do "serious" programming. Adults who don't already code may find themselves short on some of the core concepts that conventional programming languages expect of the user. In both cases, I think visual systems can get useless syntactic hurdles out of the way, so that users can focus on developing a command of the core concepts at work.
In most parts of the world, Monks used to be the only people who could read and write, you know. ;-)
I started out messing around with computers in maybe 3rd and 4th grade,
mostly as that was when I started having them around...

personally, the textual nature of code was not such an issue, but I do
remember at the time having a little confusion over the whole "order of
operations" thing (I think I was left to wonder some why some operations
would bind more tightly than others, they did not mention it in
classes). at the time, it was mostly QBasic and DOS...

much younger than that, and it is doubtful people can do much of anything useful.
can you teach programming to a kindergartner?...
maybe not such a good idea, so it is an open question what a good
lower age-limit is for where to try.

ultimately, maybe the whole topic is beyond the reach of many people
(like, maybe ability to program is more of an inherent ability, rather
than a learned skill?...). in this case, one can just make things
available, and see who catches on...


I don't necessarily think graphics are the answer though. people learn
to read and write either way, and using graphics seems a bit much like a
vain attempt at trying to water down the topic.

or such...
Dethe Elza
2011-06-14 20:09:34 UTC
Permalink
Post by BGB
much younger, and it is doubtful people can do much of anything useful.
can you teach programming to a kindergartner?...
maybe not such a good idea, so, it is an issue for what a good lower-limit is for where to try.
My kids learned to program around kindergarten/first grade. My son started with Scratch when he was six and is now teaching himself Javascript/Raphael and HTML/CSS at age 10.

One advantage graphic tools like Scratch have for younger learners is that they don't have to know how to type, just read and write. Having the syntax enforced by the structure of the blocks helps too (no typos, no syntax errors, in addition to the aforementioned enforced strong typing).

--Dethe
Kevin Driedger
2011-06-14 23:00:10 UTC
Permalink
I wonder if a thousand years ago the readers of the world thought that only
certain people had an aptitude for reading.
=====

As a professional coder and father of young children, I find that Dethe's
anecdote of teaching his children to code/program at an early age has me
thinking I need to take another stab at showing Scratch to my children.
Post by BGB
Post by BGB
much younger, and it is doubtful people can do much of anything useful.
can you teach programming to a kindergartner?...
maybe not such a good idea, so, it is an issue for what a good
lower-limit is for where to try.
My kids learned to program around kindergarten/first grade. My son started
with Scratch when he was six and is now teaching himself Javascript/Raphael
and HTML/CSS at age 10.
One advantage graphic tools like Scratch have for younger learners is that
they don't have to know how to type, just read and write. Having the syntax
enforced by the structure of the blocks helps too (no typos, no syntax
errors, in addition to the aforementioned enforced strong typing).
--Dethe
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Julian Leviston
2011-06-14 03:17:29 UTC
Permalink
Post by C. Scott Ananian
Post by Julian Leviston
Consider what it'd be like if we didn't represent code as text... and
represented it maybe as series of ideograms or icons (TileScript nod).
Syntax errors don't really crop up any more, do they? Given a slightly nicer
User Interface than tilescript, you could still type your code, (ie use the
keyboard to fast-select tokens), but the computer won't "validate" any input
that isn't in its "dictionary" of known possible syntactically correct items
given whatever context you're in.
I think "Tiles prevent syntax errors" is a red herring. Sure, you can
prevent stupid typos by offering only tiles with correctly spelled
keywords, but that's not really a major problem in ordinary
experience. The more pernicious errors aren't especially affected one
way or the other by tile-based systems. (You could just as accurately
say that strongly-typed systems prevent errors.)
When you're about to type the next "tile", you're given options... anything outside of those options is impossible, so the computer doesn't put it in, because syntactically it wouldn't make sense. Do you see the power of options, here? :) It's another level of introspection for the programmer on the system that is possible if they need or want it.

<shrug> some people like the computer to do things like highlight matching parenthesis, provide code syntax highlighting and colouring... others don't. (I'm not sure who doesn't).

But we're kind of digressing from the point about the kinds of visual systems that I was originally talking about when mentioning TileScript. This isn't necessarily at all TileScript I'm talking about... it's about visual patterning "languages" (i'm using the term languages very loosely here). TileScript was simply a nod... If you've used any kind of visual math formula builder like the one that used to be present in Microsoft Word I think (and probably still is, I don't know), then you know what I'm talking about.. the syntax is visually patterned in front of you as soon as it becomes apparent to the computer that you're writing a certain kind of math, so you can see what's going on... this stuff is very useful, I'm not sure why you can't see the benefit of it... perhaps you're just too attached to text? :)

As my memory recalls, Alan (and the entire VPRI crew I think) has said in the past, Math wins. Math is not written as a linear "text" per se, is it? Except, of course, where sequence is important ;-)

Julian.
C. Scott Ananian
2011-06-14 03:50:59 UTC
Permalink
Post by Julian Leviston
I think "Tiles prevent syntax errors" is a red herring.  Sure, you can
prevent stupid typos by offering only tiles with correctly spelled
keywords, but that's not really a major problem in ordinary
experience.  The more pernicious errors aren't especially affected one
way or the other by tile-based systems.  (You could just as accurately
say that strongly-typed systems prevent errors.)
When you're about to type the next "tile", you're given options... anything outside of those options is impossible, so the computer doesn't put it in, because syntactically it wouldn't make sense.
There's nothing specific to tiles in what you wrote. You could do the
same just as easily with a keyboard-based system.
This is what I mean when I say that "tiles prevent syntax errors" is
not accurate; it's confusing two separate things.
Again: more accurately you could say, "strong typing can prevent
syntax errors"... tiles have nothing to do with it, really.
--scott
--
      ( http://cscott.net )
Julian Leviston
2011-06-14 04:35:48 UTC
Permalink
Post by C. Scott Ananian
Post by Julian Leviston
When you're about to type the next "tile", you're given options... anything outside of those options is impossible, so the computer doesn't put it in, because syntactically it wouldn't make sense.
There's nothing specific to tiles in what you wrote. You could do the
same just as easily with a keyboard-based system.
This is what I mean when I say that "tiles prevent syntax errors" is
not accurate; it's confusing two separate things.
Again: more accurately you could say, "strong typing can prevent
syntax errors"... tiles have nothing to do with it, really.
Assuming a "compile after composing" type of system. If it's a running, live, system, then "type" is irrelevant because an "object" at the point of being "talked to" will provide its own semantic and therefore syntactic-appropriateness context (ie duck typing for want of a better term). Do you see why I think text-stream based systems are outmoded crappy systems yet?

They're not "real" in the sense of first-level representational. It's the equivalent of me sending you this email by fax, and you running an OCR program across it so it can get into your computer, though obviously less error-prone.

However... not to be rude, but you're potentially missing my larger point, which was underneath the two lines you quoted... and you're perhaps getting "caught" on my bad example of syntax in TileScript - I'm saying the possibilities and ramifications of programming a live system using non-text-stream representation are far greater than that of text-stream ones... either that, or we have to re-engineer the natural possibilities after the fact... (ie Eclipse Java IDE is an example of doing this... where the IDE knows a lot about the language, rather than asking real live objects about themselves). Instead of having actual one-level-linked instantiated objects AT THE POINT of programming, we use multi-layered deferred referencing (ie text-stream based "codes" which are later processed and further decoded into computer codes by another computer program many times).

One of the troubles with computing is that there are so many layers between what's real and the user that we've forgotten how to deal directly with the real. We've forgotten what is happening when we use computers, and this is sad and needs to be addressed. It's the real that's exciting, interesting and impassion-ating...

... granted there will always be those who don't want to see the real, and for those people, we build layers on top (ie Apple products), but still allow the guts to be got at by those who wish it. Presently it's SO DIFFICULT to get at the guts and not because it's hard to fire up GCC... mostly it's the learning that gets in the way... (at least, that's my experience). The sheer amount of education one needs to get through before one can get to the point where one is a true systems expert on our current top-level systems is colossal, and this is mostly due to cruft, IMHO.

Julian.
Dale Schumacher
2011-06-14 13:10:37 UTC
Permalink
Perhaps I can help you avoid talking past each other.
Post by Julian Leviston
Post by Julian Leviston
When you're about to type the next "tile", you're given options... anything outside of those options is impossible, so the computer doesn't put it in, because syntactically it wouldn't make sense.
There's nothing specific to tiles in what you wrote.  You could do the
same just as easily with a keyboard-based system.
This is what I mean when I say that "tiles prevent syntax errors" is
not accurate; it's confusing two separate things.
Again: more accurately you could say, "strong typing can prevent
syntax errors"...  tiles have nothing to do with it, really.
Assuming a "compile after composing" type of system. If it's a running, live, system, then "type" is irrelevant because an "object" at the point of being "talked to" will provide its own semantic and therefore syntactic-appropriateness context (ie duck typing for want of a better term). Do you see why I think text-stream based systems are outmoded crappy systems yet?
The "strong typing" Scott is talking about is the equivalent of the
snap-together shapes on the tile edges. Another equivalence is the
choice-list of options you may be presented in an IDE. Both constrain
(or guide, in the case of code-completion) your options based on the
"type" of the interaction. Without some kind of type information
there is no meaningful way to constrain the options, graphical or
otherwise.
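Dale's point can be made concrete: in a live system, the palette of "next tiles" can come from asking the running object itself what it understands. A minimal sketch in Python (the names `tile_options` and `Point` are hypothetical, not from any actual tile system):

```python
import inspect

def tile_options(obj):
    """Offer only the messages the live object actually understands."""
    return sorted(
        name for name, member in inspect.getmembers(obj)
        if callable(member) and not name.startswith("_")
    )

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def translate(self, dx, dy):
        return Point(self.x + dx, self.y + dy)
    def scale(self, k):
        return Point(self.x * k, self.y * k)

# The editor asks the instance, not a static grammar, what comes next.
print(tile_options(Point(1, 2)))  # → ['scale', 'translate']
```

The constraint here comes from the object's interface, not from a textual grammar, which is the distinction being drawn in the thread.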

Hopefully, I haven't misrepresented either of you, or added my own
confusion to the conversation.

[snip]
Post by Julian Leviston
Instead of having actual one-level-linked instantiated objects AT THE POINT of programming, we use multi-layered deferred referencing (ie text-stream based "codes" which are later processed and further decoded into computer codes by another computer program many times).
One perspective on designing in solution-space is to visualize the
object (or actor) instances and their interactions during the
evolution of a computation. For me, syntax follows structure. That
is, I first think about the objects and their connections, both static
and dynamic, in a sort of general nodes-and-edges graph structure.
Then I try to map the graph into a linear textual representation. The
result is often frustrating, since linear streams of characters make
some graph structures awkward to express. I expect that direct
visualization of the object graph would be helpful here. On the other
hand, I find that the behavior of the objects (actors) is easier to
grasp in a compact textual representation, even though the behavior
description has its own (fractal?) graph structure.

In any case, the graph structure carries no type information. It is
just a reference/reachability graph, constraining potential
communication (a reference is required for interaction). The content
(type) of the communication is not constrained.
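The reference/reachability constraint Dale describes is easy to sketch: represent the graph as adjacency sets and gate sends on edge membership (a toy illustration, not any particular actor library):

```python
# A reference/reachability graph: an edge is required for interaction,
# but nothing about the *content* (type) of a message is constrained.
refs = {"a": {"b"}, "b": {"c"}, "c": set()}

def can_send(sender, receiver):
    """Interaction is possible only along a held reference."""
    return receiver in refs[sender]

print(can_send("a", "b"))  # → True: a holds a reference to b
print(can_send("a", "c"))  # → False: no direct reference, so no interaction
```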
Yoshiki Ohshima
2011-06-14 03:39:30 UTC
Permalink
At Mon, 13 Jun 2011 17:16:10 -0400,
Post by C. Scott Ananian
Post by Julian Leviston
given that most non-Chinese can't read Chinese writing, despite that many of
these characters do actually resemble crude line-art drawings of various
things and ideas.
It is a common linguistic misperception that there is some one-to-one
correspondence between an ideogram and the idea it represents. The
english letter "A" was originally a drawing of an ox-head.
(http://en.wikipedia.org/wiki/A). It is as accurate to say that
English letters resemble "crude line-art drawings" as to say that
Chinese ideograms do.
Post by Julian Leviston
and meanwhile, many Asian countries either have shifted to, or are in the
process of shifting to, the use of phonetic writing systems (Koreans created
Hangul, Kanji gradually erodes in favor of Hiragana, ...). even in some
places in China (such as Canton) the traditional writing system is
degrading, with many elements of their spoken dialect being incorporated
into the written language.
This is also playing fast and loose with linguistics. Let's be wary
of drawing analogies to fields where we are not expert.
Yup. Such as this:

http://pugs.blogs.com/audrey/2009/10/our-paroqial-fermament-one-tide-on-another.html

is mainly about the number of characters, but it also illustrates how much area is required to convey the same amount of information.

(And yup, I can tell you that in Japanese, Kanji isn't eroding in favor of
Hiragana...)

-- Yoshiki
BGB
2011-06-14 05:02:46 UTC
Permalink
Post by Yoshiki Ohshima
At Mon, 13 Jun 2011 17:16:10 -0400,
Post by C. Scott Ananian
Post by Julian Leviston
given that most non-Chinese can't read Chinese writing, despite that many of
these characters do actually resemble crude line-art drawings of various
things and ideas.
It is a common linguistic misperception that there is some one-to-one
correspondence between an ideogram and the idea it represents. The
english letter "A" was originally a drawing of an ox-head.
(http://en.wikipedia.org/wiki/A). It is as accurate to say that
English letters resemble "crude line-art drawings" as to say that
Chinese ideograms do.
except that in the case of the Latin alphabet, all association with the
original idea has long since gone away, and alphabetic characters have
no real meaning in themselves, besides a loose association with a
particular sound.

the pictographs generally have meanings more closely associated with
particular things and ideas.

like, the character for a tree sort of resembles a tree, ...
Post by Yoshiki Ohshima
Post by C. Scott Ananian
Post by Julian Leviston
and meanwhile, many Asian countries either have shifted to, or are in the
process of shifting to, the use of phonetic writing systems (Koreans created
Hangul, Kanji gradually erodes in favor of Hiragana, ...). even in some
places in China (such as Canton) the traditional writing system is
degrading, with many elements of their spoken dialect being incorporated
into the written language.
This is also playing fast and loose with linguistics. Let's be wary
of drawing analogies to fields where we are not expert.
http://pugs.blogs.com/audrey/2009/10/our-paroqial-fermament-one-tide-on-another.html
is mainly about the number of characters, but it also illustrates how much
area is required to convey the same amount of information.
(And yup, I can tell you that in Japanese, Kanji isn't eroding in favor of
Hiragana...)
sorry...

I just thought that originally nearly all of the writing used Kanji, but
over a long time (many centuries) the use of Kanji has lessened somewhat,
and Hiragana has become a larger portion of the text.

admittedly, I don't really know Japanese either though... (besides what
I can gain from watching lots of anime...).

I have had some (limited) amount of personal experience interacting with
Chinese people, but don't know Chinese either (can't really
read/write/speak it, but can recognize a few of the basic characters...).


by complaining about "density" previously, I wasn't thinking of
traditional pictographs though, so much as of people doing something
similar with icons, say 64x64 pixels or so (more like Windows icons),
which would lead to a larger portion of the screen being needed than with
words, or with the tradition of assigning meaning to particular globs of
ASCII characters (say, "->", "<=", "<=>", "++", ...).

or, people using UML diagrams and flow charts, which use large amounts
of screen or paper space, and often express less than could be stated
with a few lines of text.

and also, that I don't personally believe a pictographic system to be
inherently more "intuitive" than an equivalent word-based system, and
maybe less so, given the general "tool tips" experience (like, hover
mouse to try to figure out just what a given icon on the toolbar is
supposed to do...).


or such...
Julian Leviston
2011-06-14 03:02:46 UTC
Permalink
but, what would be the gain?... the major issue with most possible graphical representations, is that they are far less compact. hence, the common use of graphical presentations to represent a small amount in information in a "compelling" way (say, a bar-chart or line-graph which represents only a small number of data-points).
If it gets longer than a page, something's gone wrong somewhere. ;-) Remember, most people on this list will think encapsulation and objects are good things ;-) (ie small bits of code). So you don't need the kind of compactness you're talking about.

The gain is that "a picture speaks a thousand words".
John Nilsson
2011-06-14 13:22:04 UTC
Permalink
but, what would be the gain?... the major issue with most possible graphical
representations, is that they are far less compact. hence, the common use of
graphical presentations to represent a small amount in information in a
"compelling" way (say, a bar-chart or line-graph which represents only a
small number of data-points).
I too have been thinking that the textual representation has to go.
Not to replace it with fancy icons but mainly to remove the limits
imposed by parsing, but also to make graphical representations
available; for example, tables are a language construct I usually miss,
and arrays of arrays of objects just don't cut it.

By parsing limits I mean the fact that the language grammar usually
has to be more verbose than is required by a human to resolve
ambiguity and other issues. This is mainly a problem if you start
thinking of how to mix languages: to integrate, say, Java, SQL and
regular expressions in one grammar. Sure, it can be done by careful
attention to the grammar, like PL/SQL for example, but how do you do it in a
generic way such that DSLs can be created as libraries by application
programmers?

BR,
John
Tristan Slominski
2011-06-14 13:27:19 UTC
Permalink
Post by John Nilsson
By parsing limits I mean the fact that the language grammar usually
has to be more verbose than is required by a human to resolve
ambiguity and other issues. This is mainly a problem if you start
thinking of how to mix languages: to integrate, say, Java, SQL and
regular expressions in one grammar. Sure, it can be done by careful
attention to the grammar, like PL/SQL for example, but how do you do it in a
generic way such that DSLs can be created as libraries by application
programmers?
BR,
John
This looks like a job for OMeta ( http://tinlizzie.org/ometa/ )
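OMeta's key move is treating grammars as ordinary composable objects rather than entries in one monolithic grammar. A rough parser-combinator analogue of that idea in Python (this is not OMeta's API, just a sketch of grammars-as-library-values that application programmers could combine):

```python
def lit(s):
    """Parser for a literal string; returns (new_index, value) or None."""
    def p(text, i):
        return (i + len(s), s) if text.startswith(s, i) else None
    return p

def seq(*parsers):
    """Match each parser in order, collecting the results."""
    def p(text, i):
        out = []
        for q in parsers:
            r = q(text, i)
            if r is None:
                return None
            i, v = r
            out.append(v)
        return i, out
    return p

def alt(*parsers):
    """Try each alternative in order; first success wins (PEG-style)."""
    def p(text, i):
        for q in parsers:
            r = q(text, i)
            if r is not None:
                return r
        return None
    return p

# Two "languages" composed as plain library values -- no global grammar:
sql_kw = alt(lit("SELECT"), lit("INSERT"))
embedded = seq(lit("query("), sql_kw, lit(")"))
print(embedded("query(SELECT)", 0))  # → (13, ['query(', 'SELECT', ')'])
```

Because `sql_kw` is just a value, a host-language library can export it and any application grammar can splice it in, which is the "DSLs as libraries" property John asks about.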
Hans-Martin Mosner
2011-06-13 10:35:05 UTC
Permalink
an image based system, OTOH, often means having to drag around the image instead, which may include a bunch of "other
stuff" beyond just the raw text of the program, and may couple the program and the particular development environment
used to create it.
this coupling may be somewhat undesirable, and it is preferable to have the source as files, so then one can "rebuild
from source" whenever this is needed.
also, another risk is of the development image becoming "polluted" as a result of prior actions or code, which may
risk compromising a project.
Just some quick comments from a long-time Smalltalker (25 years):
Of course rebuilding from source is desirable at certain points in time. For example, in the project I'm currently
working on, every production build is created from a clean image into which the application's source code is loaded.
This is a single-click process, just like building a traditional program from source files (except that we use VAST,
where the source code is held in an ENVY repository for version and configuration control instead of plain files).
However, in the development process we rarely rebuild images from scratch. It's not that we depend on image artefacts
created long ago, but we prefer to keep our personal selection of tools and debugging aids. Using ENVY, it's very simple
to sync our actual application source with the newest version while keeping all the tools intact. I normally rebuild my
development image only when we switch to a completely new base software version, or when I've accidentally damaged it to
a point where self-repair is not possible anymore (extremely rare, I've done that maybe 3-4 times during the project's
lifetime of about 12 years).
So, the risk associated with image-based development is mostly a theoretical one. In practice, it can be controlled just
as well as with file-based development, and you get all the advantages of the image on top :-)

Cheers,
Hans-Martin
Dale Schumacher
2011-06-13 13:00:34 UTC
Permalink
Post by BGB
however, unlike full image-based development, the app will generally
"forget" everything that was going on once it is exited and restarted.
I think this is one of the most annoying "features" of our current
computer systems. If I have a project (or 10 or 20 projects) spread
out on my workbench, and I leave to have something to eat, or go to
sleep, when I return everything is still (more or less) in the state I
left it.
Post by BGB
one can open/close/run programs, edit things, ... in Windows, and so long as
it is running, it will remember all this;
but, if/when Windows is rebooted, it will forget, and one starts again with
a "clean slate" of sorts (an empty desktop with only their icons and
start-up programs to greet them...).
but, the merit of rebooting Windows is that it keeps the system "clean", as
running Windows continuously is prone to cause it to degrade over time, and
without an occasional reboot will begin to manifest lots of buggy behavior
and eventually crash (IME, much worse in Vista and Win7 than in XP...).
Long-running stability and continuous upgrading (WITHOUT "rebooting")
should be the norm. There should be no such thing as a "boot"
process. A system should remain stable (and running) throughout a
lifetime of gradual evolution/mutation. Of course, we also need a way
to branch/fork/clone/version and even start-from-embryo, to build new
systems. The next step is to consider how the "system" (or parts of
it) can migrate, or become mobile, among hosts.
Post by BGB
Post by Julian Leviston
and essentially, that's what text-file coding (ie editing "offline" code)
does for us... because things are in files, it's easy to "include" a file as
one packaged unit, or a group of file, or a "package"... and then that
"package" can be referred to... separately, and even maintained by someone
else, and it's not a COPY of the package, it's a reference to it...  you
know? This is incredibly powerful.
yep.
I am generally mostly in favor of using files.
Naming is certainly important, as is contextual reference, but I'm not
convinced that "files" are a necessary part of providing that
mechanism. Consider, as a possible alternative, the idea of
parametrizing a module with its dependencies. This is just the
principle of applying abstractions to allow local naming (aliasing) of
"externally" provided objects.

my_module(something_i_need, ...) = ... module specification using
something_i_need ...

create my_module(provider_of_service, ...)
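Dale's pseudocode above can be written out in a mainstream language: a "module" becomes a function parametrized by its dependencies, and the caller supplies (and locally names) each provider. A sketch in Python, with hypothetical names (`make_address_book`, `ListStorage`):

```python
import time

# my_module(something_i_need, ...) = ... module specification ...
# Here `storage` and `clock` are local aliases for externally provided objects.
def make_address_book(storage, clock):
    class AddressBook:
        def add(self, name):
            storage.save(name, stamped=clock())
    return AddressBook()

# create my_module(provider_of_service, ...)
class ListStorage:
    def __init__(self):
        self.rows = []
    def save(self, name, stamped):
        self.rows.append((name, stamped))

storage = ListStorage()
book = make_address_book(storage, time.time)
book.add("Ada")
print(len(storage.rows))  # → 1
```

Nothing here requires a file: the binding between a module and its dependencies happens by application, not by a file system path.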
Casey Ransberger
2011-06-13 19:21:54 UTC
Permalink
Comments below.
Post by Dale Schumacher
Post by BGB
however, unlike full image-based development, the app will generally
"forget" everything that was going on once it is exited and restarted.
I think this is one of the most annoying "features" of our current
computer systems. If I have a project (or 10 or 20 projects) spread
out on my workbench, and I leave to have something to eat, or go to
sleep, when I return everything is still (more or less) in the state I
left it.
Dale, when I read this it wasn't clear to me what you meant to convey. Are you saying "it's annoying that when I come back to my bench, I have to swim all the way back to the context I was in before" or are you saying "when I return to my bench, it's annoying to have to close all of that stuff because what I usually want is a new context anyway"?

It seems that persistence of user experience is being naturally selected at the end of the day. Apple did it for iOS, and justified it by way of "users have to get the phone back into the pocket in a hurry sometimes." It seems to have worked well enough for them that they've added this to the Mac in Lion. Redmond has undoubtedly noticed.

I've seen both arguments. At times I've wondered if it wasn't a matter of "learning style." I know that I'm definitely in the former camp.
Post by Dale Schumacher
Long-running stability and continuous upgrading (WITHOUT "rebooting")
should be the norm. There should be no such thing as a "boot"
process. A system should remain stable (and running) throughout a
lifetime of gradual evolution/mutation. Of course, we also need a way
to branch/fork/clone/version and even start-from-embryo, to build new
systems. The next step is to consider how the "system" (or parts of
it) can migrate, or become mobile, among hosts.
+ 1

Another thing I'd like to see that I can do with images, but can't do with "dead code," is (easily) set up a server instance such that when it hits an unhandled exception, it saves off an image before it dies, with all of the state and context intact, and a source level debugger open. I want this because a great deal of my blackbox tester's time goes into identifying steps to reproduce.

With the image, one needs to worry about repro steps a lot less, which frees these people up to spend their time on things like exploring systems with deeper probes and crossing the bridge into whitebox land, which is where I want them all to end up: reading the code and automating painful tasks as they test.
Dale Schumacher
2011-06-13 19:59:18 UTC
Permalink
On Mon, Jun 13, 2011 at 2:21 PM, Casey Ransberger
Post by Casey Ransberger
Comments below.
however, unlike full image-based development, the app will generally
"forget" everything that was going on once it is exited and restarted.
I think this is one of the most annoying "features" of our current
computer systems.  If I have a project (or 10 or 20 projects) spread
out on my workbench, and I leave to have something to eat, or go to
sleep, when I return everything is still (more or less) in the state I
left it.
Dale, when I read this it wasn't clear to me what you meant to convey. Are you
saying "it's annoying that when I come back to my bench, I have to swim all
the way back to the context I was in before" or are you saying "when I
return to my bench, it's annoying to have to close all of that stuff because
what I usually want is a new context anyway"?
I'm most definitely saying that I prefer the "eternal" (as Alan said)
system, with persistent state.
K. K. Subramaniam
2011-06-13 15:10:53 UTC
Permalink
Post by Julian Leviston
I think the main issue with smalltalk-like "image" systems is that the
system doesn't as easily let you "start from blank" like text-file
source-code style coding does... that's to say, yes, it's possible to start
new worlds, but it's not very easy to reference "bits" of your worlds from
each other...
The pre-req for this is a tool to diff two images to generate a delta that can
be applied to one to produce another. This is easy to do with line-oriented
text files or non-linear xml files but difficult to do with blobs like images.
Tools like xdelta operate only at bit level and not object level.

Of course, if there was a (normalized) way to transcode .image into .xml and
vice versa then xmldiff can be used for that purpose.
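The transcode-then-diff idea might be sketched at the object level like this (Python, with hypothetical helper names; a real tool would also need cycle handling, object identity, and a normalized on-disk format):

```python
def snapshot(obj):
    """Canonical, order-stable view of an object's state -- a stand-in
    for transcoding an image into a normalized tree."""
    return {k: snapshot(v) if hasattr(v, "__dict__") else v
            for k, v in sorted(vars(obj).items())}

def delta(old, new):
    """Object-level diff: slot name -> (before, after) for changed slots."""
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k))
            for k in sorted(keys) if old.get(k) != new.get(k)}

class Person:
    def __init__(self, name, title=None):
        self.name = name
        if title is not None:
            self.title = title

before = snapshot(Person("Ada"))
after = snapshot(Person("Ada", title="Dr"))
print(delta(before, after))  # → {'title': (None, 'Dr')}
```

Unlike xdelta's bit-level output, the resulting delta is expressed in terms of object slots, so it could in principle be applied to another image.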

Subbu
Alan Kay
2011-06-13 15:17:49 UTC
Permalink
It would be great if everyone on this list would think deeply about how to have
an "eternal" system, and only be amplified by it.

For example, take a look at Alex Warth's "Worlds" work (and paper) and see how
that might be used to deal with larger problems of consistency and version
control in a live system.

I also believe that we should advance things so that there are no hidden
dependencies, and that dependencies are nailed semantically.

Time to move forward ...

Cheers,

Alan




________________________________
From: K. K. Subramaniam <***@gmail.com>
To: ***@vpri.org
Sent: Mon, June 13, 2011 8:10:53 AM
Subject: Re: [fonc] Alternative Web programming models?
Post by Julian Leviston
I think the main issue with smalltalk-like "image" systems is that the
system doesn't as easily let you "start from blank" like text-file
source-code style coding does... that's to say, yes, it's possible to start
new worlds, but it's not very easy to reference "bits" of your worlds from
each other...
The pre-req for this is a tool to diff two images to generate a delta that can
be applied to one to produce another. This is easy to do with line-oriented
text files or non-linear xml files but difficult to do with blobs like images.
Tools like xdelta operate only at bit level and not object level.

Of course, if there was a (normalized) way to transcode .image into .xml and
vice versa then xmldiff can be used for that purpose.

Subbu
Julian Leviston
2011-06-13 16:35:35 UTC
Permalink
It would be great if everyone on this list would think deeply about how to have an "eternal" system, and only be amplified by it.
Hi Alan,

You might need to elucidate a little more on this for me to personally understand you. Not sure how others feel, but the "Worlds" work seems to be just a description of a versioning pattern applied to running program state. Why is it especially interesting? In the Ruby community, we have "gem", which is a package manager, and also Bundler; the two of them handle dependency management and sets of bundles of dependencies, in context and in situ, elegantly and beautifully. Depending on your requirements when writing code, you can point to "a" version of a gem, the latest version, or say things like "versions greater than 2.3". It works really well. It also fits very neatly with your idea of (Alexander's? ;-)) the arch and biological cellular structure being a scalable system: this system is working in practice extremely well. (Mind you, there's a global namespace, so it will eventually get crowded, I'm sure ;-))

What do you mean by an eternal system? Do you mean a system which lasts forever? and what do you mean by amplified? Do you mean amplified as in our energy around this topic, or something else?

Sorry for not understanding you straight away,

Regards,
Julian.
Josh Gargus
2011-06-13 18:07:10 UTC
Permalink
Post by Julian Leviston
It would be great if everyone on this list would think deeply about how to have an "eternal" system, and only be amplified by it.
Hi Alan,
You might need to elucidate a little more on this for me to personally understand you. Not sure how others feel, but the "Worlds" work seems to be just a description of a versioning pattern applied to running program state.
It seems like much more than that to me.
Post by Julian Leviston
Why is it especially interesting? In the Ruby community, we have "gem" which is a package manager and also bundler, the two of which handle dependency management and sets of bundles of dependencies in context and situ elegantly and beautifully. Depending on your requirements when writing code, you can point to "a" version of a gem, the latest version, or say things like "versions greater than 2.3". It works really well. It also fits very neatly with your idea of (Alexander's? ;-)) the arch and biological cellular structure being a scalable system: this system is working in practice extremely well. (Mind you, there's a global namespace, so it will eventually get crowded I'm sure ;-))
Consider that in a Squeak image, the compiled methods are reified as objects. With Worlds, you can make exploratory changes to code in a *complex running system*, and then back out effortlessly if it doesn't work. You just throw away the World containing the modified code as well as the objects that were modified as a side-effect of running the modified code.
Post by Julian Leviston
What do you mean by an eternal system? Do you mean a system which lasts forever?
Yes, I believe that's what Alan means. One thing that Worlds does is fill in a gap that prevents Smalltalk-80 from being an eternal system. The problem with Smalltalk is that, although it is possible to make code changes in a running image, it is also possible to easily trash the image by making the wrong code changes. Furthermore, the more complicated the system that you're building, the easier it is to trash the image.

To successfully build complex systems in Smalltalk, the typical approach is to periodically bootstrap the system by loading code into a fresh image, and running initialization scripts to bring the image up to a start-state. We employed this approach at Qwaq/Teleplace.

Worlds provides a way (or at least points in a direction) to never need to shut down the running system. Any changes made to the system can safely and easily be reverted. I don't know how familiar you are with Croquet, but when I consider this capability in the context of replicated Islands of objects (including code), I find the potential to be breathtaking.
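The commit/discard behavior Josh describes can be caricatured in a few lines: reads fall through to the parent world, writes stay local until committed. A toy Python sketch (the real Worlds semantics, including lookup and consistency rules, are considerably richer; class and method names here are illustrative only):

```python
class World:
    """Toy world: reads fall through to the parent, writes stay local."""
    def __init__(self, parent=None):
        self.parent = parent
        self.slots = {}

    def sprout(self):
        """Start an experiment in a child world."""
        return World(parent=self)

    def __getitem__(self, key):
        if key in self.slots:
            return self.slots[key]
        if self.parent is not None:
            return self.parent[key]
        raise KeyError(key)

    def __setitem__(self, key, value):
        self.slots[key] = value

    def commit(self):
        """Propagate local changes into the parent world."""
        self.parent.slots.update(self.slots)

top = World()
top["balance"] = 100
trial = top.sprout()
trial["balance"] = -999      # experiment freely in the child world
print(top["balance"])        # → 100: the parent is untouched
# To discard the experiment, just drop `trial`; to keep it: trial.commit()
```

Throwing away `trial` is exactly the "back out effortlessly" step: both the modified code and the objects mutated as a side-effect of running it live only in the child.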
Post by Julian Leviston
and what do you mean by amplified? Do you mean amplified as in our energy around this topic, or something else?
I'm not sure that I understood this, either.

Cheers,
Josh
Post by Julian Leviston
Sorry for not understanding you straight away,
Regards,
Julian.
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Julian Leviston
2011-06-13 19:26:00 UTC
Permalink
Post by Josh Gargus
Post by Julian Leviston
It would be great if everyone on this list would think deeply about how to have an "eternal" system, and only be amplified by it.
Hi Alan,
You might need to elucidate a little more on this for me to personally understand you. Not sure how others feel, but the "Worlds" work seems to be just a description of a versioning pattern applied to running program state.
It seems like much more than that to me.
Cool :) I'm not saying it doesn't have interesting ramifications ;-)
Post by Josh Gargus
Post by Julian Leviston
Why is it especially interesting? In the Ruby community, we have "gem" which is a package manager and also bundler, the two of which handle dependency management and sets of bundles of dependencies in context and situ elegantly and beautifully. Depending on your requirements when writing code, you can point to "a" version of a gem, the latest version, or say things like "versions greater than 2.3". It works really well. It also fits very neatly with your idea of (Alexander's? ;-)) the arch and biological cellular structure being a scalable system: this system is working in practice extremely well. (Mind you, there's a global namespace, so it will eventually get crowded I'm sure ;-))
Consider that in a Squeak image, the compiled methods are reified as objects. With Worlds, you can make exploratory changes to code in a *complex running system*, and then back out effortlessly if it doesn't work. You just throw away the World containing the modified code as well as the objects that were modified as a side-effect of running the modified code.
Yes, I've considered this. One of the things that crops up, though, is what about "data"? That is to say, what if one of the "world" experimentations one ends up doing involves experimenting with modifying some of the model of a particular piece of code... and this necessarily involves mutating a data type (ie model parent object or "class" implementation)... how does the idea/system apply to this, especially when considering things from the point of view of concurrent simultaneous users... where one person may mutate the state in the discussed manner, but another person is using a different "world" (a previous one with a different data structure in place)... yet they both still need to use the same data (ie imagine an address book application where one person is mutating the model layer live, and both people are using it live and inputting new data)...

... the idea of published / unpublished (ie published being a particular "world" that is being pointed to as the current "live" one) seems to serve well here.

... also, the idea of modelling change ITSELF is an appealing one in this context, and all changes including data "entry" etc being simply represented as a log of mutations using the command pattern. Thus the data represented in the first "world" would be mutated and "propagated" to the new world (actually more like the "view" of it filtered or some-such) according to the new rules, and the inverse would apply as well...

... of course, the question of irreversible transactions (ie destructive or creationist "commands") arises... what to do about when adding or destroying structure inside a data structure when involving those worlds? (in other words, the second, experimental world perhaps has added a "title" field to a person, and then the second world user adds a new person, with the title field... what does the first world user see?, etc. This is a superficially simple illustration - add some code to the second world which would break if things aren't set up in a particular structure, such as a requirement on the "title" for a person, and then the first world entry not actually having a title, and we get a bit stickier - this particular example falls apart rather easily, but you get the gist, hopefully?)

The worlds idea seems to ignore the fact that the only way to really get the feel for something is to use it... so an experiment (ie a child world) would need to be using real, live data... so a "user" or "programmer" would end up with the painful situation of having to migrate their created application objects - the "data" - back into the parent world but not migrate the code back if the experiment failed... *or* they would have to treat the experimental world as experimental only and not "real", but doing this wouldn't actually allow one to know whether the experiment was working or not... *or* they'd have to use two worlds simultaneously - the first, un-experimental world to just use their application, and the second world to test things out until they were happy that it would work as they anticipated (ie the experiment worked).

Either way, it could bear more thinking...
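The "log of mutations" idea above can be sketched with the command pattern: each change records enough to apply, undo, or replay itself, so different worlds can replay filtered views of the same log. A toy Python sketch (names hypothetical):

```python
class SetField:
    """One reversible mutation in the log (command pattern)."""
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = None

    def apply(self, record):
        self.prev = record.get(self.key)
        record[self.key] = self.value

    def undo(self, record):
        if self.prev is None:
            record.pop(self.key, None)
        else:
            record[self.key] = self.prev

record, log = {}, []
for cmd in (SetField("name", "Ada"), SetField("title", "Dr")):
    cmd.apply(record)
    log.append(cmd)

# A sibling world that never adopted the "title" experiment replays a
# filtered view of the same log:
other = {}
for cmd in log:
    if cmd.key != "title":
        cmd.apply(other)
print(other)  # → {'name': 'Ada'}
```

This is exactly where the irreversible-command question bites: a filter works for additive fields like "title", but a command that destroys structure can't simply be dropped from one world's replay without consequences.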

Julian.
Julian Leviston
2011-06-13 19:31:37 UTC
Permalink
I wrote this without reading the very latest http://www.vpri.org/pdf/tr2011001_final_worlds.pdf so if I say anything that is obviously missing that understanding, please bear with me :) I'll read it shortly.

Julian.
Post by Julian Leviston
Post by Josh Gargus
Post by Julian Leviston
It would be great if everyone on this list would think deeply about how to have an "eternal" system, and only be amplified by it.
Hi Alan,
You might need to elucidate a little more on this for me to personally understand you. Not sure how others feel, but the "Worlds" work seems to be just a description of a versioning pattern applied to running program state.
It seems like much more than that to me.
Cool :) I'm not saying it doesn't have interesting ramifications ;-)
Post by Josh Gargus
Post by Julian Leviston
Why is it especially interesting? In the Ruby community, we have "gem" which is a package manager and also bundler, the two of which handle dependency management and sets of bundles of dependencies in context and situ elegantly and beautifully. Depending on your requirements when writing code, you can point to "a" version of a gem, the latest version, or say things like "versions greater than 2.3". It works really well. It also fits very neatly with your idea of (Alexander's? ;-)) the arch and biological cellular structure being a scalable system: this system is working in practice extremely well. (Mind you, there's a global namespace, so it will eventually get crowded I'm sure ;-))
Consider that in a Squeak image, the compiled methods are reified as objects. With Worlds, you can make exploratory changes to code in a *complex running system*, and then back out effortlessly if it doesn't work. You just throw away the World containing the modified code as well as the objects that were modified as a side-effect of running the modified code.
Yes, I've considered this. One of the things that crops up, though, is what about "data"? That is to say, what if one of the "world" experimentations one ends up doing involves experimenting with modifying some of the model of a particular piece of code... and this necessarily involve mutating a data type (ie model parent object or "class" implementation)... how does the idea/system apply to this, especially when considering things from the point of view of concurrent simultaneous users... where one person may mutate the state in the discussed manner, but another person is using a different "world" (a previous one with a different data structure in place)... yet they both still need to use the same data (ie imagine an address book application where one person is mutating the model layer live, and both people are using it live and inputting new data)...
... the idea of published / unpublished (ie published being a particular "world" that is being pointed to as the current "live" one) seems to serve well here.
... also, the idea of modelling change ITSELF is an appealing one in this context, and all changes including data "entry" etc being simply represented as a log of mutations using the command pattern. Thus the data represented in the first "world" would be mutated and "propagated" to the new world (actually more like the "view" of it filtered or some-such) according to the new rules, and the inverse would apply as well...
... of course, the question of irreversible transactions (ie destructive or creationist "commands") arises... what to do when adding or destroying structure inside a data structure across those worlds? (in other words, the second, experimental world perhaps has added a "title" field to a person, and then the second world user adds a new person, with the title field... what does the first world user see?, etc. This is a superficially simple illustration - add some code to the second world which would break if things aren't set up in a particular structure, such as a requirement on the "title" for a person, and then the first world entry not actually having a title, and we get a bit stickier - this particular example falls apart rather easily, but you get the gist, hopefully?)
The worlds idea seems to ignore the fact that the only way to really get the feel for something is to use it... so an experiment (ie a child world) would need to be using real, live data... so a "user" or "programmer" would end up with the painful situation of having to migrate their created application objects - the "data" - back into the parent world but not migrate the code back if the experiment failed... *or* they would have to treat the experimental world as experimental only and not "real", but doing this wouldn't actually allow one to know whether the experiment was working or not... *or* they'd have to use two worlds simultaneously - the first, un-experimental world to just use their application, and the second world to test things out until they were happy that it would work as they anticipated (ie the experiment worked).
Either way, it could bear more thinking...
Julian.
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
karl ramberg
2011-06-13 19:55:54 UTC
Permalink
Post by Julian Leviston
I wrote this without reading the very latest
http://www.vpri.org/pdf/tr2011001_final_worlds.pdf so if I say anything
that is obviously missing that understanding, please bear with me :) I'll
read it shortly.
I was wondering about commit failure and cases where you needed certain
objects in the child world anyway.
Or two different worlds merging. Will that be possible ?

NB! A link in the document
http://www.vpri.org/pdf/tr2011001_final_worlds.pdf didn't
work:
http://www.tinlizzie.org/%CB%9Cawarth/worlds
this works
http://www.tinlizzie.org/~awarth/worlds/

Karl
Post by Julian Leviston
Julian.
It would be great if everyone on this list would think deeply about how to
have an "eternal" system, and only be amplified by it.
Hi Alan,
You might need to elucidate a little more on this for me to personally
understand you. Not sure how others feel, but the "Worlds" work seems to be
just a description of a versioning pattern applied to running program state.
It seems like much more than that to me.
Cool :) I'm not saying it doesn't have interesting ramifications ;-)
Yoshiki Ohshima
2011-06-14 03:35:33 UTC
Permalink
At Mon, 13 Jun 2011 21:55:54 +0200,
I was wondering about commit failure and cases where you needed certain objects in the child world anyway.
Or two different worlds merging. Will that be possible ?
Yes. You catch an exception to keep the computation going:

a := WPoint2 new x: 1; y: 0.
w := WWorld2 thisWorld sprout.
w eval: [a y: a y + 1].
a y: 666.
[w commit] on: Error do: [:ex | ].

then you can say:

b := WPoint2 new.
b x: (w eval: [a x]).
b y: (w eval: [a y]).

to "salvage" the values of a in w into b in the top level world.

There should be more first class operations allowed, and perhaps the
serializability checks and commit logic should be customizable...

-- Yoshiki
John Nilsson
2011-06-14 13:06:31 UTC
Permalink
Post by Julian Leviston
... also, the idea of modelling change ITSELF is an appealing one in this
context, and all changes including data "entry" etc being simply represented
as a log of mutations using the command pattern. Thus the data represented
in the first "world" would be mutated and "propagated" to the new world
(actually more like the "view" of it filtered or some-such) according to the
new rules, and the inverse would apply as well...
There might be some experience on this in the CQRS-community. They
usually model systems with event sourcing as the primary
representation of state and have to deal with the versioning issues.

Then there's the experience with working with databases, both
relational and OO. The practice that seems to work is to model new
versions in a backwards-compatible way and then refactor once the old
versions have been completely shut down.
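In event-sourced systems that backwards-compatible practice often shows up as "upcasting": old events are rewritten to the newest schema before replay. A rough sketch (the event shapes and the name-splitting migration are made up for illustration):

```python
# Rough sketch of event-sourcing version handling ("upcasting"):
# historical events are lifted to the latest schema before replay,
# so new code never has to understand every old shape directly.

def upcast(event):
    """Lift an event to the latest schema version, one step at a time."""
    if event["version"] == 1:
        # v1 stored a single "name"; v2 splits it (a made-up migration)
        first, _, last = event["name"].partition(" ")
        event = {"version": 2, "type": event["type"],
                 "first": first, "last": last}
    return event

def replay(events):
    state = {}
    for e in map(upcast, events):
        if e["type"] == "person_added":
            state[e["first"] + " " + e["last"]] = e
    return state

log = [
    {"version": 1, "type": "person_added", "name": "Ada Lovelace"},
    {"version": 2, "type": "person_added", "first": "Joe", "last": "Armstrong"},
]
people = replay(log)
assert set(people) == {"Ada Lovelace", "Joe Armstrong"}
```

The nice property is that the log itself is never rewritten; only the interpretation of old versions changes, which is close in spirit to replaying one world's mutations under another world's rules.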

My own thinking in this area is that you handle merges automatically
if you can but fall back on manual intervention if not. Hopefully the
system has a user-base that knows what to do with inconsistent data.

In any case I guess the default behaviour when branching is to simply
diverge; merges, in any direction, should only happen if and when
asked to. The Git workflow seems to work very well. If there is
anything broken with it, it is that it tends to express dependencies
that aren't really there, but that isn't a fundamental property of the
DAG-model, just a consequence of how the tools steer you. The Darcs
approach, with its theory of patches, may be better in this regard,
though I have no experience working with it.

BR,
John
Casey Ransberger
2011-06-13 18:48:45 UTC
Permalink
Amplification: if I hazarded a guess, I'd go with "of human reach" or "of potential leverage."

I also have one amp that goes up to 11, which is really nice because sometimes I like a touch of extra kick for the solo.
Post by Julian Leviston
It would be great if everyone on this list would think deeply about how to have an "eternal" system, and only be amplified by it.
Hi Alan,
What do you mean by an eternal system? Do you mean a system which lasts forever? and what do you mean by amplified? Do you mean amplified as in our energy around this topic, or something else?
Sorry for not understanding you straight away,
Regards,
Julian.
Bert Freudenberg
2011-06-09 16:31:51 UTC
Permalink
"NaCl" is not a perfect solution, if anything, because, say, x86 NaCl apps don't work on x86-64 or ARM. nicer would be able to be able to run it natively, if possible, or JIT it to the native ISA if not.
Work is underway on "PNaCl" which uses LLVM bitcode files as platform-independent format for executables:

http://www.chromium.org/nativeclient/pnacl/

- Bert -
Josh Gargus
2011-06-09 17:25:11 UTC
Permalink
Post by BGB
Post by Josh Gargus
Post by Alan Kay
Hi Cornelius
There are lots of egregiously wrong things in the web design. Perhaps one of the simplest is that the browser folks have lacked the perspective to see that the browser is not like an application, but like an OS. i.e. what it really needs to do is to take in and run foreign code (including low level code) safely and coordinate outputs to the screen (Google is just starting to realize this with NaCl after much prodding and beating.)
I think everyone can see the implications of these two perspectives and what they enable or block
Some of the implications, anyway. The benefits of the OS-perspective are clear. Once it hits its stride, there will be no (technical) barriers to deploying the sorts of systems that we talk about here (Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is structured-enough to be indexable, mashupable, and so forth. It makes me wonder: is there a risk that the searchability, etc. of the web will be degraded by the appearance of a number of mutually-incompatible better-than-HTML web technologies? Probably not... in the worst case, someone who wants to be searchable can also publish in the "legacy" format.
However, can we do better than that? I guess the answer depends on which aspect of the status quo we're trying to improve on (searchability, mashups, etc). For search, there must be plenty of technologies that can improve on HTML by decoupling search-metadata from presentation/interaction (such as OpenSearch, mentioned elsewhere in this thread). Mashups seem harder... maybe it needs to happen organically as some of the newly-possible systems find themselves converging in some areas.
But I'm not writing because I know the answers, but rather the opposite. What do you think?
hmm... it is a mystery....
actually, possibly a relevant question here, would be why Java applets largely fell on their face, but Flash largely took off (in all its uses from YouTube to "Punch The Monkey"...).
writing browser plug-ins...
and, for browser-as-OS, what exactly will this mean, technically?...
network-based filesystem and client-side binaries and virtual processes?...
like, say, if one runs a tiny sand-boxed Unix-like system inside the browser, then push or pull binary files, which are executed, and may perform tasks?...
This isn't quite what I had in mind. Perhaps "hypervisor" is better than "OS" to describe what I'm talking about, and I believe Alan too: a thin-as-possible platform that provides access to computing resources such as end-user I/O (mouse, multitouch, speakers, display, webcam, etc.), CPU/GPU, local persistent storage, and network. Just enough to enable others to run OSes on top of this hypervisor.

If it tickles your fancy, then by all means use it to run a sand-boxed Unix. Undoubtedly someone will; witness the cool hack to run Linux in the browser, accomplished by writing an x86 emulator in Javascript (http://bellard.org/jslinux/).

However, such a hypervisor will also host more ambitious OSes, for example, platforms for persistent capability-secure peer-to-peer real-time collaborative end-use-scriptable augmented-reality environments. (again, trying to use word-associations to roughly sketch what I'm referring to, as I did earlier with "Croquet-Worlds-Frank-OMeta-whatnot").

Does this make my original question clearer?

Cheers,
Josh
Toby Watson
2011-06-09 17:56:24 UTC
Permalink
How about _recursive_ VM/JITs *beneath* the level at which HTML/JS is implemented.

So the "browser" that ships only supports this recursive VM.

HTML is an application of this that can be evolved by open source at
internet scale / time. Web pages can point at a specific HTML
implementation or a general redirector like google apis to get the
commonly agreed standard version.

Other containers/'plugins', Squeak, Flash, Java run as VMs, can run
their native bytecode/images but also, potentially, expose the VM
interface up again. Nesting VMs is useful also. Though you won't spare
the use-case any love, Flash video players often load multiple ads
SDKs, an arrangement that could benefit from isolation, i.e.
browser-more-like-OS.

If the top and bottom VM interfaces are the same then we can stack
them (as well as nesting them).

Base VM would have exokernel / NaCL like exposure of the native
capabilities of the device. Exokernel & FluxOS have some nifty tricks
to punch through layers so performance is not so impacted by stacking.

An intermediate VM layer could provide ISA / hardware abstraction so
that everything above that looks the same.
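One way to read "the top and bottom VM interfaces are the same" is that every layer implements one small interface and can host another layer through it. A minimal sketch of that recursive shape (all class names and the tracing capability are invented for illustration):

```python
# Minimal sketch of stackable VMs: the same interface top and bottom,
# so any layer can host another layer as its guest.

class VM:
    def run(self, program, inp):
        raise NotImplementedError

class BaseVM(VM):
    """Stands in for the exokernel-like bottom layer: executes directly."""
    def run(self, program, inp):
        return program(inp)

class TracingVM(VM):
    """An intermediate layer: same interface, adds one capability."""
    def __init__(self, host):
        self.host = host      # any object with the same run() interface
        self.trace = []

    def run(self, program, inp):
        self.trace.append(inp)
        return self.host.run(program, inp)

double = lambda x: x * 2
stack = TracingVM(TracingVM(BaseVM()))   # layers nest (and stack) freely
assert stack.run(double, 21) == 42
assert stack.trace == [21]
```

Because `TracingVM` both consumes and exposes the same `run` interface, the nesting depth is arbitrary; the exokernel-style layer-punching tricks mentioned above would be what keeps such stacking from costing a full indirection per layer.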

I re-read history of Smalltalk recently and was reminded of this from Alan,

'Bob Barton, the main designer of the B5000 and a professor at Utah
had said in one of his talks a few days earlier: "The basic principle
of recursive design is to make the parts have the same power as the
whole." For the first time I thought of the whole as the entire
computer and wondered why anyone would want to divide it up into
weaker things called data structures and procedures. Why not divide it
up into little computers, as time sharing was starting to? But not in
dozens. Why not thousands of them, each simulating a useful
structure?'

Toby
Josh Gargus
2011-06-09 18:10:26 UTC
Permalink
That all sounds very cool.

However, I don't think that it's feasible to try to ship something like this as standard in all browsers, if only for political reasons. It would be impossible to get Mozilla, Google, Apple, and Microsoft to agree on it.

That's what's cool about NaCl. It's minimal enough to be a feasible candidate for universal adoption. If it's adopted, then an ecosystem springs up with people inventing recursive exokernels to run in the browser.

Cheers,
Josh
BGB
2011-06-09 19:06:26 UTC
Permalink
Post by Josh Gargus
That all sounds very cool.
However, I don't think that it's feasible to try to ship something like this as standard in all browsers, if only for political reasons. It would be impossible to get Mozilla, Google, Apple, and Microsoft to agree on it.
That's what's cool about NaCl. It's minimal enough to be a feasible candidate for universal adoption. If it's adopted, then an ecosystem springs up with people inventing recursive exokernels to run in the browser.
Cheers,
Josh
I don't understand, though, why one needs "recursive exokernels"...

why not just "local virtual filesystems"?...

I guess there is always the issue, though, that if only a virtual
environment (say, x86) is provided, then about as soon as someone needs
scripting, they will build an interpreter or JIT on top of this (or
drag in an external one, say, CPython or Lua...), meaning recursive
interpretation overhead...

a partial solution could be to provide interpreters for higher-level
bytecodes (such as maybe Java ByteCode, or another higher-level
bytecode), or higher-level script facilities (JavaScript and eval) built
right into the core API. probably also an assembler, ...

or, possibly, some "high-level" features could be implemented as
ISA-extension hacks (x86 with optional built-in dynamic typing, OO
facilities, ...). such that people are less tempted to supply their own
(and further degrade performance).

or such...
Post by Josh Gargus
Post by Toby Watson
How about _recursive_ VM/JITs *beneath* the level that HTML/JS is implemented.
So the "browser" that ships only supports this recursive VM.
HTML is an application of this that can be evolved by open source at
internet scale / time. Web pages can point at a specific HTML
implementation or a general redirector like google apis to get the
commonly agreed standard version.
Other containers/'plugins', Squeak, Flash, Java run as VMs, can run
their native bytecode/images but also, potentially, expose the VM
interface up again. Nesting VMs is useful also. Though you won't spare
the use-case any love, Flash video players often load multiple ads
SDKs, an arrangement that could benefit from isolation, i.e.
browser-more-like-OS.
If the top and bottom VM interfaces are the same then we can stack
them (as well as nesting them).
Base VM would have exokernel / NaCL like exposure of the native
capabilities of the device. Exokernel& FluxOS have some nifty tricks
to punch through layers so performance is not so impacted by stacking.
An intermediate VM layer could provide ISA / hardware abstraction so
that everything above that looks the same.
I re-read history of Smalltalk recently and was reminded of this from Alan,
'Bob Barton, the main designer of the B5000 and a professor at Utah
had said in one of his talks a few days earlier: "The basic principal
of recursive design is to make the parts have the same power as the
whole." For the first time I thought of the whole as the entire
computer and wondered why anyone would want to divide it up into
weaker things called data structures and procedures. Why not divide it
up into little computers, as time sharing was starting to? But not in
dozens. Why not thousands of them, each simulating a useful
structure?'
Toby
Post by Josh Gargus
Post by BGB
Post by Josh Gargus
Post by Alan Kay
Hi Cornelius
There are lots of egregiously wrong things in the web design. Perhaps one of the simplest is that the browser folks have lacked the perspective to see that the browser is not like an application, but like an OS. i.e. what it really needs to do is to take in and run foreign code (including low level code) safely and coordinate outputs to the screen (Google is just starting to realize this with NaCl after much prodding and beating.)
I think everyone can see the implications of these two perspectives and what they enable or block
Some of the implications, anyway. The benefits of the OS-perspective are clear. Once it hits its stride, there will be no (technical) barriers to deploying the sorts of systems that we talk about here (Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is structured-enough to be indexable, mashupable, and so forth. It makes me wonder: is there a risk that the searchability, etc. of the web will be degraded by the appearance of a number of mutually-incompatible better-than-HTML web technologies? Probably not... in the worst case, someone who wants to be searchable can also publish in the "legacy" format.
However, can we do better than that? I guess the answer depends on which aspect of the status quo we're trying to improve on (searchability, mashups, etc). For search, there must be plenty of technologies that can improve on HTML by decoupling search-metadata from presentation/interaction (such as OpenSearch, mentioned elsewhere in this thread). Mashups seem harder... maybe it needs to happen organically as some of the newly-possible systems find themselves converging in some areas.
But I'm not writing because I know the answers, but rather the opposite. What do you think?
hmm... it is a mystery....
actually, a possibly relevant question here would be why Java applets largely fell on their face, but Flash largely took off (in all its uses from YouTube to "Punch The Monkey"...).
writing browser plug-ins...
and, for browser-as-OS, what exactly will this mean, technically?...
network-based filesystem and client-side binaries and virtual processes?...
like, say, if one runs a tiny sand-boxed Unix-like system inside the browser, then push or pull binary files, which are executed, and may perform tasks?...
This isn't quite what I had in mind. Perhaps "hypervisor" is better than "OS" to describe what I'm talking about, and I believe Alan too: a thin-as-possible platform that provides access to computing resources such as end-user I/O (mouse, multitouch, speakers, display, webcam, etc.), CPU/GPU, local persistent storage, and network. Just enough to enable others to run OSes on top of this hypervisor.
If it tickles your fancy, then by all means use it to run a sand-boxed Unix. Undoubtedly someone will; witness the cool hack to run Linux in the browser, accomplished by writing an x86 emulator in Javascript (http://bellard.org/jslinux/).
However, such a hypervisor will also host more ambitious OSes, for example, platforms for persistent capability-secure peer-to-peer real-time collaborative end-use-scriptable augmented-reality environments. (again, trying to use word-associations to roughly sketch what I'm referring to, as I did earlier with "Croquet-Worlds-Frank-OMeta-whatnot").
Does this make my original question clearer?
Cheers,
Josh
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Josh Gargus
2011-06-09 19:20:25 UTC
Permalink
Post by BGB
Post by Josh Gargus
That all sounds very cool.
However, I don't think that it's feasible to try to ship something like this as standard in all browsers, if only for political reasons. It would be impossible to get Mozilla, Google, Apple, and Microsoft to agree on it.
That's what's cool about NaCl. It's minimal enough to be a feasible candidate for universal adoption. If it's adopted, then an ecosystem springs up with people inventing recursive exokernels to run in the browser.
Cheers,
Josh
I don't understand, though, why one needs "recursive exokernels"...
You're taking me too literally. My point is that the first goal is to get widespread adoption of something like NaCl that is good enough to host a POSIX environment, a recursive exokernel, or whatever. Once that first goal is achieved, well, "let a hundred flowers bloom".

Cheers,
Josh
BGB
2011-06-09 19:46:38 UTC
Permalink
Post by Josh Gargus
Post by BGB
Post by Josh Gargus
That all sounds very cool.
However, I don't think that it's feasible to try to ship something like this as standard in all browsers, if only for political reasons. It would be impossible to get Mozilla, Google, Apple, and Microsoft to agree on it.
That's what's cool about NaCl. It's minimal enough to be a feasible candidate for universal adoption. If it's adopted, then an ecosystem springs up with people inventing recursive exokernels to run in the browser.
Cheers,
Josh
I don't understand, though, why one needs "recursive exokernels"...
You're taking me too literally. My point is that the first goal is to get widespread adoption of something like NaCl that is good enough to host a POSIX environment, a recursive exokernel, or whatever. Once that first goal is achieved, well, "let a hundred flowers bloom".
yes, ok.

FWIW, I suspect I tend to be fairly literal/concrete minded in general
(although I can still imagine lots of stuff as well...). however, I
guess I am just not very good at dealing with abstract thinking/concepts/...


but, yeah, widespread Unix-like NaCl would be cool (and personally less
off-putting than the idea of having to go do apps with all
CGI+JavaScript or by using Flash... although before I saw something
where Adobe was showing off a C -> AVM2 compiler, and demoing Quake
running on top of Flash...).

or such...
BGB
2011-06-09 18:42:12 UTC
Permalink
Post by Josh Gargus
Post by BGB
Post by Josh Gargus
Post by Alan Kay
Hi Cornelius
There are lots of egregiously wrong things in the web design. Perhaps one of the simplest is that the browser folks have lacked the perspective to see that the browser is not like an application, but like an OS. i.e. what it really needs to do is to take in and run foreign code (including low level code) safely and coordinate outputs to the screen (Google is just starting to realize this with NaCl after much prodding and beating.)
I think everyone can see the implications of these two perspectives and what they enable or block
Some of the implications, anyway. The benefits of the OS-perspective are clear. Once it hits its stride, there will be no (technical) barriers to deploying the sorts of systems that we talk about here (Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is structured-enough to be indexable, mashupable, and so forth. It makes me wonder: is there a risk that the searchability, etc. of the web will be degraded by the appearance of a number of mutually-incompatible better-than-HTML web technologies? Probably not... in the worst case, someone who wants to be searchable can also publish in the "legacy" format.
However, can we do better than that? I guess the answer depends on which aspect of the status quo we're trying to improve on (searchability, mashups, etc). For search, there must be plenty of technologies that can improve on HTML by decoupling search-metadata from presentation/interaction (such as OpenSearch, mentioned elsewhere in this thread). Mashups seem harder... maybe it needs to happen organically as some of the newly-possible systems find themselves converging in some areas.
But I'm not writing because I know the answers, but rather the opposite. What do you think?
hmm... it is a mystery....
actually, a possibly relevant question here would be why Java applets largely fell on their face, but Flash largely took off (in all its uses from YouTube to "Punch The Monkey"...).
writing browser plug-ins...
and, for browser-as-OS, what exactly will this mean, technically?...
network-based filesystem and client-side binaries and virtual processes?...
like, say, if one runs a tiny sand-boxed Unix-like system inside the browser, then push or pull binary files, which are executed, and may perform tasks?...
This isn't quite what I had in mind. Perhaps "hypervisor" is better than "OS" to describe what I'm talking about, and I believe Alan too: a thin-as-possible platform that provides access to computing resources such as end-user I/O (mouse, multitouch, speakers, display, webcam, etc.), CPU/GPU, local persistent storage, and network. Just enough to enable others to run OSes on top of this hypervisor.
If it tickles your fancy, then by all means use it to run a sand-boxed Unix. Undoubtedly someone will; witness the cool hack to run Linux in the browser, accomplished by writing an x86 emulator in Javascript (http://bellard.org/jslinux/).
interesting...
less painfully slow than I would have expected from the description...

I wasn't thinking exactly like "run an emulator, run OS in emulator",
but more like, a browser plugin which looked and acted similar to a
small Unix (with processes and so on, and a POSIX-like API, and a
filesystem), but would likely be different in that it would "mount"
content from the website as part of its local filesystem (probably
read-only by default), and possibly each process could have its own
local VFS.

screen/input/... would be provided by APIs.


granted, from its description, I think NaCl may already be sort of like
this, but I haven't really messed with it.


as noted before, I wrote an x86 interpreter/emulator before, which
exposed a POSIX-like set of core APIs.

however, the "kernel" was actually just running inside the interpreter
(so "system calls" just sort of broke out of the interpreter, and were
handled directly by native code).

hence, this interpreter only ran Ring-3 code.
it could have also done Ring-0 stuff, but this would have been more
effort, and would require a more "authentic" emulation of x86 (since I
was only dealing with Ring-3, many variations existed from "true" x86,
such as the segmentation system being mostly absent, and the use of
"spans" for the MMU rather than pages or page-tables). also, real-mode
did not exist...

granted, this interpreter ran a bit slower than native code, but mostly
this was going into operations like (internal) loading/storing from
various registers (especially EAX, ECX, and EDX, IIRC...), and generally
doing memory word loads/stores (for sake of being "generic", I used
byte-for-byte loads and shifting to implement these).

initially, the "main switch" was a big performance killer, but
then I switched partly to threaded code, which made this problem mostly
go away. in this case, threaded code means that operations were handled
by calling directly through function pointers.
there was effectively a cache of pre-decoded instructions (a big hash
table holding structs), each with their own function pointers
(instruction handlers). any detected SMC (self-modifying-code) worked by
simply flushing the entire hash.

likely, some micro-optimization could be done, such as handling
threaded-code more efficiently, or avoiding internal switches for
register loads/stores (for example, DWORD registers being loaded/stored
directly by an index number, ...). but, it worked at the time...

more aggressively, one could JIT to native code (IOW: "dynamic
translation"...).
Post by Josh Gargus
However, such a hypervisor will also host more ambitious OSes, for example, platforms for persistent capability-secure peer-to-peer real-time collaborative end-use-scriptable augmented-reality environments. (again, trying to use word-associations to roughly sketch what I'm referring to, as I did earlier with "Croquet-Worlds-Frank-OMeta-whatnot").
Does this make my original question clearer?
ok.

what exactly this would be like is less obvious.
I personally have a much easier time imagining what "Unix in a browser"
would look like.


with just a plain OS in the browser though, one could run apps...

then one could have 3D mostly by having this virtual OS expose OpenGL
(or GL ES).

possibly, for sake of simplicity, the "app" could always use OpenGL,
just its "text mode" would be using OpenGL to draw all the characters.

then maybe some special API calls for handling input, and "enabling" GL
(disabling drawing the console UI).


I wondered before about the problem of what to do about client-program
memory use, but it seems like there is a nifty solution: if a limit is
exceeded, allocation calls fail (say, each process is limited to a
certain amount of memory).

possibly, a given "app" is also limited to a certain maximum number of
child processes, at which point "fork()" calls will fail (or send out a
"SIGKILL" or similar to all processes belonging to the parent app).


or such...
Josh Gargus
2011-06-09 19:15:57 UTC
Permalink
Post by BGB
Post by Josh Gargus
OSes on top of this hypervisor.
If it tickles your fancy, then by all means use it to run a sand-boxed Unix. Undoubtedly someone will; witness the cool hack to run Linux in the browser, accomplished by writing an x86 emulator in Javascript (http://bellard.org/jslinux/).
interesting...
less painfully slow than I would have expected from the description...
I wasn't thinking exactly like "run an emulator, run OS in emulator", but more like, a browser plugin which looked and acted similar to a small Unix (with processes and so on, and a POSIX-like API, and a filesystem), but would likely be different in that it would "mount" content from the website as part of its local filesystem (probably read-only by default), and possibly each process could have its own local VFS.
I know. I was just saying that if someone has written a Javascript x86 emulator to run Linux in the browser, that it's a near-certainty that someone will eventually use NaCl to host a POSIX-like environment in the browser.

Cheers,
Josh
Chris Warburton
2011-06-10 14:33:17 UTC
Permalink
Post by BGB
interesting...
less painfully slow than I would have expected from the description...
I wasn't thinking exactly like "run an emulator, run OS in emulator",
but more like, a browser plugin which looked and acted similar to a
small Unix (with processes and so on, and a POSIX-like API, and a
filesystem), but would likely be different in that it would "mount"
content from the website as part of its local filesystem (probably
read-only by default), and possibly each process could have its own
local VFS.
screen/input/... would be provided by APIs.
<snip>
Post by BGB
Post by Josh Gargus
However, such a hypervisor will also host more ambitious OSes, for
example, platforms for persistent capability-secure peer-to-peer
real-time collaborative end-use-scriptable augmented-reality
environments. (again, trying to use word-associations to roughly
sketch what I'm referring to, as I did earlier with
"Croquet-Worlds-Frank-OMeta-whatnot").
Does this make my original question clearer?
ok.
what exactly this would be like is less obvious.
I personally have a much easier time imagining what "Unix in a browser"
would look like.
with just a plain OS in the browser though, one could run apps...
then one could have 3D mostly by having this virtual OS expose OpenGL
(or GL ES).
possibly, for sake of simplicity, the "app" could always use OpenGL,
just its "text mode" would be using OpenGL to draw all the characters.
then maybe some special API calls for handling input, and "enabling" GL
(disabling drawing the console UI).
I wondered before about the problem of what to do about client-program
memory use, but it seems like there is a nifty solution: if a limit is
exceeded, allocation calls fail (say, each process is limited to a
certain amount of memory).
possibly, a given "app" is also limited to a certain maximum number of
child processes, at which point "fork()" calls will fail (or send out a
"SIGKILL" or similar to all processes belonging to the parent app).
or such...
I've been following this discussion and it seems there are a lot of very
interesting ideas floating around, but I'm afraid I'm finding a lot of
it to be getting rather bogged down in details like plugins, processes,
etc. Forgive me if I'm missing something fundamental here, but to me
Alan's contrast of "browser as application" vs "browser as OS" can be
roughly translated to "the browser is the 'real' application, the pages
are just data it reads" vs "the pages are the 'real' applications, the
browser just implements the lower levels of the stack".

The latter viewpoint is gradually taking over now, where those "lower
levels" are currently CPU (Javascript engine), storage (cookies, HTML5
local SQL, ...), networking (XMLHttpRequest, WebSockets, ...), display
(DOM, canvas, WebGL, SVG, ...), IO interrupts (events) and so on.

Can I ask how this is not an OS?

Regards,
Chris Warburton
BGB
2011-06-10 18:09:13 UTC
Permalink
Post by Chris Warburton
Post by BGB
interesting...
less painfully slow than I would have expected from the description...
I wasn't thinking exactly like "run an emulator, run OS in emulator",
but more like, a browser plugin which looked and acted similar to a
small Unix (with processes and so on, and a POSIX-like API, and a
filesystem), but would likely be different in that it would "mount"
content from the website as part of its local filesystem (probably
read-only by default), and possibly each process could have its own
local VFS.
screen/input/... would be provided by APIs.
<snip>
Post by BGB
Post by Josh Gargus
However, such a hypervisor will also host more ambitious OSes, for
example, platforms for persistent capability-secure peer-to-peer
real-time collaborative end-use-scriptable augmented-reality
environments. (again, trying to use word-associations to roughly
sketch what I'm referring to, as I did earlier with
"Croquet-Worlds-Frank-OMeta-whatnot").
Does this make my original question clearer?
ok.
what exactly this would be like is less obvious.
I personally have a much easier time imagining what "Unix in a browser"
would look like.
with just a plain OS in the browser though, one could run apps...
then one could have 3D mostly by having this virtual OS expose OpenGL
(or GL ES).
possibly, for sake of simplicity, the "app" could always use OpenGL,
just its "text mode" would be using OpenGL to draw all the characters.
then maybe some special API calls for handling input, and "enabling" GL
(disabling drawing the console UI).
I wondered before about the problem of what to do about client-program
memory use, but it seems like there is a nifty solution: if a limit is
exceeded, allocation calls fail (say, each process is limited to a
certain amount of memory).
possibly, a given "app" is also limited to a certain maximum number of
child processes, at which point "fork()" calls will fail (or send out a
"SIGKILL" or similar to all processes belonging to the parent app).
or such...
I've been following this discussion and it seems there are a lot of very
interesting ideas floating around, but I'm afraid I'm finding a lot of
it to be getting rather bogged down in details like plugins, processes,
etc. Forgive me if I'm missing something fundamental here, but to me
Alan's contrast of "browser as application" vs "browser as OS" can be
roughly translated to "the browser is the 'real' application, the pages
are just data it reads" vs "the pages are the 'real' applications, the
browser just implements the lower levels of the stack".
The latter viewpoint is gradually taking over now, where those "lower
levels" are currently CPU (Javascript engine), storage (cookies, HTML5
local SQL, ...), networking (XMLHttpRequest, WebSockets, ...), display
(DOM, canvas, WebGL, SVG, ...), IO interrupts (events) and so on.
Can I ask how this is not an OS?
errm...

I don't know...


as for differences from an OS:
it doesn't boot up with the hardware;
it doesn't have an obvious kernel (say, a core which runs the processor
in Ring-0 or similar);
it doesn't provide process management or system calls (except maybe
builtin functions in JavaScript?...);
...

there is not a whole lot that seems in common between a browser and an OS.

yes, there is Chrome OS, but I sort of suspect this will (probably) fall
on its face (vs... say... installing real Linux on the netbooks...).


there is a lot more in common with it being an application:
it resides in "Program Files" or "/usr/bin" or similar;
it starts up after the main OS, and usually by direct action of the user;
it "lives" within a window, or multiple windows (it itself provides the UI);
...

it could be compared to a "platform" though, say similar to .NET or the
JDK, which both have OS-like and application-like aspects:
they run on top of the underlying OS;
they sit around in the background (not generally directly visible to users);
they launch other programs on top of themselves;
...


otherwise, one would get into issues like, say, "how is a modern game
engine, such as Steam / Source, not an OS?..." (yes, Steam and Source
are technically separate product-wise, but share much of the same
underlying architecture, and targeting apps to Steam often involves
summoning up Source as a sort of a slave backend, such as for VFS and
graphics/user-input services). when Source is being used as a game, it
may summon up maps, which may have a lot going on internally, and may
use mods, many of which used scripts (often Lua), ...

however, IMO, they are not really an OS, more 'platforms'...

and, many people would likely find the idea of booting a computer
directly into the Source Engine to be a little silly... (start
computer... start playing Portal 2...).

Steam could make a little more sense, since Steam is basically a
glorified program downloader and launcher (and, interestingly, does some
of its content presentation via HTML, IIRC).

"Valve Steam OS, coming soon..." but, then this would likely behave more
like a hybrid of the XBox360 or PS3?...


my own personal game-engine project also includes:
VFS / filesystem facilities (mounting/unmounting things, several FS
"drivers", ...);
ability to launch/execute "program-like" scripts;
compiler facilities (including a C compiler, also the scripting VM);
a text editing component (Notepad-like);
an internal shell/console interface (Bash-like normally, also can
directly evaluate script fragments);
GUI widget facilities (actually, IIRC, the GUI still supports a
zooming/panning UI as well, but for typical uses fixed-place UI elements
have been more useful);
...

but it is not really an OS either IMO...
maybe an application, maybe eventually a platform...

it would be silly/strange though if every application with a
VFS/shell/compiler/program-launcher/... were branded an OS.

"3DS Max... the OS..."


or something...
Max OrHai
2011-06-10 19:45:36 UTC
Permalink
On Fri, Jun 10, 2011 at 11:09 AM, BGB <***@gmail.com> wrote:
< snip ... >
Post by BGB
there is not a whole lot that seems in common between a browser and an OS.
yes, there is Chrome OS, but I sort of suspect this will (probably) fall on
its face (vs... say... installing real Linux on the netbooks...).
BGB, you're being waay too literal-minded! This thread was (I thought) about
architecture, rather than implementation details of current technologies.

Chrome OS is a case in point, and FWIW, I expect it to succeed, maybe even
beyond Android, because it's been carefully built to give a seamless,
painless end-user experience. That's what most people want. Almost everyone
who casually uses a computer day-to-day doesn't give a damn about how
"powerful" or configurable it is. They just want it to work, get out of
their way, and not irritate them unnecessarily. Increasingly, most people
spend most of their computer time in a browser anyway. For quite a few, that
is (or easily could be) *all* of their time. Chrome OS just trims away
several layers of what these users would consider pointless complexity. As
others here have mentioned, the Web has *already* become the de-facto
universal communications medium.

The interesting question to me is, how do we help ordinary people (like, you
know, children) *use* this powerful new medium to learn, experiment, express
and communicate powerful ideas? As far as this question is concerned Chrome
OS and the Lively Kernel bring us back up to almost the level of Smalltalk
(plus or minus some semantic noise from Javascript, but hey). Surely we can
do better...

-- Max
BGB
2011-06-10 20:19:52 UTC
Permalink
Post by Max OrHai
< snip ... >
there is not a whole lot that seems in common between a browser and an OS.
yes, there is Chrome OS, but I sort of suspect this will
(probably) fall on its face (vs... say... installing real Linux on
the netbooks...).
BGB, you're being waay too literal-minded! This thread was (I thought)
about architecture, rather than implementation details of current
technologies.
ermm... I think this is my natural tendency, and may have something to
do with psychology...
http://www.personalitypage.com/portraits.html
http://www.personalitypage.com/html/ESTP.html
Post by Max OrHai
Chrome OS is a case in point, and FWIW, I expect it to succeed, maybe
even beyond Android, because it's been carefully built to give a
seamless, painless end-user experience. That's what most people want.
Almost everyone who casually uses a computer day-to-day doesn't give a
damn about how "powerful" or configurable it is. They just want it to
work, get out of their way, and not irritate them unnecessarily.
Increasingly, most people spend most of their computer time in a
browser anyway. For quite a few, that is (or easily could be) /all/ of
their time. Chrome OS just trims away several layers of what these
users would consider pointless complexity. As others here have
mentioned, the Web has /already/ become the de-facto universal
communications medium.
dunno...

I got a netbook before, and it came with Xandros...
I was not very impressed, and soon enough ended up replacing it with
Ubuntu...

I actually spend a lot more of my time in the shell though (usually
either Bash or CMD...).
Post by Max OrHai
The interesting question to me is, how do we help ordinary people
(like, you know, children) /use/ this powerful new medium to learn,
experiment, express and communicate powerful ideas? As far as this
question is concerned Chrome OS and the Lively Kernel bring us back up
to almost the level of Smalltalk (plus or minus some semantic noise
from Javascript, but hey). Surely we can do better...
dunno about kids now...

when I got started, it was mostly with MS-DOS and QBasic... (and, there
was Win 3.x and Win 95, but generally there wasn't nearly as much
"interesting"/"relevant" in Windows at the time, as most of the "cool
stuff" was in DOS, and if one tried using it from Windows their computer
would generally crash anyways...).

mostly, it all started out as lots of fiddling with stuff...

most of this was in the days where internet was dial-up and generally
exclusive to the computer owned by one's dad...

oh yays, things were much better with later getting Ethernet in the house...


later, I migrated to C (first TurboC, later DJGPP), and following this,
spent a number of years using Linux (I mostly skipped over Win98 for
being "teh suck"...).

ended up migrating mostly back to Windows with Win2K and WinXP though,
and have been mostly back in Windows land since (mostly for sake of
better driver support and more availability of games...).

my recent discovery of being able to use VMware to run Linux rather than
dual-booting, and VMware being a lot more convenient (albeit the lack of
HW acceleration in VMware is lame...).


or such...
Max OrHai
2011-06-10 20:44:32 UTC
Permalink
Well, INTP here, so at least we have *some* common ground.

For me it was:
Apple II BASIC -->
"Classic" Macintosh with HyperCard -->
BASH / C / Python on Linux -->
disgusted with computers entirely and more or less Luddite for
about 5 years -->
Blackberry OS on my phone -->
Smalltalk, Scheme, Haskell, and JavaScript/HTML on
slightly better Linuxes -->
Same stuff on Mac OS X, mostly. Also, NetLogo.

I write this from work, where I'm juggling Fedora and WinXP, neither of
which quite "just work" for the fairly simple tasks I expect of them.

-- Max
Post by Max OrHai
< snip ... >
Post by BGB
there is not a whole lot that seems in common between a browser and an OS.
yes, there is Chrome OS, but I sort of suspect this will (probably) fall
on its face (vs... say... installing real Linux on the netbooks...).
BGB, you're being waay too literal-minded! This thread was (I thought)
about architecture, rather than implementation details of current
technologies.
ermm... I think this is my natural tendency, and may have something to do with
psychology...
http://www.personalitypage.com/portraits.html
http://www.personalitypage.com/html/ESTP.html
Chrome OS is a case in point, and FWIW, I expect it to succeed, maybe
even beyond Android, because it's been carefully built to give a seamless,
painless end-user experience. That's what most people want. Almost everyone
who casually uses a computer day-to-day doesn't give a damn about how
"powerful" or configurable it is. They just want it to work, get out of
their way, and not irritate them unnecessarily. Increasingly, most people
spend most of their computer time in a browser anyway. For quite a few, that
is (or easily could be) *all* of their time. Chrome OS just trims away
several layers of what these users would consider pointless complexity. As
others here have mentioned, the Web has *already* become the de-facto
universal communications medium.
dunno...
I got a netbook before, and it came with Xandros...
I was not very impressed, and soon enough ended up replacing it with
Ubuntu...
I actually spend a lot more of my time in the shell though (usually either
Bash or CMD...).
The interesting question to me is, how do we help ordinary people (like,
you know, children) *use* this powerful new medium to learn, experiment,
express and communicate powerful ideas? As far as this question is concerned
Chrome OS and the Lively Kernel bring us back up to almost the level of
Smalltalk (plus or minus some semantic noise from Javascript, but hey).
Surely we can do better...
dunno about kids now...
when I got started, it was mostly with MS-DOS and QBasic... (and, there was
Win 3.x and Win 95, but generally there wasn't nearly as much
"interesting"/"relevant" in Windows at the time, as most of the "cool stuff"
was in DOS, and if one tried using it from Windows their computer would
generally crash anyways...).
mostly, it all started out as lots of fiddling with stuff...
most of this was in the days where internet was dial-up and generally
exclusive to the computer owned by one's dad...
oh yays, things were much better with later getting Ethernet in the house...
later, I migrated to C (first TurboC, later DJGPP), and following this,
spent a number of years using Linux (I mostly skipped over Win98 for being
"teh suck"...).
ended up migrating mostly back to Windows with Win2K and WinXP though, and
have been mostly back in Windows land since (mostly for sake of better
driver support and more availability of games...).
my recent discovery of being able to use VMware to run Linux rather than
dual-booting, and VMware being a lot more convenient (albeit the lack of HW
acceleration in VMware is lame...).
or such...
BGB
2011-06-10 22:44:16 UTC
Permalink
(sorry, I don't know if this belongs on-list or not...).
Well, INTP here, so at least we have /some/ common ground.
yeah... I think I generally get along well enough with most people, in
general...

well, except "Q's", which are basically people who act sort of like Q
from "Star Trek" and generally start being all condescending about how
"stupid" I supposedly am, ...

well, and females... things are just not generally prone to go well in
this area... nothing immoral, mostly just they just tend to either be
scared off, or things quickly get really awkward, so things tend to go
nowhere...

well, me and "working with people", sadly, doesn't usually go well...
although I guess, as great as the idea of "people working together for a
common goal and a common good" may seem, cooperative projects soon turn
into lots of arguing and people stepping all over each other.

so, it has usually just been me by myself, mostly doing my own
thing...
Apple II BASIC -->
"Classic" Macintosh with HyperCard -->
BASH / C / Python on Linux -->
disgusted with computers entirely and more or less Luddite
for about 5 years -->
Blackberry OS on my phone -->
Smalltalk, Scheme, Haskell, and JavaScript/HTML on
slightly better Linuxes -->
Same stuff on Mac OS X, mostly. Also, NetLogo.
I write this from work, where I'm juggling Fedora and WinXP, neither
of which quite "just work" for the fairly simple tasks I expect of them.
yeah...

well, as noted before:
MS-DOS, QBasic, ...

I also ran Win 3.11 and Win32s, IIRC because I think for some reason I
didn't really like Win95, and Win32s ran many Win95 apps.

later on, I jumped ship to Linux, which generally forced a complete
switch over to C for coding (I could no longer use QBasic, being it was
Linux and all...).


later on, came across Scheme (late 90s), and at first used Guile, and
then ended up doing my own implementation, partly because at the time
Guile did stuff that I personally found really annoying (generally, it
was hard-coded to call "abort()" at the first sign of trouble, ...).

by a few years later (2003), this had turned unmaintainable (complex and
nasty code), and so I dropped the VM and most code which was built
around it. at the time, I had figured "well, I am just going to write
crap in plain C...".

then later (maybe at most a few months) was like, "doing everything in
plain C is lame..."

at first, I implemented a PostScript variant, then realized that it was
unusably terrible (trying to write code in PS, "OMG this sucks...").
basically, tokens were parsed and converted fairly directly into bytecode.

I also ran across JavaScript, and was like "oh wow, this is cool...".
so, I threw together a few things, and then had my own makeshift
JavaScript imitation (I first called it "PDScript", but later renamed it
"BGBScript"...). this was in early 2004.

actually, parts that went into it originally:
most of the machinery from the original PostScript interpreter (this
formed the lower levels);
a lot of XML-processing code (a DOM-like system);
a recursive-descent parser, doing a JS-like syntax (parsing directly
into XML nodes).

so, basically: BGBScript -> XML -> PostScript (sort of...)
the GC was also conservative mark/sweep with raw pointers.
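The parse-directly-into-XML-nodes approach can be sketched roughly like this (a minimal Python illustration using a precedence-climbing recursive-descent parser; the grammar, token set, and element names are my assumptions, not the actual BGBScript code):

```python
# Minimal sketch of a recursive-descent parser that builds XML nodes
# directly as its AST, as described above. Grammar and names are
# illustrative assumptions, not BGBScript's real implementation.
import re
import xml.etree.ElementTree as ET

TOKEN = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|(.))")

def tokenize(src):
    for num, name, op in TOKEN.findall(src):
        if num:
            yield ("num", num)
        elif name:
            yield ("name", name)
        elif op.strip():
            yield ("op", op)
    yield ("eof", "")

class Parser:
    def __init__(self, src):
        self.toks = tokenize(src)
        self.cur = next(self.toks)

    def eat(self):
        tok, self.cur = self.cur, next(self.toks)
        return tok

    def primary(self):
        kind, text = self.eat()
        node = ET.Element(kind)      # e.g. <num>3</num>, <name>x</name>
        node.text = text
        return node

    def expr(self, min_prec=0):
        # precedence climbing: '+' binds looser than '*'
        prec = {"+": 1, "*": 2}
        left = self.primary()
        while self.cur[0] == "op" and prec.get(self.cur[1], 0) > min_prec:
            op = self.eat()[1]
            node = ET.Element("binop", {"op": op})
            node.append(left)
            node.append(self.expr(prec[op]))
            left = node
        return left

ast = Parser("x + 3 * y").expr()
print(ET.tostring(ast, encoding="unicode"))
# prints <binop op="+"><name>x</name><binop op="*"><num>3</num><name>y</name></binop></binop>
```

The XML tree doubles as a DOM-like AST that later passes (or an XML-processing library) can walk directly.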

it was also "teh slow" and spewed huge amounts of garbage, which was not
good with a slow conservative GC.


a later partial rewrite (in 2006) re-incorporated a number of parts from
the original Scheme VM (and made a number of language-level changes),
and switched over to using S-Expressions as the internal representation
for the ASTs, as well as re-using a variant of the Scheme-VM's GC
(precise mark/sweep with reference-counting and tagged references).
the 2006 VM also had a working JIT.

in 2007, a C compiler was written, which switched back to XML for the
ASTs (it was built more from code from the 2004 BGBScript
implementation). the initial motivation was mostly that
dynamically-compiled C could probably interface much better with native
C. but, the compiler was very slow and buggy...

in 2008, BGBScript was partly rewritten again, mostly switching back to
the other GC (conservative mark/sweep), mostly as the precise-GC was
painful to work with. sadly, this broke the JIT, and made it all a bit
slower, and I have not (as of yet) fixed it (the interpreter is fast
enough...). stuck with S-Expressions for the ASTs as well.
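A toy illustration of the mark/sweep side of this (a Python sketch; it leaves out the reference counting and tagged references mentioned above, and all names are my own):

```python
# Toy precise mark/sweep collector, illustrating the general shape of
# the GC scheme discussed above. Reference counting and tagged
# pointers are omitted; all names here are illustrative.
class Cell:
    def __init__(self, *children):
        self.children = list(children)
        self.marked = False

heap = []

def alloc(*children):
    cell = Cell(*children)
    heap.append(cell)
    return cell

def mark(cell):
    if cell.marked:
        return
    cell.marked = True
    for child in cell.children:
        mark(child)

def collect(roots):
    """Mark everything reachable from the roots, sweep the rest."""
    for r in roots:
        mark(r)
    live = [c for c in heap if c.marked]
    for c in live:
        c.marked = False        # reset mark bits for the next cycle
    heap[:] = live
    return len(live)

a = alloc()
b = alloc(a)
c = alloc()                 # unreachable garbage
assert collect([b]) == 2    # a and b survive, c is swept
```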

and, in early 2010, added a bunch of spiffy new FFI stuff (mostly to
eliminate most of the boilerplate needed to call into C code...). the
FFI is itself partly built on parts of the C compiler though.

late 2010/early 2011, tried to make a spiffy new VM and a new language
(BGBScript2), but this effort fell on its face (too much effort and not
getting working fast enough), and I later just reincorporated a lot of
the planned features back into BGBScript.

recently, have mostly been adding misc features, fixing bugs, ...


but, beyond this, I do a fair amount of other stuff (3D stuff).

sadly, none of it is terribly compelling, and none of this provides a
source of income (eventually needing a job is an issue, I don't know if
anyone will pay me for these sorts of things...).


or such...
Craig Latta
2011-06-10 22:00:15 UTC
Permalink
Post by Chris Warburton
Can I ask how this is not an OS?
Operating systems have more entertaining failure modes... if a
really bad crash can render the hardware unbootable, it's an operating
system. :)


-C

--
Craig Latta
www.netjam.org/resume
+31 6 2757 7177
+ 1 415 287 3547
Casey Ransberger
2011-06-10 22:44:15 UTC
Permalink
Hahaha, this is it exactly!

Perpendicular, but a poignant friend/mentor of mine said "real software engineering hasn't emerged because there aren't enough people dying yet."

He said that after I made my bid on what the difference is. My angle was: the difference between software and engineering is just that when the real bridge you designed falls over with people on it, you probably won't work again, whereas in software we just apologize to the users and ship a nice hotfix for them.

In any event it seems that he and I agree that the difference is usually one of consequence, or lack thereof.

I must tip my hat, however, to Alan's argument that we haven't even found our arches yet. This just resonates with me, especially after stomaching all of this best-practice-as-religion crap in industry; I really want more evidence that we haven't got a clue what we're doing yet, because it would be lovely to dispel the myth that we do.
Post by Craig Latta
Post by Chris Warburton
Can I ask how this is not an OS?
Operating systems have more entertaining failure modes... if a
really bad crash can render the hardware unbootable, it's an operating
system. :)
-C
--
Craig Latta
www.netjam.org/resume
+31 6 2757 7177
+ 1 415 287 3547
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Bert Freudenberg
2011-06-09 16:31:51 UTC
Permalink
"NaCl" is not a perfect solution, if anything, because, say, x86 NaCl apps don't work on x86-64 or ARM. nicer would be able to be able to run it natively, if possible, or JIT it to the native ISA if not.
Work is underway on "PNaCl" which uses LLVM bitcode files as platform-independent format for executables:

http://www.chromium.org/nativeclient/pnacl/

- Bert -
Julian Leviston
2011-06-09 09:56:14 UTC
Permalink
Post by Josh Gargus
However, can we do better than that? I guess the answer depends on which aspect of the status quo we're trying to improve on (searchability, mashups, etc). For search, there must be plenty of technologies that can improve on HTML by decoupling search-metadata from presentation/interaction (such as OpenSearch, mentioned elsewhere in this thread). Mashups seem harder... maybe it needs to happen organically as some of the newly-possible systems find themselves converging in some areas.
But I'm not writing because I know the answers, but rather the opposite. What do you think?
I'm left wondering about the Adapter design pattern... could it be adapted to apply here? Can we take OMeta, which is basically the "adapter pattern to end all adapter patterns" and apply it to the web, and end up with two inter-communicable network protocols?

Julian.
Cornelius Toole
2011-06-09 16:26:55 UTC
Permalink
Post by Josh Gargus
Some of the implications, anyway. The benefits of the OS-perspective are
clear. Once it hits its stride, there will be no (technical) barriers to
deploying the sorts of systems that we talk about here
(Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool
things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is
structured-enough to be indexable, mashupable, and so forth. It makes me
wonder: is there a risk that the searchability, etc. of the web will be
degraded by the appearance of a number of mutually-incompatible
better-than-HTML web technologies? Probably not... in the worst case,
someone who wants to be searchable can also publish in the "legacy" format.
Will the web be degraded by the appearance (or should I say proliferation)
of mutually-incompatible, but better than HTML technologies?

First off I would ask: better in what way? From a user experience POV, I
think when people say 'better' they mean richer interactivity, which
implies better graphical capabilities, access to special hardware (e.g.
camera, mic, accelerometer, GPS, GPUs, GPGPU, etc.), faster startup,
robustness against network failure and so on.

Up until now, and maybe for some time into the future, the tradeoff of the
web as computing platform versus OS-native ones has been about generality
versus optimizability as enabled by resource specialization (or some such
related thing). Some use cases map well to the general, others not. Only
within the last 3 years have we seen mass-market deployment and use of
Internet-scale software not entirely based on HTML/JS/CSS client
technologies. This has been mostly in the form of native mobile apps. But
these are still web apps; many of them still use the Web as connector (e.g.
HTTP), but the UI is realized using OS-native frameworks. And so what we
often lose is data transparency and portability. For instance, the Our
Choice interactive book app on iOS looks and feels great, but it's worse than
the web in that I cannot even copy text from it. It, like many of the
non-ePub digital publications, is just an archive of images and audio-video
content pre-baked into a handful of layouts.

It's not that non-HTML client technologies degrade the web in and of itself,
take PDF for instance. Many PDF documents are linkable and searchable on the
web. But this is because software to read PDF is widely deployed, which was
enabled by a widespread access to the PDF standard. I think we can mitigate
the opacity introduced by non-HTML client technologies if we expand the ways
in which we implement links. Imagine encapsulating a reference to the
computation (or its type) that would resolve a less-transparent data format.
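A rough sketch of what such a computation-carrying link might look like (entirely hypothetical: the `clink` scheme, the parameter names, and the decoder registry are invented here for illustration):

```python
# Hypothetical "computational link": a URI that names both the opaque
# resource and the computation (decoder) able to resolve it into
# something transparent. Scheme and parameter names are invented.
from urllib.parse import urlencode, urlparse, parse_qs

def make_link(resource, decoder, version):
    query = urlencode({"decoder": decoder, "v": version})
    return f"clink:{resource}?{query}"

def resolve(link, registry):
    """Look up the named decoder computation and apply it to the resource."""
    parsed = urlparse(link)
    params = parse_qs(parsed.query)
    decoder = registry[(params["decoder"][0], params["v"][0])]
    return decoder(parsed.path)

# toy registry: (decoder name, version) -> callable
registry = {("pdf-text", "1"): lambda res: f"text extracted from {res}"}

link = make_link("report.pdf", "pdf-text", "1")
print(resolve(link, registry))   # prints "text extracted from report.pdf"
```

A crawler that understands the scheme could then index the decoded representation rather than the opaque bytes.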

Post by Josh Gargus
Probably not... in the worst case, someone who wants to be searchable can
also publish in the "legacy" format.
The 'legacy' format is the point. I would say that the web isn't 'legacy',
but what makes legacy systems visible. If the Internet is a world of many
diverse islands of computational and network resources, the Web architecture
defines the languages for these islands to communicate.

The issues concerning web client UI rendering technologies are orthogonal to
other fundamental issues of the Web architecture.

I think what I've been really trying to get at with my initial question is
this. If the goal of the web architecture is to connect resources, the
current architecture does well at connecting data, but not computation, not
at scale. Perhaps a theme of the developments around HTML5 is evolving the
Web architecture to better support connecting applications. But the Web was
designed for exchanging representations of application state (basically
large-grained data), so many applications won't fit this model. Imagine
trying to run a high-frequency equity trading network atop the FedEx air
freight network, or worse the US Postal Service (or choose your local
postal service). Add to that a client-server hierarchy and now you have to
deal with bottlenecks at those endpoints. Many web-based applications are
designed around this bottleneck, and so I see us running into conceptual
and structural scaling issues.
Post by Josh Gargus
Hi Cornelius
There are lots of egregiously wrong things in the web design. Perhaps one
of the simplest is that the browser folks have lacked the perspective to see
that the browser is not like an application, but like an OS. i.e. what it
really needs to do is to take in and run foreign code (including low level
code) safely and coordinate outputs to the screen (Google is just starting
to realize this with NaCl after much prodding and beating.)
I think everyone can see the implications of these two perspectives and
what they enable or block
Some of the implications, anyway. The benefits of the OS-perspective are
clear. Once it hits its stride, there will be no (technical) barriers to
deploying the sorts of systems that we talk about here
(Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool
things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is
structured-enough to be indexable, mashupable, and so forth. It makes me
wonder: is there a risk that the searchability, etc. of the web will be
degraded by the appearance of a number of mutually-incompatible
better-than-HTML web technologies? Probably not... in the worst case,
someone who wants to be searchable can also publish in the "legacy" format.
However, can we do better than that? I guess the answer depends on which
aspect of the status quo we're trying to improve on (searchability, mashups,
etc). For search, there must be plenty of technologies that can improve on
HTML by decoupling search-metadata from presentation/interaction (such as
OpenSearch, mentioned elsewhere in this thread). Mashups seem harder...
maybe it needs to happen organically as some of the newly-possible systems
find themselves converging in some areas.
But I'm not writing because I know the answers, but rather the opposite.
What do you think?
Cheers,
Josh
Cheers,
Alan
------------------------------
*Sent:* Tue, May 31, 2011 7:16:20 AM
*Subject:* Re: [fonc] Alternative Web programming models?
Thanks Merik,
I've read/watched the OOPSLA'97 keynote before, but hadn't seen the first video.
I'm having problems with the first one (the talk at UIUC). Has anyone been
able to watch past the first hour? I get up to the point where Alex speaks
and it freezes.
I've just recently read Roy Fielding's dissertation on the architecture of
the Web. Two prominent features of web architecture are (1) the
client-server hierarchical style and (2) the layering abstraction style. My
take-away from that is how all of the abstraction layers of the web software
stack get in the way of the applications that want to use the machine. Style
1 is counter to the notion of the 'no centers' principle and is very
limiting when you consider different classes of applications that might
involve many entities with ill-defined relationships. Style 2 provides for
separation of concerns and supports integration with legacy systems, but
incurs so much overhead in terms of structural complexity and performance. I
think the stuff about web sockets and what was discussed in the Erlang
interview that Michael linked to in the 1st reply is relevant here. The web
was designed for large-grain interaction between entities, but many
application domain problems don't map to that. Some people just want pipes
or channels to exchange messages for fine-grained interactions, but the
layer cake doesn't allow it. This is where you get the feeling that the
architecture for rich web apps is no-architecture, just piling big stones
atop one another.
I think it would be very interesting for someone to take the same approach
to network-based applications as Gezira did with graphics (or the STEP
project in general) as far as assessing what's needed in a modern
Internet-scale hypermedia architecture.
Dr Alan Kay addressed the html design a number of times in his lectures
and keynotes. Here are two:
[1] Alan Kay, How Complex is "Personal Computing"? "Normal" Considered
Harmful. October 22, 2009, Computer Science department at UIUC.
http://media.cs.uiuc.edu/seminars/StateFarm-Kay-2009-10-22b.asx
(also see http://www.smalltalk.org.br/movies/ )
[2] Alan Kay, "The Computer Revolution Hasn't Happened Yet", October 7,
1997, OOPSLA'97 Keynote.
Transcript
http://blog.moryton.net/2007/12/computer-revolution-hasnt-happened-yet.html
Video
http://ftp.squeak.org/Media/AlanKay/Alan%20Kay%20at%20OOPSLA%201997%20-%20The%20computer%20revolution%20hasnt%20happened%20yet.avi
(also see http://www.smalltalk.org.br/movies/ )
Merik
Josh Gargus
2011-06-09 18:01:07 UTC
Permalink
Post by Josh Gargus
Some of the implications, anyway. The benefits of the OS-perspective are clear. Once it hits its stride, there will be no (technical) barriers to deploying the sorts of systems that we talk about here (Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is structured-enough to be indexable, mashupable, and so forth. It makes me wonder: is there a risk that the searchability, etc. of the web will be degraded by the appearance of a number of mutually-incompatible better-than-HTML web technologies? Probably not... in the worst case, someone who wants to be searchable can also publish in the "legacy" format.
Will the web be degraded by the appearance (or should I say proliferation) of mutually-incompatible, but better than HTML technologies?
First off I would ask better in what way?
Better in every way! Better languages, better render-target and interaction-model than provided by the DOM, better models for distributed computation.

It appears that I was assuming more shared context on this list than actually exists. I'll try to fine-tune my question below.
Post by Josh Gargus
I think from a user experience POV, when people say 'better' they mean richer interactivity, which implies better graphical capabilities, access to special hardware (e.g. camera, mic, accelerometer, GPS, GPUs, GPGPU, etc.), faster startup, robustness against network failure and so on.
Sure, all of these.
Post by Josh Gargus
Up until now, and maybe for some time into the future, the tradeoff of the web as computing platform versus OS-native ones has been about generality versus optimizability as enabled by resource specialization (or some such related thing). Some use cases map well to the general, others not. Only within the last 3 years have we seen mass-market deployment and use of Internet-scale software not entirely based on HTML/JS/CSS client technologies. This has been mostly in the form of native mobile apps. But these are still web apps; many of them still use the Web as connector (e.g. HTTP), but the UI is realized using OS-native frameworks. And so what we often lose is data transparency and portability. For instance, the Our Choice interactive book app on iOS looks and feels great, but it's worse than the web in that I cannot even copy text from it. It, like many of the non-ePub digital publications, is just an archive of images and audio-video content pre-baked into a handful of layouts.
Right, this loss of transparency and portability is precisely the type of downside I'm envisioning when people start deploying new "OSes" on the browser "hypervisor" (using these terms as I defined them in my previous email).
Post by Josh Gargus
It's not that non-HTML client technologies degrade the web in and of itself, take PDF for instance.
I didn't mean to imply this. We're on the same page here.
Post by Josh Gargus
Many PDF documents are linkable and searchable on the web. But this is because software to read PDF is widely deployed, which was enabled by a widespread access to the PDF standard. I think we can mitigate the opacity introduced by non-HTML client technologies if we expand the ways in which we implement links. Imagine encapsulating a reference to the computation (or its type) that would resolve a less-transparent data format.
I'm not sure I understand your last sentence, nor how you suggest we might mitigate the opacity of non-HTML client technologies. Let's say that you embed in an HTML page a view into a persistent 3d virtual environment like OpenQwaq. Can you help me understand how we might expand the ways in which we implement links to encompass the rich, persistent, dynamic content in such an environment? (this is basically my original question in a more concrete context)
Post by Josh Gargus
Probably not... in the worst case, someone who wants to be searchable can also publish in the "legacy" format.
The 'legacy' format is the point. I would say that the web isn't 'legacy',
I used quotes to indicate that I was using the term as a shorthand label, rather than as descriptive.
Post by Josh Gargus
but what makes legacy systems visible. If the Internet is a world of many diverse islands of computational and network resources, the Web architecture defines the languages for these islands to communicate.
The issues concerning web client UI rendering technologies are orthogonal to other fundamental issues of the Web architecture.
Conceptually, yes. In practice, no, because the HTML/DOM render-target is also the lingua franca that makes the Web searchable and mashupable.
Post by Josh Gargus
I think what I've been really trying to get at with my initial question is this. If the goal of the web architecture is to connect resources, the current architecture does well at connecting data, but not computation, not at scale. Perhaps a theme of the developments around HTML5 is evolving the Web architecture to better support connecting applications. But the Web was designed for exchanging representations of application state (basically large-grained data), so many applications won't fit this model. Imagine trying to run a high-frequency equity trading network atop the FedEx air freight network, or worse the US Postal Service (or choose your local postal service). Add to that a client-server hierarchy and now you have to deal with bottlenecks at those endpoints. Many web-based applications are designed around this bottleneck, and so I see us running into conceptual and structural scaling issues.
Agreed. It appears that I was re-asking a variant of your original question.

Cheers,
Josh
Cornelius Toole
2011-06-09 20:24:47 UTC
Permalink
Josh, All

Post by Josh Gargus
I'm not sure I understand your last sentence, nor how you suggest we might
mitigate the opacity of non-HTML client technologies. Let's say that you
embed in an HTML page a view into a persistent 3d virtual environment like
OpenQwaq. Can you help me understand how we might expand the ways in which
we implement links to encompass the rich, persistent, dynamic content in
such an environment? (this is basically my original question in a more
concrete context)
I have some background in scientific and information visualization. A couple
of years ago I met T.J. Jankun-Kelly, a researcher at Mississippi State
University, who did dissertation work on a formal model for the visualization
exploration process, called the visualization parameter settings or p-set:

http://scholar.google.com/scholar?q=author%3Ajankun-kelly+visualization+exploration&hl=en&btnG=Search&as_sdt=1%2C25&as_sdtp=on

For any given visualization result (typically an image), its p-set
encapsulates all the information needed to reproduce that result. Roughly, a
viz p-set is a set of references to the data, the algorithms/filters
that process that data, and the run-time parameters for a system involved in
producing a visualization result. VR is very similar to data visualization,
so suppose you could formulate a usable VR exploration model and a model for
user interaction. If you wanted to reproduce a moment or set of moment(s)
within a virtual world, you don't recapture & replay the user interaction
event streams; you record & recompute VR state based on deltas between each
p-set for some time or space instance. Any deterministic process or property
should be representable within the model. I don't expect to be able to
link to a perfect reproduction of the way simulated leaves blew in the wind
based on a (pseudo)random algorithm. It then becomes an issue of deciding the
resolution and periodicity of parameter capture. Now, to make a VR time-space
slab linkable, you need to be able to encode the VR-set and VR-set deltas
within some URI format. Perhaps it's something based on VRML/X3D or even
haptic encoding:
http://scholar.google.com/scholar?hl=en&q=haptic+compression+encoding+author%3Akammerl&btnG=Search&as_sdt=0%2C25&as_ylo=&as_vis=0

(disclaimer: I have virtually no expertise in VR or tele-haptics or anything
else for that matter.)

To address different VR engine implementations, you extend the VR-set model
based on some type of domain semantics to formulate a generalized VR-set
(say, VRML/X3D plus a transformation model for dynamism). One could then
define adaptor interfaces to map the transformation of a general VR-set
model to a transformation of a specific implementation's internal model.

To the extent that process models could be devised for a given application
domain, you could take similar approaches to make other types of computation
linkable and searchable.
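As a toy illustration of the p-set idea and delta capture (the field names and the URI fragment encoding are my own assumptions, not Jankun-Kelly's notation):

```python
# Toy sketch of a visualization parameter set (p-set): everything needed
# to reproduce one visualization result, plus deltas between successive
# p-sets so a time-space slab can be encoded compactly in a link.
# Field names and the fragment encoding are illustrative assumptions.
import json

def pset(data, filters, params):
    return {"data": data, "filters": filters, "params": params}

def delta(old, new):
    """Record only the run-time parameters that changed."""
    return {k: v for k, v in new["params"].items()
            if old["params"].get(k) != v}

def encode_link(base_uri, initial, deltas):
    """Pack the initial p-set and its deltas into a URI fragment."""
    payload = json.dumps({"pset": initial, "deltas": deltas},
                         separators=(",", ":"))
    return f"{base_uri}#pset={payload}"

p0 = pset("hdf5://sim/run42", ["isosurface"], {"iso": 0.5, "t": 0})
p1 = pset("hdf5://sim/run42", ["isosurface"], {"iso": 0.5, "t": 1})
p2 = pset("hdf5://sim/run42", ["isosurface"], {"iso": 0.7, "t": 2})

deltas = [delta(p0, p1), delta(p1, p2)]
print(deltas)           # prints [{'t': 1}, {'iso': 0.7, 't': 2}]
link = encode_link("viz://example/volume", p0, deltas)
```

Dereferencing such a link would mean replaying the deltas against the initial p-set and recomputing the visualization, rather than replaying raw interaction events.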
Post by Josh Gargus
Some of the implications, anyway. The benefits of the OS-perspective are
Post by Josh Gargus
clear. Once it hits its stride, there will be no (technical) barriers to
deploying the sorts of systems that we talk about here
(Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool
things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is
structured-enough to be indexable, mashupable, and so forth. It makes me
wonder: is there a risk that the searchability, etc. of the web will be
degraded by the appearance of a number of mutually-incompatible
better-than-HTML web technologies? Probably not... in the worst case,
someone who wants to be searchable can also publish in the "legacy" format.
Will the web be degraded by the appearance (or should I say proliferation)
of mutually-incompatible, but better than HTML technologies?
First off I would ask better in what way?
Better in every way! Better languages, better render-target and
interaction-model than provided by the DOM, better models for distributed
computation.
It appears that I was assuming more shared context on this list than
actually exists. I'll try to fine-tune my question below.
I think that from a user experience POV, when people say 'better'
they mean richer interactivity, which implies better graphical capabilities,
access to special hardware (e.g. camera, mic, accelerometer, GPS, GPUs,
GPGPU, etc.), faster startup, robustness against network failure and so on.
Sure, all of these.
Up until now, and maybe for some time into the future, the tradeoff of the
web as computing platform versus OS-native ones has been about generality
versus optimizability as enabled by resource specialization (or some such
related thing). Some use cases map well to the general, others not. Only
within the last 3 years have we seen mass-market deployment and use of
Internet-scale software not entirely based on HTML/JS/CSS client
technologies. This has been mostly in the form of native mobile apps. But
these are still web apps, many of them still use the Web as connector (e.g.
HTTP), but the UI is realized using OS-native frameworks. And so what we
often lose is data transparency and portability. For instance, the Our
Choice interactive book app on iOS looks and feels great, but it's worse than
the web in that I cannot even copy text from it. It, like many of the
non-ePub digital publications, is just an archive of images and audio-video
content pre-baked into a handful of layouts.
Right, this loss of transparency and portability is precisely the type of
downside I'm envisioning when people start deploying new "OSes" on the
browser "hypervisor" (using these terms as I defined them in my previous
email).
It's not that non-HTML client technologies degrade the web in and of
itself, take PDF for instance.
I didn't mean to imply this. We're on the same page here.
Many PDF documents are linkable and searchable on the web. But this is
because software to read PDF is widely deployed, which was enabled by
widespread access to the PDF standard. I think we can mitigate the opacity
introduced by non-HTML client technologies if we expand the ways in which we
implement links. Imagine encapsulating a reference to the computation (or
its type) that would resolve a less-transparent data format.
I'm not sure I understand your last sentence, nor how you suggest we might
mitigate the opacity of non-HTML client technologies. Let's say that you
embed in an HTML page a view into a persistent 3d virtual environment like
OpenQwaq. Can you help me understand how we might expand the ways in which
we implement links to encompass the rich, persistent, dynamic content in
such an environment? (this is basically my original question in a more
concrete context)
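For what it's worth, one concrete reading of "encapsulating a reference to the computation that would resolve a less-transparent data format" is a link that names its own resolver. A minimal sketch under that assumption; the names (`RESOLVERS`, `make_link`, `resolve`, the "pdf-text" resolver) are inventions for illustration, not part of any web standard:

```python
# Sketch: a link that carries a hint about the computation needed to
# resolve an opaque format. All names here are hypothetical.
from urllib.parse import urlencode, urlparse, parse_qs

# Registry mapping a declared resolver type to code that can open
# the referenced content and extract something searchable/linkable.
RESOLVERS = {}

def resolver(name):
    def register(fn):
        RESOLVERS[name] = fn
        return fn
    return register

@resolver("pdf-text")
def extract_pdf_text(payload):
    # Stand-in for a real PDF text extractor.
    return "text extracted from " + payload

def make_link(base, resource, resolver_name):
    # Encode which computation resolves the format, alongside the data URI.
    return base + "?" + urlencode({"src": resource, "resolver": resolver_name})

def resolve(link):
    q = parse_qs(urlparse(link).query)
    fn = RESOLVERS[q["resolver"][0]]
    return fn(q["src"][0])

link = make_link("http://example.org/view", "report.pdf", "pdf-text")
print(resolve(link))  # -> text extracted from report.pdf
```

A crawler that understood such links could index the extracted text even when the underlying format is opaque.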
Probably not... in the worst case, someone who wants to be searchable can
Post by Josh Gargus
also publish in the "legacy" format.
The 'legacy' format is the point. I would say that the web isn't 'legacy',
I used quotes to indicate that I was using the term as a shorthand label,
rather than as descriptive.
but is what makes legacy systems visible. If the Internet is a world of many
diverse islands of computational and network resources, the Web architecture
defines languages for these islands to communicate.
The issues concerning web client UI rendering technologies are orthogonal
to other fundamental issues of the Web architecture.
Conceptually, yes. In practice, no, because the HTML/DOM render-target is
also the lingua franca that makes the Web searchable and mashupable.
I think what I've been really trying to get at with my initial question is
this. If the goal of the web architecture is for connecting resources, the
current architecture does well at connecting data, but not computation, not
at scale. Perhaps a theme of the developments around HTML5 is evolving the
Web architecture to better support connecting applications. But the
Web was designed for exchanging representations of application state
(basically large-grained data), so many applications won't fit this model.
Imagine trying to run a high-frequency equity trading network atop the FedEx
air freight network, or worse the US Postal Service (or choose your local
postal service). Add to that the client-server hierarchy, and now you
have to deal with bottlenecks at those endpoints. Many web-based
applications are designed around this bottleneck, and so I see us running
into conceptual and structural scaling issues.
Agreed. It appears that I was re-asking a variant of your original question.
Cheers,
Josh
Post by Josh Gargus
Hi Cornelius
There are lots of egregiously wrong things in the web design. Perhaps one
of the simplest is that the browser folks have lacked the perspective to see
that the browser is not like an application, but like an OS. i.e. what it
really needs to do is to take in and run foreign code (including low level
code) safely and coordinate outputs to the screen (Google is just starting
to realize this with NaCl after much prodding and beating.)
I think everyone can see the implications of these two perspectives and
what they enable or block
Some of the implications, anyway. The benefits of the OS-perspective are
clear. Once it hits its stride, there will be no (technical) barriers to
deploying the sorts of systems that we talk about here
(Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool
things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is
structured-enough to be indexable, mashupable, and so forth. It makes me
wonder: is there a risk that the searchability, etc. of the web will be
degraded by the appearance of a number of mutually-incompatible
better-than-HTML web technologies? Probably not... in the worst case,
someone who wants to be searchable can also publish in the "legacy" format.
However, can we do better than that? I guess the answer depends on which
aspect of the status quo we're trying to improve on (searchability, mashups,
etc). For search, there must be plenty of technologies that can improve on
HTML by decoupling search-metadata from presentation/interaction (such as
OpenSearch, mentioned elsewhere in this thread). Mashups seem harder...
maybe it needs to happen organically as some of the newly-possible systems
find themselves converging in some areas.
But I'm not writing because I know the answers, but rather the opposite.
What do you think?
Cheers,
Josh
Cheers,
Alan
------------------------------
*Sent:* Tue, May 31, 2011 7:16:20 AM
*Subject:* Re: [fonc] Alternative Web programming models?
Thanks Merik,
I've read/watch the OOPSLA'97 keynote before, but hadn't seen the first video.
I'm having problems with the first one (the talk at UIUC). Has anyone been
able to watch past the first hour? I get up to the point where Alex speaks
and it freezes.
I've just recently read Roy Fielding's dissertation on the architecture of
the Web. Two prominent features of web architecture are the (1)
client-server hierarchical style and (2) the layering abstraction style. My
take away from that is how all of the abstraction layers of the web software
stack get in the way of the applications that want to use the machine. Style
1 is counter to the notion of the 'no centers' principle and is very
limiting when you consider different classes of applications that might
involve many entities with ill-defined relationships. Style 2 provides for
separation of concerns and supports integration with legacy systems, but
incurs so much overhead in terms of structural complexity and performance. I
think the stuff about web sockets and what was discussed in the Erlang
interview that Michael linked to in the 1st reply is relevant here. The web
was designed for large grain interaction between entities, but many
application domain problems don't map to that. Some people just want pipes
or channels to exchange messages for fine-grained interactions, but the
layer cake doesn't allow it. This is where you get the feeling that the
architecture for rich web apps is no-architecture, just piling big stones
atop one another.
I think it would be very interesting for someone to take the same approach
to network-based applications as Gezira did with graphics (or the STEP
project in general) as far as assessing what's needed in a modern
Internet-scale hypermedia architecture.
Dr Alan Kay addressed the html design a number of times in his lectures
and keynotes. Here are two:
[1] Alan Kay, How Complex is "Personal Computing"? "Normal" Considered
Harmful. October 22, 2009, Computer Science department at UIUC.
http://media.cs.uiuc.edu/seminars/StateFarm-Kay-2009-10-22b.asx
(also see http://www.smalltalk.org.br/movies/ )
[2] Alan Kay, "The Computer Revolution Hasn't Happened Yet", October 7,
1997, OOPSLA'97 Keynote.
Transcript
http://blog.moryton.net/2007/12/computer-revolution-hasnt-happened-yet.html
Video
http://ftp.squeak.org/Media/AlanKay/Alan%20Kay%20at%20OOPSLA%201997%20-%20The%20computer%20revolution%20hasnt%20happened%20yet.avi
(also see http://www.smalltalk.org.br/movies/ )
Merik
All,
A criticism by Dr. Kay, has really stuck with me. I can't remember the
specific criticism and where it's from, but I recall it being about the how
wrong the web programming model is. I imagine he was referring to how
disjointed, resource inefficient it is and how it only exposes a fraction of
the power and capability inherent in the average personal computer.
So Alan, anyone else,
what's wrong with the web programming mode and application architecture?
What programming model would work for a global-scale hypermedia system? What
prior research or commercial systems have any of these properties?
The web is about the closest we've seen to a ubiquitous deployment
platform for software, but the confluence of market forces and technical
realities endanger that ubiquity because users want full power of their
devices plus the availability of Internet connectivity.
-Cornelius
--
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
--
cornelius toole, jr. | ***@tigers.lsu.edu | mobile: 601.212.3045
Casey Ransberger
2011-06-09 21:38:49 UTC
Permalink
Post by Josh Gargus
Conceptually, yes. In practice, no, because the HTML/DOM render-target is also the lingua franca that makes the Web searchable and mashupable.
So I'd like to first point out that you're making a great point here, so I hope it isn't too odd that I'm about to try to tear it down. Devil's advocate?

While the markup language has proven quite mashable/searchable, I think it's worth noting that just about *any* structured, broadly applied _convention_ will give you that; it could have been CSV, if SGML hadn't been tapped.

One of the nicest things about markup has been free-to-cheap accessibility for blind folks... with most languages you can embed in a web page, this tends to go out the door quickly, and AJAX probably doesn't help either. If the browser was an operating system, I imagine we'd find a more traditional route to this kind of accessibility, which is about text-to-speech, and if you have the text, you should be able to search it too.

Take a moment to imagine how different the world might be today if the convention had been e.g. s-exprs. How many linguistic context shifts do you think you'd need to build a web application in that world? While I love programming languages, when I have a deadline, bouncing back and forth between five or six languages probably hurts my productivity.
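To make the thought experiment concrete, here is a minimal sketch of what a single uniform tree convention buys you, using nested Python lists as a stand-in for s-exprs (everything here is hypothetical, not any real web stack):

```python
# Sketch: if the web's convention had been s-expressions, markup would be
# ordinary data in the host language. Nested lists play the role of
# s-exprs here: [tag, attrs, *children].

def render(node):
    # Render an s-expr-like tree to HTML text.
    if isinstance(node, str):
        return node
    tag, attrs, *children = node
    attr_text = "".join(f' {k}="{v}"' for k, v in attrs.items())
    inner = "".join(render(c) for c in children)
    return f"<{tag}{attr_text}>{inner}</{tag}>"

page = ["html", {},
        ["body", {},
         ["h1", {}, "Hello"],
         ["p", {"class": "note"}, "One syntax for structure and code."]]]

print(render(page))
```

The same tree could be queried, transformed, or generated by the same language that expresses behavior, so there is no context shift between "markup" and "program".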

Not to mention that we end up compensating in industry by hyperspecializing. I wish it was easier to hire people who just knew how to code, instead of having to qualify them as "backend vs. frontend." I mean seriously. It's like specializing in putting a patty that someone else cooked on a bun, in terms of personal empowerment. Factory work, factory thinking. I'm the button pusher and your job is to assemble the two parts that I send down the line every five seconds when I push the button. Patty, bun.

I hoped Seaside might help a touch, and the backend guys all seem to really dig it (hey, now we can make web apps all by ourselves, without the burden of the wrangling boring markup-goop) but the frontend folks I've talked to (in-trench, not online) are hard pressed to have time to learn a whole new system.

Since they build the part that the stakeholders actually see, I think they end up with more in the way of random asks from business folks, which have this way of making it clear over engineering managers, etc.

There's also the problem wherein you have a whole bunch of people out there who've never seen anything else and don't have any context for why someone like me might be displeased with the current state of affairs.

It'd be nice to be able to sort out how many of these problems are cognitive+technical versus cultural/social.

The most interesting thing I've seen so far was when I was at a (now sold/defunct) company called Snapvine. We integrated telephony with social networking sites. Anyway, I spent more time looking at MySpace than I wanted to, and was stunned to discover:

Kids with MySpace pages were learning HTML and CSS just to trick out and add something unique to their profiles, and didn't seem to relate what they were doing to software at all. I wasn't sure if I was supposed to smile or frown when that realization hit me.

That's about when I started talking to people about HyperCard again, which is ultimately how I found my way to Squeak, and then this list.
Ian Piumarta
2011-06-09 21:52:26 UTC
Permalink
Post by Casey Ransberger
Take a moment to imagine how different the world might be today if the convention had been e.g. s-exprs.
http://hop.inria.fr/
Casey Ransberger
2011-06-10 00:01:16 UTC
Permalink
You know this isn't usable with the browser I have handy at the moment, but I can already see it. Really interesting, I can imagine it would look more or less like this. Thanks for putting me onto this, Ian.
Post by Ian Piumarta
Post by Casey Ransberger
Take a moment to imagine how different the world might be today if the convention had been e.g. s-exprs.
http://hop.inria.fr/
Josh Gargus
2011-06-10 07:54:46 UTC
Permalink
Post by Casey Ransberger
Post by Josh Gargus
Conceptually, yes. In practice, no, because the HTML/DOM render-target is also the lingua franca that makes the Web searchable and mashupable.
So I'd like to first point out that you're making a great point here, so I hope it isn't too odd that I'm about to try to tear it down. Devil's advocate?
Sure, why not?

You said a bunch of interesting things that circle around a point. Rather than respond to some of them individually, which would miss the point, I'm going to have to ruminate about the whole for a while.

Cheers,
Josh
John Nilsson
2011-06-14 14:58:02 UTC
Permalink
I'm not sure how OMeta would help. At a textual level it's just a
PEG-parser.

I can see how OMeta will make it easier to step away from parsing text
though. Which is precisely the point: text is a bad representation to work
in.

I had some thoughts about how to approach the issue. I was thinking that you
could represent the language in a more semantically rich form such as a RAG
stored in a graph database. Then languages would be composed by declaring
lenses between them.

As long as there is a lens to an editor DSL you could edit the language in
that editor. If you had a lens from SQL to Java (for example via jdbc) you
could embed SQL expressions in Java code. Given transitive lenses it would
also be a system supporting much reuse. A new DSL could then leverage the
semantic editing support already created for other languages.
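A minimal sketch of the lens idea, with invented names (`Lens`, `compose`, and the toy SQL/AST/editor representations are all assumptions for illustration), showing how transitivity would give reuse for free:

```python
# Sketch: a "lens" as a pair of translations between two language
# representations; composing lenses yields the transitive reuse
# described above. All names are hypothetical.
class Lens:
    def __init__(self, get, put):
        self.get = get    # A -> B
        self.put = put    # B -> A

    def compose(self, other):
        # A lens A<->B composed with a lens B<->C gives A<->C "for free".
        return Lens(lambda a: other.get(self.get(a)),
                    lambda c: self.put(other.put(c)))

# Toy representations: SQL text <-> an AST dict <-> an editor-friendly form.
sql_to_ast = Lens(lambda s: {"query": s}, lambda ast: ast["query"])
ast_to_editor = Lens(lambda ast: "EDIT[" + ast["query"] + "]",
                     lambda text: {"query": text[5:-1]})

# The SQL<->editor lens is never written by hand; it is derived.
sql_to_editor = sql_to_ast.compose(ast_to_editor)
print(sql_to_editor.get("SELECT 1"))        # editor view of the SQL
print(sql_to_editor.put("EDIT[SELECT 2]"))  # round-trip back to SQL
```

A new DSL would only need one lens to the shared AST form to inherit every editor that already has a lens from that form.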

BR,
John

Sent from my phone
Tristan Slominski
2011-06-14 15:14:59 UTC
Permalink
Post by John Nilsson
I had some thoughts about how to approach the issue. I was thinking that
you could represent the language in a more semantically rich form such as a
RAG stored in a graph database. Then languages would be composed by
declaring lenses between them.
As long as there is a lens to an editor DSL you could edit the language in
that editor. If you had a lens from SQL to Java (for example via jdbc) you
could embed SQL expressions in Java code. Given transitive lenses it would
also be a system supporting much reuse. A new DSL could then leverage the
semantic editing support already created for other languages.
BR,
John
Just for completeness, the lenses you describe here remind me of OMeta's
foreign rule invocation:

from http://www.vpri.org/pdf/tr2008003_experimenting.pdf

see 2.3.4 Foreign Rule Invocation p. 27 of paper, p. 46 of pdf

So, if you don't like the PEG roots of OMeta, perhaps it's a good reference
that already works?
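As a rough, hypothetical analogue of what foreign rule invocation does (a toy recursive-descent sketch, not OMeta itself; the grammar classes and rule names are invented):

```python
# Sketch: a host grammar hands the input stream to a rule of a
# "foreign" grammar and resumes where that rule stopped, loosely
# mirroring OMeta's foreign rule invocation.

class Digits:
    # "Foreign" grammar: parse a run of digits.
    def number(self, s, i):
        j = i
        while j < len(s) and s[j].isdigit():
            j += 1
        if j == i:
            raise SyntaxError("expected digits")
        return int(s[i:j]), j

class Sums:
    # Host grammar: sum = number ('+' number)*  -- number is foreign.
    def __init__(self, foreign):
        self.foreign = foreign

    def sum(self, s, i=0):
        total, i = self.foreign.number(s, i)   # foreign rule invocation
        while i < len(s) and s[i] == "+":
            n, i = self.foreign.number(s, i + 1)
            total += n
        return total, i

print(Sums(Digits()).sum("12+30+1"))  # -> (43, 7)
```

The key property is that the host grammar never re-declares the foreign rule; it just delegates to it and continues from the returned position.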

Cheers,

Tristan
John Nilsson
2011-06-14 15:36:54 UTC
Permalink
Thanks for the pointer. I'll have a look.

BR,
John

Sent from my phone
Post by Tristan Slominski
Post by John Nilsson
I had some thoughts about how to approach the issue. I was thinking that
you could represent the language in a more semantically rich form such as a
RAG stored in a graph database. Then languages would be composed by
declaring lenses between them.
As long as there is a lens to an editor DSL you could edit the language in
that editor. If you had a lens from SQL to Java (for example via jdbc) you
could embed SQL expressions in Java code. Given transitive lenses it would
also be a system supporting much reuse. A new DSL could then leverage the
semantic editing support already created for other languages.
BR,
John
Just for completeness, the lenses you describe here remind me of OMeta's
from http://www.vpri.org/pdf/tr2008003_experimenting.pdf
see 2.3.4 Foreign Rule Invocation p. 27 of paper, p. 46 of pdf
So, if you don't like the PEG roots of OMeta, perhaps it's a good reference
that already works?
Cheers,
Tristan
Julian Leviston
2011-06-14 15:39:00 UTC
Permalink
Yeah, I think there is a fair amount of deep digestion required to fully grok these ideas, personally. Haha that sounds disturbing. :)
John Nilsson
2011-06-14 19:07:44 UTC
Permalink
On Tue, Jun 14, 2011 at 5:14 PM, Tristan Slominski
Post by Tristan Slominski
Just for completeness, the lenses you describe here remind me of OMeta's
from http://www.vpri.org/pdf/tr2008003_experimenting.pdf
see 2.3.4 Foreign Rule Invocation p. 27 of paper, p. 46 of pdf
So, if you don't like the PEG roots of OMeta, perhaps it's a good reference
that already works?
The foreign rule invocation lets you reuse other grammars but you still
have to carefully declare how the merged grammar should behave. What I
was aiming for was a more dynamic approach in that the "merged" grammar
doesn't exist as such but is just an execution state derived from the
combined program fragments and available lenses. To continue the
Java/SQL example, let's say I had a program like this:

public int totalAmount(InvoiceNo invoiceNo) {
return SELECT SUM(amount) FROM Invoices WHERE invoiceNo = :invoiceNo;
}

To make this work I would need four things
1. The java grammar in which the method is declared
2. The SQL grammar in which the expression is declared
3. Something that can translate an SQL expression to a Java
expression, and Java type-errors to SQL type-errors (the lens).
4. A way to annotate the syntax to distinguish Java-syntax from SQL-syntax.

It is step 4 that I think makes it hard to keep a text representation.
A generic syntax to separate any given language would probably be very
convoluted. OTOH extending all languages one wants to include to
support grammar switches means that you will end up having to create
those extensions yourself (which could be hard) or live at the mercy
of the syntax-component you depend on. So my fix is to make the
separation a hidden thing, which means the program needs to be
represented in something that allows such hidden things (and I don't
think Unicode control characters is the way to go here).
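A minimal sketch of that hidden separation, with invented names: the language tag lives on the tree nodes rather than in the text, so no generic in-text switch syntax is needed (the `<0>` child-slot marker is a made-up placeholder for illustration):

```python
# Sketch: a program stored as a tree whose nodes carry a language tag.
# The editor, not the text, knows where each language begins and ends.

class Node:
    def __init__(self, lang, text, children=()):
        self.lang = lang          # which grammar this fragment belongs to
        self.text = text          # the fragment's surface text
        self.children = list(children)

def languages(node):
    # Collect every language appearing anywhere in the tree.
    langs = {node.lang}
    for c in node.children:
        langs |= languages(c)
    return langs

# A Java method whose body embeds an SQL expression; the child slot is
# marked <0> in the host text purely as a placeholder.
method = Node("java",
              "public int totalAmount(InvoiceNo invoiceNo) { return <0>; }",
              [Node("sql",
                    "SELECT SUM(amount) FROM Invoices WHERE invoiceNo = :invoiceNo")])

print(sorted(languages(method)))  # -> ['java', 'sql']
```

Serializing such a tree to plain text is exactly where step 4 bites; keeping it in a structured store sidesteps the need for an in-band grammar switch.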


Btw, if it wasn't clear from the context before, by RAG I meant a
Reference Attributed Grammar.


BR,
John
Michael FIG
2011-06-14 19:14:30 UTC
Permalink
Hi,
So my fix is to make the separation a hidden thing, which means the
program needs to be represented in something that allows such hidden
things (and I don't think Unicode control characters is the way to go
here).
Why not crib a hack from JavaDoc and make your nested syntax embedded in
comments in the host language?
--
Michael FIG <***@fig.org> //\
http://michael.fig.org/ \//
BGB
2011-06-14 20:04:20 UTC
Permalink
Post by Michael FIG
Hi,
So my fix is to make the separation a hidden thing, which means the
program needs to be represented in something that allows such hidden
things (and I don't think Unicode control characters is the way to go
here).
Why not crib a hack from JavaDoc and make your nested syntax embedded in
comments in the host language?
or, like in my languages (generally for other things):
use a keyword...

reusing prior example:

public int totalAmount(InvoiceNo invoiceNo)
{
    return SQLExpr {
        SELECT SUM(amount) FROM Invoices WHERE invoiceNo = :invoiceNo;
    }
}


granted, yes, this requires the ability to plug things into the parser
and compiler, which isn't really possible with a standard Java compiler,
but is not an inherent limitation of text IMO.

another person, elsewhere (Thomas Mertes, who endlessly flogs his
"Seed7" language) is basically really into supporting stuff like this
in-language (absent using compiler plug-ins).

in my case, usually a plugin would be registered with the parser and the
compiler, where the parser plugin would detect and respond to a certain
keyword, and the compiler plugin would respond to specific added AST
node types.


some other things in my language are done via attributes and modifier
blocks, but generally this is for things which don't otherwise change
the basic syntax.

native {
... FFI related stuff ...
}

$[ifdef(FOO)] {
... conditionally compiled stuff...
}
...

yes, yes, my attribute syntax is nasty...


embedding things in comments technically works, but is IMO a nasty way
to do extensions, in much the same way as is the "embed complex program
logic in strings" strategy.

slightly less nasty (but still not really ideal IMO), is to use special
purpose preprocessing, like how Objective-C and Objective-C++ work.

I can already do some of the above with C, via having a more "expanded"
version of a C-style preprocessor, but generally don't do this sort of
thing unless really necessary (I prefer strategies that can be done
purely within the language, rather than those which would require more
heavy-handed methods, when possible...).


or such...
John Nilsson
2011-06-14 21:31:09 UTC
Permalink
On both questions the answer is basically that Java was an example. I was
looking for a general solution. Something that would work without prior
assumptions about the languages involved.

The problem I was thinking about was how to provide an infrastructure where
in anyone could be a language designer and almost for free get all the
tooling support required for the language to gain traction. It seems to me
that the effort required to go from an itch to a fully mainstream language
is waaaaay too high. That is partly to blame for why we are still introducing
inventions from the sixties into current mainstream languages.

BR,
John

Sent from my phone
On 14 Jun 2011 22:10, "BGB" <***@gmail.com> wrote: