Discussion:
[fonc] Reading Maxwell's Equations
John Zabroski
2010-02-26 23:15:20 UTC
Permalink
I've been following this project for a long time, and only recently joined
the mailing list.

For a long time, I did not fully understand Alan Kay's thoughts on software
architecture, despite reading many of his press interviews and watching his
public presentations. What I've come to feel is that Alan has a partially
complete vision, and some inconsistent viewpoints that likely block a
fuller vision of computer science.

For example, I had heard Alan refer to Lisp as the Maxwell's Equations of
computer science, but did not fully grasp what that really meant. When I
first played around with OMeta, I described it to a friend at MIT as
"ridiculously meta". This idea was pretty much confirmed by Ian Piumarta's
"widespread unreasonable behavior" whitepaper, which basically argues that
we can't truly do "software engineering" until we actually know what that
means, so the best approach is extreme late binding. Syntax-directed
interpretation via PEGs is an obvious way to achieve this, and it addresses
one of the three key stumbling blocks to building real
"software engineering" solutions -- size.

But I am not convinced VPRI really has a solution to the remaining two
stumbling blocks: complexity and trustworthiness.

In terms of complexity, I'll refer back to Alan Kay's 1997 OOPSLA speech,
where he talks about doghouses and cathedrals. Alan mentions Gregor
Kiczales' The Art of the Metaobject Protocol as one of the best books
written in the past 10 years on OOP work. I don't really understand this,
because AMOP is entirely about extending the block-structured, procedural
message passing approach to OO using ...
Alejandro F. Reimondo
2010-02-26 23:59:47 UTC
Permalink
John,
Where else should I look?
In my opinion, what is "missing" in language formulations is the
sustainability of the system. [*]
In the case of formula/abstract-based declarations of systems, all the
alternatives put people on the idea(L) side and not in the system itself
(the natural side).
Smalltalk is the only alternative for sustainable system development in
commercial use today.
What is missing in your interesting email, and also in interesting
projects like fonc, is consideration of the development of open systems.
These are also called complex systems in the literature, but I prefer not
to use that word; I use "open", which is unfortunately used for
propaganda but is more accurate for defining sustainable systems (open in
contents and through time).
IMHO the real value of Smalltalk is not its formulation, nor its
"contents"; it is the evidence that a sustainable system for software
development is available today (and has been for the last 30+ years), and
that people can start to surpass the limits of object orientation.

Today there is not much information about sustainable systems; the most
valuable sources are the Smalltalk systems (the evidence).
I should also say that a lot of distracting publications can be found in
the literature and on the internet, e.g. promoting confusion between
auto-booting/defining a system and the sustainability of a system...

cheers,
Ale.

[*] Being more rude, I would say "what is missing is the system" ;-)
The system exists in the future.


Gerry J
2010-02-27 00:50:22 UTC
Permalink
John, et al
I am interested in what you think are the better alternative approaches
to handling complexity and size (etc.), what criteria should apply, and
why one ranks higher than another.
For example, should a language support both actors and be model driven?
Is a mix of type inference and explicit typing with operators (like
OCaml) better than extremely late binding, and for what?
Should there be a hierarchy of syntax-compatible languages with different
restrictions, say extremely late binding at the top and fully typed,
OS- or device-driver-oriented at the bottom?
(i.e. pick the right tool in the family, from a hand-held screwdriver up
to exchangeable bits for a power tool).
Thanks for your interesting references and insights.

Regards,
Gerry Jensen
Andrey Fedorov
2010-02-27 01:35:35 UTC
Permalink
I've been reading Roy and Haridi's "Concepts, Techniques, and Models of
Computer Programming" [1], and I see it as a great practical approach to
VPRI's ideas on the principles of programming languages. Is there anyone
more authoritative here who could chime in on the intellectual connection
(if there is one)?

Cheers,
Andrey

1. http://www.amazon.com/dp/0262220695
Andrey Fedorov
2010-02-27 01:52:16 UTC
Permalink
... what criteria should apply [to complexity] and why one ranks
higher than another.

I don't have a chance to look up the source, but Dijkstra has a
wonderful observation that every implementation of a solution in CS
includes two kinds of complexity: that intrinsic to the problem, and
that introduced by the tools you're using to express it in a way
computers can interpret.

While not a well defined measure, I think this is a powerful intuition.

- Andrey

Sent from my cell. Please forgive abbreviations and typos.
Andrey Fedorov
2010-02-27 07:02:20 UTC
Permalink
Post by Gerry J
... what criteria should apply [to complexity] and why one ranks higher
than another.
Ack, please forgive the copy/paste fumble. I found the source of Dijkstra's
observation, and it seems I had added quite a bit of my own conclusions. His
was:

For us scientists it is very tempting to blame the lack of education of the
average engineer, the short-sightedness of the managers and the malice of
the entrepreneurs for this sorry state of affairs, but that won't do. You
see, while we all know that unmastered complexity is at the root of the
misery, we do not know what degree of simplicity can be obtained, nor to
what extent the intrinsic complexity of the whole design has to show up in
the interfaces. We simply do not know yet the limits of disentanglement. We
do not know yet whether intrinsic intricacy can be distinguished from
accidental intricacy. We do not know yet whether trade-offs will be
possible. We do not know yet whether we can invent for intricacy a
meaningful concept about which we can prove theorems that help. To put it
bluntly, we simply do not know yet what we should be talking about, but that
should not worry us, for it just illustrates what was meant by "intangible
goals and uncertain rewards". (EWD1304, November 19th, 2000:
http://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/EWD1304.html)
I think he's a little too doom-and-gloom here. I think it's fair to say that
a given solution to a problem has some intrinsic complexity, one dependent
on which cognitive models we decide to use when considering the problem.
Exploring the intrinsic complexity of problems and solutions is the domain
of mathematics.

Then there is the related complexity of designing a machine to derive the
solution for a given instance of the problem (or, as is more common
nowadays, designing an algorithm to run on modern hardware). The latter
complexity is "extrinsic" to the problem itself and also, in some sense,
"extrinsic" to our cognitive reasoning about it.

EWD seems most pessimistic about the meta-properties of our models:
measuring the complexities in relation to the problem and to each other. But
while modeling our cognitive models is a much younger domain, there are
certainly people asking these questions under the guise of "foundations of
mathematics" or "cognitive science of mathematics", or cognitive science in
general. Personally, I find Lakoff and Núñez's "Where Mathematics Comes
From" rather convincing in its broad strokes, even if lacking in testable
hypotheses.

Cheers,
Andrey
Wesley Smith
2010-02-27 02:01:30 UTC
Permalink
Post by Alejandro F. Reimondo
John,
Where else should I look?
In my opinion, what is "missing" in language formulations is the
sustainability of the system. [*]
In the case of formula/abstract-based declarations of systems, all the
alternatives put people on the idea(L) side and not in the system itself
(the natural side).
Smalltalk is the only alternative for sustainable system development in
commercial use today.
What is missing in your interesting email, and also in interesting
projects like fonc, is consideration of the development of open systems.
These are also called complex systems in the literature, but I prefer not
to use that word; I use "open", which is unfortunately used for
propaganda but is more accurate for defining sustainable systems (open in
contents and through time).
Do you have any references on "open" or "complex" systems research in
language design? I've tried googling, but without being familiar with
the main authors in the field I am finding it hard to sift through the
detritus.

thanks!
wes
Kurt Stephens
2010-02-28 20:19:24 UTC
Permalink
Post by Alejandro F. Reimondo
John,
Where else should I look?
In my opinion, what is "missing" in language formulations is the
sustainability of the system. [*]
In the case of formula/abstract-based declarations of systems, all the
alternatives put people on the idea(L) side and not in the system itself
(the natural side).
Smalltalk is the only alternative for sustainable system development in
commercial use today.
Smalltalk did not spawn an entire industry of specialized hardware like
Lisp. However Lisp hardware is a collector's item now. :)

There are plenty of commercial projects using Common Lisp today and from
what I can tell, there has been renewed, grassroots interest in Lisp (CL
and Scheme) over the last 5 years. Smalltalk is not the only
alternative. Both have ANSI standard specifications.

KAS
Jecel Assumpcao Jr
2010-02-28 19:44:13 UTC
Permalink
Post by Kurt Stephens
Smalltalk did not spawn an entire industry of specialized hardware like
Lisp.
There was a lot more development in that area than most people are aware
of:

http://www.merlintec.com:8080/hardware/26
Post by Kurt Stephens
However Lisp hardware is a collector's item now. :)
Only two architectures from that era have modern implementations. Being
*the* C/Unix machine for years and years didn't save the VAX, for
example. So I am for trying again to see what happens.

-- Jecel
John Zabroski
2010-03-06 02:34:28 UTC
Permalink
Post by Kurt Stephens
Smalltalk did not spawn an entire industry of specialized hardware like
Lisp. However Lisp hardware is a collector's item now. :)
There are plenty of commercial projects using Common Lisp today and from
what I can tell, there has been renewed, grassroots interest in Lisp (CL and
Scheme) over the last 5 years. Smalltalk is not the only alternative. Both
have ANSI standard specifications.
KAS
Have you read the Lisp Lore [1] book for a history of Lisp machines?

I am personally just 25 years old, and have been trying to buy a Symbolics
Genera machine on eBay for a year now, and just can't get one at a
reasonable price.

However, what I read in [1] is that the systems were inherently unstable in
terms of dynamic reconfiguration (Ale's main point about openness). I
personally believe any system should inherently support superstabilization
in its core. Superstabilization is a generalization of Dijkstra's notion of
self-stabilization.
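
For context, Dijkstra's underlying notion is "self-stabilization": a system
that, started in an arbitrary state, converges by itself to a legitimate
one (superstabilization, due to Dolev and Herman, additionally handles
topology changes while the system runs). A minimal sketch of Dijkstra's
K-state token ring in Python -- my own illustration, not something from
the book:

import random

# Dijkstra's K-state self-stabilizing token ring (EWD426).
# N machines in a ring. Machine 0 holds a privilege ("the token") when
# its state equals its left neighbour's; every other machine holds one
# when its state differs from its left neighbour's. For K > N, any
# initial state converges to exactly one circulating privilege.
N, K = 5, 7

def privileged(states):
    # Indices of the machines that currently hold a privilege.
    return [i for i in range(N)
            if (states[i] == states[i - 1]) == (i == 0)]

def step(states):
    # A fair daemon: one arbitrarily chosen privileged machine moves.
    i = random.choice(privileged(states))
    if i == 0:
        states[0] = (states[0] + 1) % K    # the exceptional machine counts
    else:
        states[i] = states[i - 1]          # the others copy their neighbour
    return states

states = [random.randrange(K) for _ in range(N)]  # arbitrary "faulty" state
for _ in range(100):
    states = step(states)
print(privileged(states))   # one privilege left: the ring has stabilized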

If necessary, I can quote pages from this book that mention the instability
of reconfiguring the system.

[1] LISP Lore: A Guide to Programming the LISP Machine, by H. Bromley and
Richard Lamson. ISBN-13: 978-0898382280
Pascal J. Bourguignon
2010-03-11 16:47:17 UTC
Permalink
Post by John Zabroski
Have you read the Lisp Lore [1] book for a history of Lisp machines?
I am personally just 25 years old, and have been trying to buy a
Symbolics Genera machine on eBay for a year now, and just can't get
one at a reasonable price.
Believe me, these are reasonable prices! (And don't forget to add the
shipping and handling costs; these are heavy machines.)

The reason they're so expensive is that so few of them were made. You
know, supply and demand...

For a time it was possible to buy Alpha hardware instead, with the
Genera VM running on the Alpha. Unfortunately, since the symbolics.com
domain has been sold to a blogger, I don't know where you could obtain
it.
Post by John Zabroski
However, what I read in [1] is that the systems were inherently
unstable in terms of dynamic reconfiguration (Ale's main point about
openness). I personally believe any system should inherently
support superstabilization in its core. Superstabilization is a
generalization of Dijkstra's notion of self-stabilization.
It was definitely possible to break it, but then you can also break
your Linux kernel and try to reboot. Or just write to /dev/kmem as
root...

However, I've been told that they had uptimes as good as those of unix
systems, if not better, and that their network services weren't as
susceptible to external attack as those on unix systems.
Post by John Zabroski
If necessary, I can quote pages from this book that mention the
instability of reconfiguring the system.
[1] LISP Lore: A Guide to Programming the LISP Machine, by H. Bromley
and Richard Lamson. ISBN-13: 978-0898382280
--
__Pascal Bourguignon__
http://www.informatimago.com
Dan Amelang
2010-02-27 08:08:13 UTC
Permalink
Hi John,

Although I am a VPRI employee and work on the STEPS project, the
following is not an official position of the organization nor a
definitive guide to Alan Kay's views.

That said, I hope I can help clarify things somewhat.
...one of the three key stumbling blocks to building real
"software engineering" solutions -- size.
But I am not convinced VPRI really has a solution to the remaining two
stumbling blocks: complexity and trustworthiness.
I don't think anyone on the project is interested in reducing size
without reducing complexity. We're far more interested in the latter,
and in how the former helps us gauge and tame the latter.
I've read about Smalltalk and the history of its development; it appears
the earliest version of Smalltalk I could read about or heard of,
Smalltalk-72, used an actor model for message passing. While metaobjects
allow implementation hiding, so do actors. Actors seem like a far better
solution
FWIW, Alan likes (somewhat) the Erlang process model of execution and
has said how in some ways it is closer to his original idea of how
objects should behave.
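
To make that concrete (my own toy sketch in Python, not STEPS code, and
only a shadow of what Erlang actually provides -- no linking, supervision,
or distribution): each "object" is a process with a private mailbox, and
the only way to interact with it is to send it a message.

import threading, queue, time

class Actor:
    # A tiny Erlang-flavoured process: private state, a mailbox, and a
    # behaviour that handles one message at a time.
    def __init__(self, behaviour, state):
        self.mailbox = queue.Queue()
        self.behaviour = behaviour
        self.state = state
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):                  # asynchronous, never blocks
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()      # blocks until a message arrives
            self.state = self.behaviour(self.state, msg)

def counter(state, msg):
    kind, reply_to = msg
    if kind == 'incr':
        return state + 1
    if kind == 'get':
        reply_to.send(('value', state))   # replies are just more messages
    return state

printer = Actor(lambda s, m: print(m) or s, None)
c = Actor(counter, 0)
c.send(('incr', None)); c.send(('incr', None)); c.send(('get', printer))
time.sleep(0.1)                           # let the daemon threads drain

No shared memory is visible to the user of an Actor; all coupling goes
through messages, which is, I take it, closer to the original idea of
objects mentioned above.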

(Regarding your puzzling over Alan's views, though, you might want to
try emailing him directly. After you've done due diligence reading up
on the subject, of course.)
But it seems
way more pure than AMOP because a model-driven compiler necessarily will
bind things as late as necessary, in part thanks to a clockless, concurrent,
asynchronous execution model.
See above.
UNIX hit a blocking point almost immediately due to its process model,
where utility authors would tack extra functions onto command-line
programs like cat. This is where Kernighan and Pike coined the phrase
"cat -v Considered Harmful", because cat had become way more than just
a way to concatenate two files. But I'd argue that what K&P miss is that
the UNIX process model, with pipes and filters as composition mechanisms
over unstructured streams of data, not only can't maximize performance,
The ability of a given programming model to "maximize performance" is
not a major draw for me. I just want "fast enough," which rarely
requires maximum performance, in my experience.
it can't
maximize modularity,
Ditto. Both of these are important, but the idea of maximizing one
attribute of a system is not so appealing to me.
because once a utility hits a performance wall, a
programmer goes into C and adds a new function to a utility like cat so that
the program does it all at once.
If only "cat" itself were designed in a more modular way, using a more
modular programming model. Then maybe adding optimizations as
necessary wouldn't be so bad. In that case, maybe the UNIX process
model and pipes aren't to blame?

Regardless, even in an ideal system, the need to peel away layers to
get better performance might only be reduced and never fully
eliminated.
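
As an aside, here is that thought experiment in miniature -- a "cat" built
from separable filters, sketched in Python with generators standing in for
processes and pipes (my own illustration; the filter names are
hypothetical, chosen to mirror cat's -n and -s flags):

import sys

def cat(paths):
    # Pure concatenation -- nothing else bolted on.
    for path in paths:
        with open(path) as f:
            for line in f:
                yield line

def number(lines):
    # The 'cat -n' feature as a separate, composable filter.
    for i, line in enumerate(lines, 1):
        yield '%6d\t%s' % (i, line)

def squeeze_blank(lines):
    # The 'cat -s' feature: collapse runs of blank lines.
    prev_blank = False
    for line in lines:
        blank = (line.strip() == '')
        if not (blank and prev_blank):
            yield line
        prev_blank = blank

# Composition replaces feature creep: pipe the filters together.
for line in number(squeeze_blank(cat(sys.argv[1:]))):
    sys.stdout.write(line)

Whether the boundary sits at the process level (pipes) or inside one
address space (generators) is then a performance decision, not a design
decision.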
So utilities naturally grow to become monolithic. Creating the Plan 9 and
Inferno operating systems seems incredibly pointless from this
perspective, and so does Google's Go programming language (even the tools
for Go are monolithic).
Interesting related work: Butler Lampson on monolithic software
components. This stuff is worth "drinking deeply" from, IMO (as
opposed to skimming):

http://research.microsoft.com/en-us/um/people/blampson/Slides/ReusableComponentsAbstract.htm

http://research.microsoft.com/en-us/um/cambridge/events/needhambook/videos/1032/head.wmv
Apart from AMOP, Alan has not really said much about what interests him and
what doesn't interest him.  He's made allusions to people writing OSes in
C++.
I think this is a red herring. I don't think that Alan really thinks
that writing an OS in C++ is a good idea. But you should go to the
source to understand what he meant.
So I've been looking around, asking, "Who is
competing with VPRI's FONC project?"
So the projects you mention are interesting, but they seem to be
missing a major component of the STEPS project: to actually build a
real, practical personal computing system.
What do FONC people like Alan and Ian have to
say?
I may have disappointed you, as I am not a FONC person like Alan or Ian.
But I hope I was helpful.

Dan
Reuben Thomas
2010-02-28 14:37:57 UTC
Permalink
Post by Dan Amelang
(Regarding your puzzling over Alan's views, though, you might want to
try emailing him directly. After you've done due diligence reading up
on the subject, of course.)
Although it would be of far greater value if such an exchange took
place in public, e.g. on this list.

VPRI seems really bad at actually getting publicity for its work: much
of the most interesting stuff, like FONC's COLA/idst, isn't even
widely available in Linux distributions, or even packaged as source
from an obvious place, which is pretty much the minimum requirement
for getting the attention of all but the determined and/or really
interested. There isn't even a "software" link on the home page of
vpri.org, and the projects directly linked to on the "Our work" page
did not originate at VPRI (Squeak Etoys & Croquet). It takes another
two clicks to get to the FONC wiki, from which the closest thing to
code is a link to the SVN repo. Sigh...

(Observation: although I quickly checked what I wrote above, it may
not be 100% accurate. It doesn't matter: almost no-one I talk to has
heard of any of this stuff.)
--
http://rrt.sc3d.org
Brian Gilman
2010-02-28 15:29:03 UTC
Permalink
Post by Reuben Thomas
Post by Dan Amelang
(Regarding your puzzling over Alan's views, though, you might want to
try emailing him directly. After you've done due diligence reading up
on the subject, of course.)
Although it would be of far greater value if such an exchange took
place in public, e.g. on this list.
I agree; the discussion was interesting, and it would be a shame if it were continued on a back-channel.
Post by Reuben Thomas
VPRI seems really bad at actually getting publicity for its work: much
of the most interesting stuff, like FONC's COLA/idst, isn't even
widely available in Linux distributions, or even packaged as source
from an obvious place, which is pretty much the minimum requirement
for getting the attention of all but the determined and/or really
interested...
Right now the barrier for anyone interested in the project is absurdly high.

After hearing about the project, I downloaded the source and attempted to compile it on OS X; it wouldn't compile. I went as far as installing an Ubuntu image in VMware, just for the sake of trying to get fonc to compile. It compiled, but then jolt2 gave a segmentation fault whenever I tried to use it. I did some research and noticed that the seg faults were probably related to the fact that I have a newer CPU in my machine, but at that point I felt it would be best to cut my losses and move on.

That having been said, I think the project is an interesting one, but I'm not sure it's really ready for tons of publicity yet.
Reuben Thomas
2010-02-28 16:50:43 UTC
Permalink
Post by Brian Gilman
That having been said, I think the project is an interesting one, but I'm
not sure it's really ready for tons of publicity yet.
Think of a software project as like Plato's model of the soul as a
charioteer with two horses, one immortal and one mortal, only without
the goal of reaching heaven. The mortal horse is the imperatives of
the real world: developers, money, users, releases and so on, while
the immortal horse represents elegance, simplicity, performance,
design perfection. A successful project usually manages to keep the
two horses in relative harmony, making something good and practical.
VPRI seems to have started off with just the immortal horse (or, if
you take the view that the project's members are gods, two immortal
horses).

In other words, I think you have it the wrong way round: it is
precisely by caring about one's public that one fixes the rough edges
so that the code is releasable and usable even when it's not finished
(and it never is). This is the whole point of "release early, release
often": stay in touch with the real world.

I think it's scandalous that a publically-funded non-secret project
does not have far stricter requirements for public engagement than are
apparent here.

I would add that the reason I care is because I have a great deal of
respect for Ian Piumarta in particular: I was blown away by his
Virtual Virtual Machine work when I went to INRIA Rocquencourt in
1999, greatly impressed by his code generation work on Smalltalk (at
least that did get out the door), and really excited when I first came
across COLA. This stuff should be out there!
--
http://rrt.sc3d.org
Andrey Fedorov
2010-02-28 17:53:14 UTC
Permalink
Considering the ambition of the project relative to its resources, I think
it's reasonable for STEPS to keep a low profile and spend less effort on
"educating" than one might like.

That said, I'd appreciate a simple "suggested reading" list for independent
study - in my case, for someone with an undergrad in CS.

*That* said, this section <http://vpri.org/html/writings.php> is wonderful.

Cheers,
Andrey
Reuben Thomas
2010-02-28 21:48:18 UTC
Permalink
Post by Andrey Fedorov
Considering the ambition of the project relative to its resources, I think
it's reasonable for STEPS to keep a low profile and spend less effort on
"educating" than one might like.
A software research project that does not aggressively push its code
out is a waste of time. Many quite possibly excellent ideas have sunk
in the past few decades for lack of exposure. ("Quite possibly"
because without that exposure it's well-nigh impossible to tell how
good they are.)
Post by Andrey Fedorov
*That* said, this section is wonderful.
Thanks very much for that pointer. Interesting reading, but code is
worth more...
--
http://rrt.sc3d.org
Dan Amelang
2010-02-28 22:44:14 UTC
Permalink
Post by Reuben Thomas
Post by Andrey Fedorov
Considering the ambition of the project relative to its resources, I think
it's reasonable for STEPS to keep a low profile and spend less effort on
"educating" than one might like.
A software research project that does not aggressively push its code
out is a waste of time.
We'll have to agree to disagree, then. My understanding of the history
of computer science does not seem to line up with this assertion,
though.

Dan
Dan Amelang
2010-02-28 22:42:10 UTC
Permalink
Post by Andrey Fedorov
Considering the ambition of the project relative to its resources, I think
it's reasonable for STEPS to keep a low profile and spend less effort on
"educating" than one might like.
Thank you :) We do have limited resources and wild ambitions. And I
won't be able to answer emails as thoroughly as I am today for that
reason.
Post by Andrey Fedorov
That said, I'd appreciate a simple "suggested reading" list for independent
study - in my case, for someone with an undergrad in CS.
A reasonable suggestion. Besides the list on the VPRI website, you
could also look at the references in the writings. Also, Alan likes to
give people references to read, so you could try him and report back
here (with his permission).

Dan
Dan Amelang
2010-02-28 22:38:57 UTC
Permalink
Post by Reuben Thomas
Think of a software project as like Plato's model of the soul as a
charioteer with two horses, one immortal and one mortal, only without
the goal of reaching heaven. The mortal horse is the imperatives of
the real world: developers, money, users, releases and so on, while
the immortal horse represents elegance, simplicity, performance,
design perfection. A successful project usually manages to keep the
two horses in relative harmony, making something good and practical.
VPRI seems to have started off with just the immortal horse
This could well be. How else should an ambitious research project start off?

Research in general involves incubating fragile ideas that might not
be ready to face what you call the "real world" (assuming earth is
more real than heaven :) of money, users, releases, etc.
Post by Reuben Thomas
In other words, I think you have it the wrong way round: it is
precisely by caring about one's public that one fixes the rough edges
One man's rough edge is another's great idea in the making :)
Post by Reuben Thomas
I think it's scandalous that a publically-funded non-secret project
does not have far stricter requirements for public engagement than are
apparent here.
Scandalous! :) Actually, in my experience, many publically (sic)
-funded projects don't have public repositories that are updated in
real-time (like many of ours are). So the scandal may be more
widespread than we initially suspected!
Post by Reuben Thomas
I would add that the reason I care is because I have a great deal of
respect for Ian Piumarta in particular: I was blown away by his
Virtual Virtual Machine work when I went to INRIA Rocquencourt in
1999, greatly impressed by his code generation work on Smalltalk (at
least that did get out the door), and really excited when I first came
across COLA. This stuff should be out there!
Ian does do great stuff. And much of his work is out there:

http://piumarta.com/software/

And there is more coming. But please consider what I said about
incubating great ideas.

Dan
Reuben Thomas
2010-02-28 22:44:06 UTC
Permalink
Post by Dan Amelang
Post by Reuben Thomas
I think it's scandalous that a publically-funded non-secret project
does not have far stricter requirements for public engagement than are
apparent here.
Scandalous!
Oh dear, I was simultaneously wearing my Victorian and my tax-payer's
hat (though I pay taxes to a government that spends rather less on
research than the US government). Still...
Post by Dan Amelang
:) Actually, in my experience, many publically (sic)
-funded projects don't have public repositories that are updated in
real-time (like many of ours are). So the scandal may be more
widespread than we initially suspected!
...this really is a scandal. Nothing to do with VPRI, though.
Post by Dan Amelang
please consider what I said about incubating great ideas.
Karl Ramberg
2010-03-02 09:43:14 UTC
Permalink
Hi

The other research that is based on the Moshi image is equally
interesting, but the Moshi image is nowhere to be downloaded, so
one can only read the code and the papers about it.

http://tinlizzie.org/updates/exploratory/updates/

Karl
Michael Haupt
2010-02-28 21:57:38 UTC
Permalink
Brian,
Post by Brian Gilman
After hearing about the project, I downloaded the source and attempted to compile it on OS X; it wouldn't compile.
any details?

Best,

Michael
--
Dr.-Ing. Michael Haupt ***@hpi.uni-potsdam.de
Software Architecture Group Phone: ++49 (0) 331-5509-542
Hasso Plattner Institute for Fax: ++49 (0) 331-5509-229
Software Systems Engineering http://www.hpi.uni-potsdam.de/swa/
Prof.-Dr.-Helmert-Str. 2-3, D-14482 Potsdam, Germany

Hasso-Plattner-Institut für Softwaresystemtechnik GmbH, Potsdam
Amtsgericht Potsdam, HRB 12184
Geschäftsführung: Prof. Dr. Christoph Meinel
Brian Gilman
2010-03-01 01:49:01 UTC
Permalink
http://vpri.org/mailman/private/fonc/2009/001145.html

cc1: warnings being treated as errors

CodeGenerator-local.o.c: In function
‘DynamicIntel32CodeGenerator__jeL_’:
CodeGenerator-local.o.c:4918: warning: value computed is not used
CodeGenerator-local.o.c: In function
‘DynamicIntel32CodeGenerator__jgeL_’:
CodeGenerator-local.o.c:4934: warning: value computed is not used
CodeGenerator-local.o.c: In function
‘DynamicIntel32CodeGenerator__jmpL_’:
CodeGenerator-local.o.c:4966: warning: value computed is not used
CodeGenerator-local.o.c: In function
‘DynamicIntel32CodeGenerator__jneL_’:
CodeGenerator-local.o.c:4982: warning: value computed is not used
make[2]: *** [CodeGenerator-local.o] Error 1
make[1]: *** [all] Error 2
make: *** [all] Error 2

If memory serves me correctly, I tried disabling warnings-as-errors, but
then ran into other issues.

This is on Snow Leopard, running Xcode 3.2 beta 1; I had the same issue
with Xcode 3.1.
I tried both repos, as well as the source tarball that's posted.

John Zabroski
2010-03-01 15:09:18 UTC
Permalink
Folks,

There are simply way too many streams of thought in this one thread. Please
separate bug reports from this discussion. Please be highly discriminating
in labeling your content so that it is relevant to the subject at hand. I
am perhaps interested in all these subjects, but French cuisine and sushi
should not be mixed.

I anticipate this will be a busy week, so I will try to print out and
highlight the major themes of discussion in this thread that seem directly
related to my original post. Then I will post again.

Take care and best regards,
Z-Bo
Dan Amelang
2010-02-28 22:16:50 UTC
Permalink
(standard disclaimer: I don't represent the official stance of VPRI or Alan Kay)
Post by Reuben Thomas
Post by Dan Amelang
(Regarding your puzzling over Alan's views, though, you might want to
try emailing him directly. After you've done due diligence reading up
on the subject, of course.)
Although it would be of far greater value if such an exchange took
place in public, e.g. on this list.
Sure. FYI, Alan may not be on this list. Of course, one can write him
and invite him to participate in a discussion here about clarifying
certain views of his.

Either way, I suggest going to the source and clarifying before
drawing conclusions.
Post by Reuben Thomas
VPRI seems really bad at actually getting publicity for its work
Could be. Obviously this is not a top goal at this point in the
project. Getting publicity isn't always good (yes, I'm familiar with
the popular phrase :). And even when you want it, it does take effort.
And it commits you to a certain extent, because people want to have a
consistent story, so backtracking is harder. But often, to make
progress, you have to change your mind. Also, in the exploratory
stages of an ambitious project, you don't want to get bogged down
handling bug reports.

Things may very well change later in the project when it might make
more sense to "productize" the research.
Post by Reuben Thomas
There isn't even a "software" link on the home page of
vpri.org,
That may be a mistake, as the link _does_ show up on the other pages.
Post by Reuben Thomas
and the projects directly linked to on the "Our work" page
did not originate at VPRI (Squeak Etoys & Croquet).
It was pretty much the same group of people, but the group has been
hosted by different organizations over the years (Disney, HP, etc.).

Dan
Reuben Thomas
2010-02-28 22:21:38 UTC
Permalink
Post by Dan Amelang
(standard disclaimer: I don't represent the official stance of VPRI or Alan Kay)
Post by Reuben Thomas
and the projects directly linked to on the "Our work" page
did not originate at VPRI (Squeak Etoys & Croquet).
It was pretty much the same group of people, but the group has been
hosted by different organizations over the years (Disney, HP, etc.).
Indeed, but all this stuff is rather old now. That it's still in the
headlines is worrying.
--
http://rrt.sc3d.org
Dan Amelang
2010-02-28 22:54:31 UTC
Permalink
Post by Reuben Thomas
Post by Dan Amelang
(standard disclaimer: I don't represent the official stance of VPRI or Alan Kay)
Post by Reuben Thomas
and the projects directly linked to on the "Our work" page
did not originate at VPRI (Squeak Etoys & Croquet).
It was pretty much the same group of people, but the group has been
hosted by different organizations over the years (Disney, HP, etc.).
Indeed, but all this stuff is rather old now. That it's still in the
headlines is worrying.
Obviously one's definition of "old" factors into the discussion.

I'm more worried about how all the supposedly "new stuff" dominates headlines :)

Dan
Andrey Fedorov
2010-02-27 08:32:37 UTC
Permalink
John,

Have you been able to find any good definitions for your use of
"trustworthiness"? The Wikipedia article about trustworthy computing [1]
makes it sound like something that originated in Microsoft's marketing
department.

Using intuitive definitions, the three metrics you mention seem to be
synonymous. Code size, when measured as "the number of thoughts it takes
to conceptualize it", is synonymous with "complexity". As for the right way
to write "trustworthy" code, the only convincing argument I've heard is
from SPJ:

"Tony Hoare has this wonderful turn of phrase in which he says your code
should obviously have no bugs rather than having no obvious bugs. So for me
I suppose beautiful code is code that is obviously right. It's kind of
limpidly transparent." -Simon Peyton Jones, from Peter Seibel's "Coders At
Work"
Just keep it as simple (and short) as possible.

Cheers,
Andrey

1. http://en.wikipedia.org/wiki/Trustworthy_Computing
Reuben Thomas
2010-02-28 14:30:35 UTC
Permalink
These three physical coupling issues
(block-structured, procedural message passing; manual memory management;
manual concurrency) are things the average programmer should never have to
touch,
I don't remember seeing block structuring ever being described as one
of the "things the average programmer should never have to touch";
could you elaborate on how it's bad, please?
--
http://rrt.sc3d.org
Kurt Stephens
2010-02-28 20:09:58 UTC
Permalink
Post by Reuben Thomas
These three physical coupling issues
(block-structured, procedural message passing; manual memory management;
manual concurrency) are things the average programmer should never have to
touch,
I don't remember seeing block structuring ever being described as one
of the "things the average programmer should never have to touch";
could you elaborate on how it's bad, please?
I agree with Reuben here. Languages that elevate block structures to
first-class status (Smalltalk and Self, and even Ruby) lead to less
coupling and greater expressiveness; they are the poor man's lambda.
The lack of such first-class constructs makes languages like Java
completely unpalatable, even with its anonymous classes. I hope that
modern environments allow more specialization of "block-structure"
semantics. What would replace blocks in Smalltalk? I would like to
subclass and decompose them.
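
To show what I mean by the poor man's lambda, a sketch in Python for
readers who don't write Smalltalk (illustrative only; the method names
mimic Smalltalk's #value, #ifTrue:ifFalse: and #do:):

class Block:
    # A block as a first-class object wrapping deferred code.
    def __init__(self, fn):
        self.fn = fn
    def value(self, *args):              # like Smalltalk's #value/#value:
        return self.fn(*args)

class TracedBlock(Block):
    # Subclassing a block: decorating its behaviour without touching it.
    def value(self, *args):
        print('entering block with', args)
        return super().value(*args)

def if_true_if_false(cond, true_block, false_block):
    # A user-level control structure, like ifTrue:ifFalse:.
    return (true_block if cond else false_block).value()

def do(collection, block):
    # Like Smalltalk's do: -- iteration is handed a block.
    for each in collection:
        block.value(each)

print(if_true_if_false(2 > 1, Block(lambda: 'yes'), Block(lambda: 'no')))
do([1, 2, 3], TracedBlock(lambda x: print(x * x)))

Once blocks are objects like this, subclassing them (TracedBlock above) is
exactly the kind of decomposition I am asking for.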

KAS
Reuben Thomas
2010-02-28 21:46:16 UTC
Permalink
Post by Kurt Stephens
Post by Reuben Thomas
These three physical coupling issues
(block-structured, procedural message passing; manual memory management;
manual concurrency) are things the average programmer should never have to
touch,
I don't remember seeing block structuring ever being described as one
of the "things the average programmer should never have to touch";
could you elaborate on how it's bad, please?
I agree with Reuben here.
I should point out that I'm not disagreeing with the assertion that
block structuring is bad; rather, just that to me it's always been
axiomatically an attribute of most non-trivial programming languages,
neither good nor bad, but in fact unexamined. Hence, I was intrigued
to see it mentioned as a bad thing, especially so prominently.
--
http://rrt.sc3d.org