Although I am a VPRI employee and work on the STEPS project, the
following is not an official position of the organization nor a
definitive guide to Alan Kay's views.
That said, I hope I can help clarify things somewhat.
...one of the three key stumbling blocks to building real
"software engineering" solutions -- size.
But I am not convinced VPRI really has a solution to the remaining two
stumbling blocks: complexity and trustworthiness.
I don't think anyone on the project is interested in reducing size
without reducing complexity. We're far more interested in the latter,
and in how the former helps us gauge and tame it.
I've read about Smalltalk and the history of its development; it appears
that the earliest version I could find anything on, Smalltalk-72,
used an actor model for message passing. While metaobjects allow
implementation hiding, so do actors. Actors seem like a far better
FWIW, Alan likes (somewhat) the Erlang process model of execution and
has said that in some ways it is closer to his original idea of how
objects should behave.
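Not an official example, and Go is not Erlang, but as a rough sketch of the flavor: a goroutine that owns its state and is reachable only through messages behaves a little like an Erlang process (or a Smalltalk-72 actor). The `counter` name and its channel protocol are my own invention for illustration.

```go
package main

import "fmt"

// counter runs as an isolated "process": its state (n) can only be
// observed or changed via messages on the two channels, never shared.
func counter(inc <-chan int, get chan<- int) {
	n := 0
	for {
		select {
		case d := <-inc:
			n += d
		case get <- n:
		}
	}
}

func main() {
	inc := make(chan int)
	get := make(chan int)
	go counter(inc, get)

	inc <- 1
	inc <- 2
	fmt.Println(<-get) // prints 3
}
```

The point is not the syntax but the discipline: nothing outside the goroutine can poke at `n` directly, which is the implementation hiding that both actors and metaobjects are after.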
(Regarding your puzzling over Alan's views, though, you might want to
try emailing him directly. After you've done due diligence reading up
on the subject, of course.)
But it seems
far purer than AMOP, because a model-driven compiler will necessarily
bind things as late as necessary, in part thanks to a clockless, concurrent,
asynchronous execution model.
UNIX hit a blocking point almost immediately due
to its process model, where utility authors would tack extra functions onto
command-line programs like cat. This is where Kernighan and Pike coined the
phrase "cat -v Considered Harmful", because cat had become far more than just
a way to concatenate files. But I'd argue that what K&P miss is that the
UNIX process model, with pipes and filters as composition mechanisms on
unstructured streams of data, not only can't maximize performance,
The ability of a given programming model to "maximize performance" is
not a major draw for me. I just want "fast enough," which rarely
requires maximum performance, in my experience.
Ditto. Both of these are important, but the idea of maximizing one
attribute of a system is not so appealing to me.
because once a utility hits a performance wall, a
programmer goes into C and adds a new function to a utility like cat so that
the program does it all at once.
If only "cat" itself were designed in a more modular way, using a more
modular programming model. Then maybe adding optimizations as
necessary wouldn't be so bad. In that case, maybe the UNIX process
model and pipes aren't to blame?
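To make that concrete, here is a toy sketch (my own construction, not how cat or any shell is actually implemented) of the pipes-and-filters idea as in-process composition: each feature is its own small stage, so line numbering composes onto the stream rather than accreting as a -n flag on cat.

```go
package main

import (
	"fmt"
	"strings"
)

// Each stage is a small, single-purpose filter over a stream of lines,
// like one command in a UNIX pipeline.
type filter func([]string) []string

// grep keeps only the lines containing pattern.
func grep(pattern string) filter {
	return func(lines []string) []string {
		var out []string
		for _, l := range lines {
			if strings.Contains(l, pattern) {
				out = append(out, l)
			}
		}
		return out
	}
}

// number prefixes each line with its line number, the job cat -n
// absorbed as a flag; here it is a separate, composable stage.
func number(lines []string) []string {
	out := make([]string, len(lines))
	for i, l := range lines {
		out[i] = fmt.Sprintf("%d\t%s", i+1, l)
	}
	return out
}

// pipe composes filters left to right, like the shell's |.
func pipe(fs ...filter) filter {
	return func(lines []string) []string {
		for _, f := range fs {
			lines = f(lines)
		}
		return lines
	}
}

func main() {
	lines := []string{"alpha", "beta", "alps"}
	for _, l := range pipe(grep("al"), number)(lines) {
		fmt.Println(l)
	}
}
```

In this framing the composition mechanism (pipe) stays fixed while features arrive as new stages, which is the modularity the paragraph above is asking of cat.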
Regardless, even in an ideal system, the need to peel away layers to
get better performance might only be reduced, never fully eliminated.
So utilities naturally grow to become
monolithic. Creating the Plan 9 and Inferno operating systems seems
incredibly pointless from this perspective, and so does Google's Go
programming language (even the tools for Go are monolithic).
Interesting related work: Butler Lampson on monolithic software
components. This stuff is worth "drinking deeply" from, IMO (as
opposed to skimming).
Apart from AMOP, Alan has not really said much about what does and
doesn't interest him. He's made allusions to people writing OSes in
I think this is a red herring. I don't think Alan really believes
that writing an OS in C++ is a good idea. But you should go to the
source to understand what he meant.
So I've been looking around, asking, "Who is
competing with VPRI's FONC project?"
So the projects you mention are interesting, but they seem to be
missing a major component of the STEPS project: to actually build a
real, practical personal computing system.
What do FONC people like Alan and Ian have to
I may have disappointed you, as I am not a FONC person like Alan or Ian.
But I hope I was helpful.