On Fri, Oct 8, 2010 at 5:04 PM, Paul D. Fernhout<
Post by Paul D. Fernhout
But, the big picture issue I wanted to raise isn't about prototypes. It is
about more general issues -- like how do we have general tools that let us
look at all sorts of computing abstractions?
In biology, while it's true there are now several different types of
microscopes (optical, electron, STM, etc.) in general, we don't have a
special microscope developed for every different type of organism we want to
look at, which is the case now with, say, debuggers and debugging processes.
So, what would the general tools look like to let us debug anything? And
I'd suggest that would not be "gdb", as useful as that might be.
Computer scientists stink at studying living systems. Most computer
scientists have absolutely zero experience studying and programming living
systems. When I worked at BNL, I would have lunch with a biologist who was
named after William Tecumseh Sherman, who wrote his Ph.D. at NYU about
making nano-organisms dance. That's the level of understanding and
practical experience I am talking about.
Was that a biologist or computer scientist?
I was in a PhD program in ecology and evolution for a time (as was my wife;
that's where we met) and I can indeed agree there is a big difference in how
people
think about some things when they have an Ecology and Evolution background,
because those fields are taught so badly (or not at all) in US schools
(especially evolution). So, if you want to understand issues like
complexity, our best models connect to ecology and evolution, but since so
many CS types don't have that "soft and squishy" background, they just have
no good metaphors to use for that. But one might make similar arguments
about the value of the humanities and narrative in understanding complexity.
Math itself is valuable, but it is also limited in many ways.
"Studying Those Who Study Us: An Anthropologist in the World of
Artificial Intelligence" by Diana Forsythe
"[For a medical example] To build effective online health systems for
end-users one must combine the knowledge of a medical professional, the
skills of a programmer/developer, the perspective of a medical
anthropologist, and the wisdom of Solomon. And since Solomon is not
currently available, an insightful social scientist like Diana -- who can help
us see our current healthcare practices from a 'man-from-mars'
perspective -- can offer invaluable insights. ... Both builders and users of
[CHI] systems tend to think of them simply as technical tools or
problem-solving aids, assuming them to be value-free. However, observation
of the system-building process reveals that this is not the case: the
reasoning embedded in such systems reflects cultural values and disciplinary
assumptions, including assumptions about the everyday world of medicine."
So, one may ask, what "values" are built into so many of the tools we use?
As for making things debuggable, distributed systems have a huge need for
compression of communication, and thus you can't expect humans to debug
compressed media. You need a way to formally prove that when you uncompress
the media, you can just jump right in and debug it.
Oh, I'm not interested in proving anything in a CS sense. :-)
And debugging is only part of the issue. Part of it is also testing and
learning and probing (which could be done as long as you have some
consistent way of talking to the system under study -- consistent on your
end as a user, even if you might, as Michael implies, need specialized
backends for each system).
And, I'd add, all the stuff that surrounds that process, like making a
hypothesis and testing it, as people do all the time when debugging, could
be better supported.
There have been
advances in compiler architecture geared towards this sort of thinking, such
as the logic of bunched implications vis-a-vis Separation Logic, and even
more practical ideas towards this sort of thinking, such as Xavier Leroy's
now famous compiler architecture for proving optimizing compiler
correctness. The sorts of transformations that an application compiler like
GWT makes are pretty fancy, and if you want to look at the same GWT
application without compression today and just study what went wrong with
it, you can't.
I don't know about GWT, but Google's Closure claims to have a Firebug module
that lets you debug somewhat optimized code:
"You can use the compiler with Closure Inspector, a Firebug extension that
makes debugging the obfuscated code almost as easy as debugging the
human-readable source."
Squeak had a mode where, with saving only some variable names, it
essentially could decompress compiled code into fairly readable source.
In general, to make systems multilingual-friendly, we need various ways to
map names onto variables as a sort of "skin", the same way people put skins
on GUI MP3 players. :-)
Semantic web sorts of ideas might help there -- if you can abstract ideas to
some level, you can then present them with appropriate language terms. For
example, a "for" loop in computing might show up in whatever the
programmer's preferred language might be, including a Chinese pictograph.
So, when you looked at someone else's code, it might be translated to your
preferred language and style (maybe even the comments to some degree). That
might help programmers across the world to collaborate better and also end
some of the dominance of English, as well as end the problem of code that
becomes a mishmash of keywords in one language and variable names in another.
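As a toy sketch of that "skin" idea (the canonical names, the mapping table, and the German renderings here are all invented for illustration), one could keep code in a single canonical form and re-render identifiers through a per-reader mapping:

```python
import re

# A per-reader "skin": canonical name -> preferred display name.
# This table is an invented example, not any real standard.
SKINS = {
    "de": {"for": "fuer", "total": "summe", "item": "element"},
}

def render(source, skin):
    """Re-render canonical source text through a reader's name skin,
    replacing whole words only (so "total" never matches inside "totals")."""
    mapping = SKINS[skin]
    return re.sub(r"\w+", lambda m: mapping.get(m.group(0), m.group(0)), source)

canonical = "for item in items: total = total + item"
print(render(canonical, "de"))  # fuer element in items: summe = summe + element
```

A real system would of course work on the parse tree rather than raw text, so strings and comments stay untouched, but the "skin" metaphor is the same.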
What you need to do, in my humble opinion, is focus on
proving that mappings between representations are isomorphic and non-lossy,
even if one representation needs hidden embeddings (interpreted as no-ops by
a syntax-directed compiler) to map back to the other.
As above, it depends what you are trying to accomplish. Sure, in an ideal
world maybe. But CS is littered with decades of projects about "proving"
things and personally I have not seen that any of it has contributed one bit
to the state of the art of computing (outside maybe a very few narrow
domains like cryptography). Of course, I was never a CS major, so I can't
say I know everything there. I'm sure some people really like to do proofs
-- I enjoyed it for a time, as I always liked puzzles and things.
Most of what gets done with computers seems to me to be just people like Dan
Ingalls or Linus Torvalds forging ahead, in what Alan Kay called a motto of
"You just do it and it is done." :-) Although that motto may be a little
different in a group or at different stages of a project's evolution.
But sure, ideally, a system should be able to do what you suggest, or at
least support you well as you try to do it yourself.
Maybe "debugging" is the wrong term for what I am thinking or feeling about?
It's more like some coherent way to interact with a system where you are not
limited by what that specific system provides but can bring to bear all the
tools you have from outside.
An example of this is VirtualBox, as a virtual machine that emulates a
desktop computer. In theory, you could use any tools you have outside of
that emulator to push the mouse around, click on things, OCR the text that
it outputs, examine any running machine code and register states, monitor
network traffic it tries to put out, check what it is writing to its virtual
hard drives and so on.
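As a sketch of what that outside-the-emulator probing can look like in practice, recent versions of VirtualBox's own command-line tool already expose some of these hooks (the VM name "demo-vm" here is hypothetical):

```
# What VMs are running?
VBoxManage list runningvms

# Capture the guest's screen to a PNG (which an outside tool could OCR)
VBoxManage controlvm "demo-vm" screenshotpng shot.png

# Inject keystrokes into the guest as raw scancodes
VBoxManage controlvm "demo-vm" keyboardputscancode 1c

# Inspect guest CPU/register state, or dump the whole VM for offline study
VBoxManage debugvm "demo-vm" info cpumguest
VBoxManage debugvm "demo-vm" dumpvmcore --filename=demo.core
```

None of that requires the guest system's cooperation, which is exactly the "microscope" quality I am after.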
You might use this capacity to debug the system. You might use it to test
that the system has certain behaviors or meets some standard. You might use
it to design or redesign or alter the system. You might use this capacity to
just learn about the system. You might use this capacity to use the system
as a module at a higher level. You might use this capacity to interact with
other people to have a discussion about the system in relation to each of
the previous things such as design or bugs (especially if you could capture
and replay what the system does).
As an analogy, a Scanning Tunneling Microscope can be used both to image a
surface of atoms and to move atoms around on that surface.
"IBM Scientists First To Measure Force Required To Move Individual Atoms"
About twenty-two years ago, I worked with someone at a small company who had
the idea of an executive program that would run on top of a PC to interact
with programs for you (we might call them agents now). Perhaps that idea has
just stuck in my mind. But certainly the entire realm of automated testing
tools has moved more and more in that direction.
In general, what I am proposing is breaking out of the "self hosting" model
of tools (as nice as that is) of mainstream Smalltalks, and into what
ParcPlace was trying to do with (was it?) "Van Gogh" in the mid 1990s with
firewalls between sets of objects, and what I tried to do with PataPata
where one world with its own window system could be used to inspect and
modify another world without changing its own structure significantly.
Example code for a system that has viewers defined as "worlds" of objects
that can operate on other worlds:
Like WorldInspector.py could operate on WorldDandelionGarden.py.
(It's not as abstract and decoupled as I would have liked -- like if data
went back and forth between them in some textual form like JSON.)
There are also other fancy techniques being developed in programming
language theory (PLT) right now. Phil Wadler and Jeremy Siek's Blame
Calculus is a good illustration of how to study a living system in a
creative way (but does not provide a complete picture, akin to not knowing
you need to stain a slide before putting it under the microscope), and so is
Carl Hewitt's ActorScript and Direct Logic. These are the only efforts I am
aware of that try to provide some information on why something happened at
runtime.
Thanks for the suggestions.
When I think about them, I can see we are talking about a system at two
different levels. You are sort of talking about what we do with the
information we are getting about a system as we try to model it with some
representation and reason about it. And that's valuable, of course. Even
essential. I'm more thinking, how do we have a paradigm where we are
collecting the data and interacting with something so we could do some kind
of science on it (where when we have that data, then we could build the
representations you are referring to or try to validate them).
So, I think what you say fits into what I say, although what you are talking
about is sort of one approach of what to do with data collected -- which is
to formalize it and formally reason about it, as opposed to what I'm
thinking about which is more to discuss the data in an ad hoc or
semi-structured way. I can see that both approaches would be valuable. I
imagine, for the right "microscope" platform, people might write a whole
variety of systems useful for analyzing the data and suggesting what to look
at next.
Post by Paul D. Fernhout
I can usefully point the same microscope at a feather, a rock, a leaf, and
pond water. So, why can't I point the same debugger at a Smalltalk image, an
emulated Debian installation, and a semantic web trying to understand a
I know that may sound ludicrous, but that's my point. :-)
What the examples I gave above have in common is that there are certain
limitations on how general you can make this, just as Oliver Heaviside
suggested we discard the balsa wood ship models for engineering equations
derived from Maxwell.
Again, as above, you are thinking about building a formal model. But most
things we deal with in science don't have a complete formal model (like the
Earth's climate, or a squirrel, or even how the influenza virus interacts
with vitamin D deficiency). Scientists may have formal models about all
these (and often more than one), but they are generally not complete because
the whole value of a model is the simplification. So, they are not
"isomorphic" models -- they are just *useful* models. And people then have
discussions about the models. That is what goes on in science.
No one expects a microscope to tell you the exact quantum states of every
wave/particle in the field of view. That's true even though STM microscopes
begin to approach that as a recent innovation -- but they presumably have a
very small field of view, too, literally I'm guessing just hundreds of atoms
across.
But we've had almost 500 years or more of biology with microscopes or hand
lenses before getting to that stage, in any case. In that sense, gdb (the
GNU Debugger) is maybe like a roughly ground lens -- much better than
nothing in many cases, but still not a microscope of any great design, which
would require a bit more complexity (like, say, tracking your hypotheses
about bugs, or linking what you do to test cases, or allowing several people
to debug stuff together, or keeping traces of everything you tried and
letting you step backwards through all the code, etc.). Actually, I have
not used gdb in quite a while so I see they have added that last:
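For reference, that backwards stepping comes from gdb's process record feature (in gdb since version 7.0); a session sketch, with "myprog" as a stand-in program, looks something like:

```
$ gdb ./myprog
(gdb) break main
(gdb) run
(gdb) record            # start recording execution history
(gdb) continue          # run forward until the bug bites
(gdb) reverse-step      # step backwards one source line
(gdb) reverse-continue  # run backwards to an earlier breakpoint
```

Even so, that is one narrow feature, not the hypothesis-tracking, collaborative "microscope" I am describing.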
But can you use gdb to, say, debug medical problems? The question is absurd
right now of course (unless your medical problem is malfunctioning software
in a pacemaker), but will it still be such an absurd question in ten years,
to think that the general paradigm of debugging might be supported by more
general tools that have specialized backends, this one to talk to code, this
one to look at semantically tagged knowledge about health issues, all using
the sorts of formal processes that you imply in your suggestions?
In general, scientists (and engineers and designers) expect to be able to
use tools that give you some data about a system, and then you have to put
that data together in various ways (and more and more, the semantic web
plays a role in how communities may collectively do that).
Sure, someday people may get to the point where there is good general
agreement on rules of thumb for some design process (Maxwell's equations).
But they are still often rules of thumb in a sense that they may not reflect
what is going on in detail at the quantum level -- so they are a
simplification, as balsa wood models would be a simplification in a different
way.
Granted, some simplifications are more useful than others for specific
purposes. The equations might lead to better designs for hull shapes, but
they may not be as convincing to a potential funder of a project as a balsa
model they can play with. :-)
Post by Paul D. Fernhout
But when you think about it, there might be a lot of similarities at some
level in thinking about those four things in terms of displaying
information, moving between conceptual levels, maintaining to do lists,
doing experiments, recording results, communicating progress, looking at
dependencies, reasoning about complex topics, and so on. But right now, I
can't point one debugger at all those things, and even suggesting that we
could sounds absurd. Of course, most things that sound absurd really are
absurd, but still: "If at first, the idea is not absurd, then there is no
hope for it" (Albert Einstein).
In March, John Zabroski wrote: "I am going to take a break from the
previous thread of discussion. Instead, it seems like most people need a
tutorial in how to think BIG."
And that's what I'm trying to do here.
Thanks for the kind words. I have shared my present thoughts with you here.
You're welcome. Thanks for your comments.
But a tutorial in my eyes is more about providing people with a site where
they can go to and just be engulfed in big, powerful ideas. The FONC wiki
is certainly not that. Most of the interesting details in the project are
buried and not presented in exciting ways, or if they are, they are still
buried and require somebody to dig it up. That is a huge bug. In short,
the FONC wiki is not even a wiki. It is a chalkboard with one chalk stick,
and it is locked away in some teacher's desk.
Yes, I can see your point. Although the same is true for my own work. :-)
And that is another reason we need better tools in general. I've been
preparing a post in another context about history societies and
manufacturing where I say: "In general, history is something with a lot of
emotional value to many people (myself included) whether one wants to call
it values, sentimentality, conservativeness, art, or anything else. Humans
may have always had opposable thumbs for toolmaking, but chances are they
also have long been storytellers, and stories most often relate to
history... So, if to make is to be human, so is to think about history,
including the history of making. :-) And, I'd suggest, any *general*
infrastructure used to support that might also be useful to support cutting
edge technology development. You would think that cutting edge technology
development would support creation of cutting edge ways to organize all
that. But since so much of that cutting edge work is proprietary, and the
people doing it have already taken on so much risk in their R&D, maybe it
may be up to the people interested in history to build cutting edge systems
for looking at ecosystems of tools and processes?"
So, looking at the history of existing programming systems (like the late
Prof. Michael Mahoney at Princeton University did)
"Articles on the History of Computing"
could be one motivator to put together some of those sorts of tools (like to
provide the interfacing Michael mentioned), which then might be useful in
cutting edge endeavors that are documented as they go?
So, sure, it's a "bug". But what does that bug tell us about the
state-of-the-art in good communications? Are we still at the analogous point
in computing research where we are telling everyone to add a lot of comments
to their hand-optimized assembler code (as in, write nice stuff on the
Wiki)? Or can we conceive of some sort of semantic web about software
development where, like in Smalltalk, everything people do somehow becomes
more "self documenting" and "literate programming" with just the slightest
bit of effort and encouragement through using good tools designed to help
with that somehow?
What such tools would look like for sure, I don't know. I'm more posing the
question than saying I have the answers. However, these two tools from SRI
(which my wife helped a bit with incidentally on a joint project) give me
some hope:
"Structured Evidential Argumentation System"
"SEAS is a software tool developed for intelligence analysts that records
analytic reasoning and methods that supports collaborative analysis across
contemporary and historical situations and analysts and has broad
applicability beyond intelligence analysis... The survival of an enterprise
often rests upon its ability to make correct and timely decisions, despite
the complexity and uncertainty of the environment. Because of the difficulty
of employing formal methods in this context, decision makers typically
resort to informal methods, sacrificing structure and rigor. We have
developed a new methodology that retains the ease-of-use, familiarity, and
(some of) the free-form nature of informal methods, while benefiting from
the rigor, structure, and potential for automation characteristic of formal
methods. Our approach aims to foster thoughtful and timely analysis through
the introduction of structure, and collaboration through access to the
corporate memory of current and past analytic results. ..."
"What is Angler"
"Angler is a tool that helps intelligence/policy professionals Explore,
understand, and overcome cognitive biases, and Collaboratively expand their
joint cognitive vision Through use of divergent & convergent thinking
techniques (such as brainstorming and clustering). Humans tend to bias the
analysis of situations based on their previous experiences and background.
Angler is a tool to help analysts explore, understand, and overcome such
biases and to collaborate in expanding their joint cognitive vision. Angler
utilizes divergent and convergent techniques, such as brainstorming and
clustering or voting, to guide a diverse set of intelligence professionals
in completing a complex knowledge task. The tool helps the group through the
process of forming consensus, while preserving and quantifying differing
ways of thinking. Angler provides a Web-based collaborative environment that
allows users distributed by both time and geography to assemble in teams,
with the help of a facilitator. ... Cognitive Expansion: In the same way
that a pair of binoculars enhances the range of things the eye can see,
Angler aims to be a mental prosthetic device that enhances our problem
solving and decision analysis processes. The cognitive horizon of a person
or a group can be loosely defined as the transitive closure of possible
deductions starting from an initial set of assumptions. This horizon can be
narrowed by competing or contradicting hypotheses. In order to overcome
cognitive bias, we must recognize that we are in a situation where we are
experiencing cognitive bias. The recognition problem is handled in one of
two possible ways. The first possibility is to ignore it. SBP assumes that
we live in a complex nontractable world, and that in most situations
requiring formal decision making this assumption is correct. On the other
hand, complexity theoreticians divide the world into ordered and unordered
situations. They have guidelines to recognize in which part of the spectrum
a problem belongs. The identification of the space to which the problem
belongs will determine the type of tools used to solve it. Overcoming
cognitive bias is a process that has to do with understanding one’s own
assumptions and the problem’s decision landscape. The decision landscape is
composed of the actors and forces that can influence the decision, the
actions that the actors (not necessarily human) can take, and the certainty
(or uncertainty) we have about the outcome of those actions. One could claim
that a thorough investigation of those assumptions will yield an answer to
the problem of cognitive bias, but the problem is often more complex. We
live in an uncertain world, and a review of the assumptions cannot often be
thorough enough to consider all possibilities. ..."
Those are proprietary, but here is a related open source tool:
"Make the Case allows you to view, edit, and create cases for or against ..."
My wife also has some free software she has written that touches on some of
these themes (mostly about narrative and perspectives):
But, when you think of it, what are you doing when you "debug" a piece of
code but building up some sort of story about how it should work in some
situation, and then comparing it with a story of how it actually did work in
that situation?
So, I'm suggesting these sorts of tools might be useful when an individual
or a group of people are looking at any of:
* some bug in the Linux Kernel;
* the design of a new computer language (designs can have "bugs");
* economic policy to deal with the declining value of most human labor in an
ever-more-automated world also full of voluntary social networks and better
design (a "buggy" economic model), or
* whether to have a chocolate or vanilla wedding cake (or no cake and just
fruit salad and green smoothies) related to two programmers getting married,
where a big disagreement or just a nagging worry about doing the right thing
is a "bug" to be resolved. :-)
As I see it, those are all aspects of computing. :-) Even if they sometimes
involve the issue that: "The identification of the space to which the
problem belongs will determine the type of tools used to solve it."
Where, in software development, is there the fluid flow between these
different tools, except as something that a programmer learns by trial and
error? Where is the cognitive support? And then, from that, where is the
emerging narrative about what one is doing that can be shared and interwoven
with others? Without, as you imply, keeping some sort of Wiki up-to-date
manually when the people involved are using other tools for everything they do?
I'm not sure there is an easy answer to that, and certainly right now it
does take a lot of work to keep some good wiki site up-to-date if you want
to let people know what you are doing in a coherent way. Certainly that sort
of work is worth doing right now. I'm just asking more, are there ways that
such communications could be woven more into the tools people are using?
Maybe with some sort of "summarizing" process? Or some way of tagging what
someone is doing, or labeling parts of it, or extracting common or frequent
themes, or something like that? After all, Google does not have someone
work by hand to create a results page (or pages) for every possible search
term of interest -- there are algorithms somewhere that look at connections
and prioritize things and then present (presumably) customized results
(based perhaps on user interests or geography).
For example, why couldn't, say, Alan Kay make a list of ten important themes,
and then up-to-date web pages might be presented as a large semantic
network evolves about FONC? Would that work (even just using Google)? I
don't know. But it's sort of the beginning of an idea, anyway, just to link
it with your current interest in presenting Alan Kay's work (given as you or
someone else said the redundancy in various presentations). For a related
approach, imagine if the issue was not so much to rewrite or condense what
Alan Kay has said as to put it into some semantic web of ideas, and
then people could explore it, or one could see how often he emphasized
certain themes in presentations, and from those might flow a set of web
pages, or at least the outline of a summary that one could then tweak by
hand. I know it would be interesting to use such tools on, say, my own Sent
folder, to look for how to organize what I have written to different
people and mailing lists, to look for common themes, URLs I frequently cite,
and so on.
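As a small sketch of that last idea (the sample messages and the URL pattern are just illustrative; a real Sent folder could be read with Python's standard mailbox module), tallying frequently cited URLs is only a few lines:

```python
import re
from collections import Counter

# Naive URL matcher for illustration: anything from http(s) up to whitespace.
URL_RE = re.compile(r"https?://\S+")

def frequent_urls(messages, top=3):
    """Return the most frequently cited URLs across a pile of message bodies."""
    counts = Counter(url for body in messages for url in URL_RE.findall(body))
    return counts.most_common(top)

# Invented sample messages standing in for a real Sent folder.
sample = [
    "See http://www.vpri.org/ and http://squeak.org/ for background.",
    "I keep coming back to http://www.vpri.org/ on this point.",
]
print(frequent_urls(sample))  # [('http://www.vpri.org/', 2), ('http://squeak.org/', 1)]
```

Theme extraction would obviously take more than counting, but even this much is the kind of "summarizing" support that tools could weave in for free.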
Another related (new) open source project I just learned about that talks
about narrative and the semantic web btw is "Netention" (though it is coming
more from the AI side of things):
"The portrayal of one’s life as an open-ended, revisable story, interwoven
with people that they know, and the people that they haven’t met (yet). ...
Netention solves, in general, all resource management and planning issues
that occur amongst communities of participants. It eliminates the
balkanization of various separate online services that presently serve
relatively narrow subsets of the more general “problem”."
Anyway, I have not used that software, but I can appreciate the aspiration.
Of course, there is always a tension between automation and better tools to
do stuff by hand. So, I don't want to come across as saying everything
should be automated. I think we need some mix of better tools and better
standards that better automation can work with, all developed within some
framework of humane values, oriented towards identifying and dealing with
major problems in our society or our individual lives and local communities
-- as a co-evolutionary process like Doug Engelbart talks about. That really
is the pressing need for future computing, IMHO. And I think a lot of the
previous work by Alan Kay, Dan Ingalls, and many others, as far as focusing
on empowerment, on learning, on fun, on sharing, on communicating ideas, on
simulation, and so on, can all support the humane side of what we need. It
is indeed easy to get lost in the bits and bytes and machine instructions
and endless abstractions and lose sight of the need for tools to
collaboratively build communities that are joyful, healthy, prosperous, and
intrinsically/mutually secure. I think in that sense the "personal"
computing aspect of future computing, while empowering on one level, is also
disempowering on another that relates to communities. We need both to
empower the individual and to empower the healthy community.
Clay Shirky is not perfect (no one is), but there are many good ideas here:
"A Group Is Its Own Worst Enemy "
"Clay Shirky on institutions vs. collaboration"
Especially for that last TED link with a video, what is the meaning of the
ideas there for FONC, related to social tagging etc.? Which brings back the
semantic web (if you generalize it). But then what tools do we need to deal
with all the craziness going on in a social network, like having many sites
where each is a specialized program and has information put on it in an ad hoc
way without clear machine-readable licenses, or a situation where 98% of
email communications are spam but the other 2% are really essential but
where new communications efforts like Google Wave get discarded because they
are not email, and so on... These are the kind of crushing problems that the
future of computing has to deal with -- because they are crushing problems
now and people are having a tough time with them and probably will without
lots of innovation.
Here is a talk touching on "the future of computing" relative to a decade or
two ago: :-)
"Now, if you were here last time you’ll remember I went through the history
of everything that ever happened, starting with The Big Bang, going through
... and that was because understanding the context in which this stuff happens is
really important to understanding what we have now. Without that
understanding you’re consumed by mythology which has no truth in it, that
the history of innovation has been one thing after another where the new,
good thing always displaces the old stuff. That’s not how it works,
generally. Generally the most important new innovations are received with
contempt and horror and are accepted very slowly, if ever. ..."
The successes may be quirky things that are in the right place at the right
time. How can
FONC be that right thing in the right place real soon now? I don't know, but
I'm trying to make some suggestions based on what I see as being important
in the next decade or so. That is why I suggest that applications of FONC
link to things like the semantic web, structured arguments, multiple
perspectives, as well as being part of microscopes/telescopes to deal with
other sorts of computer systems. And that is with the acknowledgment that
those other systems, compared to FONC's spinoffs might seem stupid and
broken, like, say, stupid-and-broken PHP which has nonetheless dominated
much of the server side of the web, or, say, VirtualBox running Windows 95
and legacy software, an equally stupid and broken system by today's
standards, or all the variously encoded semantic web stuff around, which is
probably another stupid and
broken system, but it is what we sure have a lot of. :-)
The biggest challenge of the 21st century is the irony of technologies of
abundance in the hands of those thinking in terms of scarcity.