Discussion:
[fonc] goals
Steve Dekorte
2010-07-08 08:34:23 UTC
Permalink
What do the folks here see as the goals of "new computing"?
Is it to find ways to use technology to help people be more productive?
Is it more about education? Is it about maximizing MIPS/Watt? Something else entirely?

My impression (which may be wrong) is that most of what we think of in retrospect as the really great stuff (PARC, Sutherland and Doug Engelbart's group) was born from environments with goals of increasing productivity of real labor.
BGB
2010-07-08 09:52:37 UTC
Permalink
(pardon the top-post)

granted, I probably don't speak for others here, who may have differing
opinions, I just speak for myself...

I am not formally involved with the project in question here, but work on
some of my own stuff in a similar domain (VM and compiler technology).


well, that is the question sometimes...
but, anyways, being useful is the real eventual goal in anything.

otherwise, what does it amount to besides mental masturbation and people
congratulating their own ego / "intellect" (I have seen this before, mostly
in math and physics circles, like lacking any real value in what they are
doing, they praise themselves over how "intelligent" or "insightful" they
are vs everyone else... IMO, this is distasteful and serves no real
purpose...).

granted, one congratulating their own efforts isn't much better...
essentially, it is like one bowing before an idol made by their own hands.
(it is at least meaningful if one can do something and admit openly that it
is all a big pile of crap...).


now, as for useful to who?... maybe that is another part of the question.

maybe if my stuff is at least useful to myself, that is a starting point,
but even this is a difficult matter sometimes. if something can be useful to
others, that is better; and if it generally improves matters as a whole, that
is better still.


personally though, I see little need to "reinvent" the world, more just a
matter of fixing all these little problems that keep popping up, and maybe
adding a few more tools to the toolbox.

it is notable how much difference even seemingly trivial things, like having
a tool to automatically write one's C & C++ headers, ... can make to the
overall programming experience. like, before, there is this seemingly
endless annoyance of having to add prototypes for any new functions into
the headers, and a simple tool (of maybe < 500 loc) can make this problem
almost entirely disappear.
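
as a rough sketch of the general idea (a toy in Python here, just for
brevity; the names and details are made up, and any real tool in C would
handle a lot more cases):

# makehdr.py: crude prototype extractor. scan .c files for lines that look
# like top-level function definitions and print matching prototypes, which
# can then be redirected or spliced into a header.
import re, sys

DEF = re.compile(r'^([A-Za-z_][\w \t\*]*?[A-Za-z_]\w*\s*\([^;{]*\))\s*\{?\s*$')
KEYWORDS = {"if", "for", "while", "switch", "return", "do", "else", "sizeof"}

def prototypes(path):
    out = []
    for line in open(path):
        m = DEF.match(line.rstrip())
        if not m:
            continue
        name = m.group(1).split('(')[0].split()[-1].lstrip('*')
        if name not in KEYWORDS:          # skip control-flow false matches
            out.append(m.group(1) + ";")
    return out

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print("/* prototypes from %s */" % path)
        for proto in prototypes(path):
            print(proto)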


some big elaborate "solution" may really do little against these problems,
as what we have are not so much monumental problems as monuments of
pebbles. some grand scheme will not necessarily move such a mountain, but
something as simple as a shovel might just do the job.

and with some amount of shoveling, one may just end up moving a mountain...


much like the annoyance of how people say things are "impossible", when
really, they are far from impossible; they just take a little bit of
effort.

it is like doing dynamic compilation (eval and similar) in C, or
adding many reflection-type features.
there is no "magic algorithm" to make this work, but an "ugly mess of code"
pulls it all off fairly well.
the same goes for more established technologies, like GC, dynamic types, and
lexical closures.
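
for instance, a bare-bones version of the compile-and-load trick, sketched
in Python just for brevity (assuming a Unix-ish system with a 'cc' on the
path; doing the same from inside C uses the same idea, with a lot more
plumbing):

import ctypes, os, subprocess, tempfile

def c_eval(expr):
    # 'eval' for a C expression: write it out as source, compile a shared
    # library, load it, and call the generated function.
    work = tempfile.mkdtemp()
    c_file = os.path.join(work, "expr.c")
    so_file = os.path.join(work, "expr.so")
    with open(c_file, "w") as f:
        f.write("double eval_expr(void) { return (%s); }\n" % expr)
    subprocess.check_call(["cc", "-shared", "-fPIC", "-o", so_file, c_file])
    lib = ctypes.CDLL(so_file)
    lib.eval_expr.restype = ctypes.c_double
    return lib.eval_expr()

print(c_eval("3.0 * 4.0 + 1.5"))    # prints 13.5

no magic in it, just shelling out to the compiler and loading the result.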

as can be said, "just do it...".


or, at least, this is just my opinion on the matter...

others may feel free to disagree or offer alternate opinions...



Ryan Mitchley
2010-07-08 10:35:52 UTC
Permalink
I would imagine that the goals align with the task of "augmenting human
intellect", to borrow Engelbart's phrase.

The STEPS project, in particular, seems concerned with compact
representations that approach the entropies of the systems being
simulated. Computing, to me, anyway, is very closely linked to
simulation. A compact representation is (hopefully) easier to understand,
thus making it suitable for educational purposes. However, it should
also be more computationally efficient, as well as enabling greater
productivity.

I think it's also about regaining control of our technology. A modern
computer system is composed of layer upon layer of ad hoc mechanics,
short on architecture and long on details. There are few people who have
a truly good understanding of the complete system from firmware to UI,
including all the details in between, and it's not because the details
are fundamentally complex - they simply involve huge amounts of rote
learning. Something like Linux has grown somewhat organically, without
any of the robustness that organic growth might imply.

Given concerns about security and privacy - not to mention demonstrable
correctness of operation - an easily decomposable, understandable system
is hugely desirable. There should be bonus side effects, such as running
well on lightweight mobile devices.

I hope to see computing systems becoming vehicles for training
intelligent agents that assist human endeavours - by automating menial
tasks, freeing humans to concentrate on more interesting problems, while
also leveraging the abilities that are trivial for computers, but hard
for humans (large scale data processing, correlation and statistical
analysis, particle simulation, etc.). I also hope to see more of the
abilities that have traditionally been described as A.I. entering
mainstream computation (goal-seeking behaviour, probabilistic reasoning).



John Zabroski
2010-07-08 15:44:05 UTC
Permalink
I personally do not believe technology actually improves lives. Usually, it
is the opposite. Technology creates instant gratification and addiction to
it, and the primary reason we are so addicted to technology is that we have
become so empty inside.

For me, new computing is about putting yourself directly in the pathway of
the consequences of your actions. Do not invent technology if you are
unwilling to do this. Otherwise, you will ultimately influence, but never
produce, anything worth getting truly excited about. You'll just end up
making society more empty than it already is.

Cheers,
Z-Bo
Alan Kay
2010-07-08 16:10:04 UTC
Permalink
Thoreau said "We become the tools of our tools"; McLuhan: "We become what we
behold".

Both are scary, but the latter one has some hope in it, if we could make
something that by beholding it we would become better.

And technology literally means "anything that humans make" so ideas count here
also as part of the double edged sword collection.

The tradeoffs are both exquisite and excruciating with regard to the invention
and playing of musical instruments .....

Cheers,

Alan
Ryan Mitchley
2010-07-08 16:40:45 UTC
Permalink
McLuhan: "We become what we behold".
"We don't see things as they are, we see things as we are." - Anais Nin
(just to add some recursive futility to the mix)





Kim Rose
2010-07-08 16:42:23 UTC
Permalink
Not to put down Anais Nin, but this saying is written in the Talmud
and attributed to Buddha.... ("great minds"????)

Kim
Alan Kay
2010-07-08 17:05:42 UTC
Permalink
Actually, Nin got her quote from the Talmud ....
David Leibs
2010-07-08 16:43:28 UTC
Permalink
Shop Class as Soulcraft: An Inquiry Into the Value of Work by Matthew
Crawford
http://www.amazon.com/Shop-Class-Soulcraft-Inquiry-Value/dp/1594202230

Has some interesting things to say that I believe add to this
discussion. Crawford makes a case that many of us who do "modern
knowledge work" wind up being part of a process where we use less and
less real thinking and judgement over time. We become spectators and
are not fully engaged. He makes a case that working with our hands in
the trades can require much greater cognitive engagement and is hence
more fulfilling. My description just scratches the surface of what he
is trying to convey, which is much deeper.

cheers,
-David Leibs
Julian Leviston
2010-07-08 16:11:34 UTC
Permalink
One of the most useful things that technology does for me is the dissemination of knowledge. Without this dissemination, I'd find it very hard to get the information I enjoy using and consuming regularly.

Let me tell you a story to illustrate this... The other day I wanted to cook a roast lamb. I'd never done it before. This was about six months ago. I found the knowledge I needed very quickly on the internet. I think it took roughly ten minutes to find out how to cook a lamb roast by cross-correlating various recipes that I found.

I've been cooking for quite a long time, but I've also been doing other things for quite a long time and I've become rather good at a few of them. One of the first of them was a martial art I started when I was about 16 years old. Now, because I UNDERSTAND a lot of this martial art to its core, I can abstract that understanding into knowledge of how to be and how to do various things well. If you like, I can take the skill of being that I know in that martial art, and apply it to something else... say, cooking.

Without the internet and iPhone and wireless router and computers and various other pieces of technology (stove, electricity, lights for example) at my disposal, I'd find it very very hard to do the same roast. My dad is a little bit of a connoisseur of roast lamb, and he's eaten in many restaurants. He told me last week that I make the best roast lamb that he's ever eaten.

So I put it to you that knowledge and information dissemination through technology is an incredibly powerful and useful thing. I'd dare to say it improves lives. It has improved my life immeasurably.

However... (and this is a big however) your post made me quite agitated for a few minutes, and I think that this was mostly because people do not take the time to learn (at least) one skill very very well, and this is a point I think you're trying to make here... the addiction to information technology can happen at a young age. I was only allowed two or three hours of computer usage per week until I was 14 years old. This meant that I maximised my computing use in the time I had. I'm incredibly proficient at using computers, but I do so from a wealth of knowledge of being. I don't get lost in computers. I rather use them to focus and propel my efforts.

I do tend to think that perhaps this part of my story isn't all that common these days, irrespective of whether we're talking about computers or not.

But I also think that technology is inescapably important in terms of the improvements in speed and efficiency that are possible when it comes to learning things; especially when it comes to minimising frustration. I know that frustration is a useful quality every now and then, especially when it comes to growing in some skill, but I don't see why we need to hamper ourselves from achieving whatever it is we'd like to achieve.

For example, I'd really like to learn to speak and understand at least twenty human languages. My plan is to build systems that allow the accelerated learning of these languages. Perhaps I'm insane. I hope not. I know what I want to achieve is difficult, but I also can see a path towards it, so hopefully it's achievable. In the process, I also hope to build a system which allows its own transcendence in terms of facilitating all kinds of learning, because I really hope that our current methods of learning things aren't "all we have"... they totally suck.

Thoughtfully,
Julian.
BGB
2010-07-08 17:05:35 UTC
Permalink
----- Original Message -----
From: "Julian Leviston" <***@leviston.net>
To: "Fundamentals of New Computing" <***@vpri.org>
Sent: Thursday, July 08, 2010 9:11 AM
Subject: Re: [fonc] goals
<--
One of the primarily useful things that technology does for me is the
dissemination of knowledge. Without this dissemination, I'd find it very
hard to get the information I enjoy using and consuming regularly.

<snip, cooking>

So I put it to you that knowledge and information dissemination through
technology is an incredibly powerful and useful thing. I'd dare to say it
improves lives. It has improved my life immeasurably.
-->

agreed.
something as trivial as being able to look stuff up on Wikipedia, or do Web
searches via Google, does make things a lot easier. computers also make it a
lot easier to externalize a lot of one's memory, so one need not remember
everything, but can instead re-fetch information as it is needed, and then
know it again, at least until one forgets it again.

granted though, maybe there is a cost here, like there is reason one
"should" go and memorize lots of stuff, but it is easier to leave it in an
external form, and usually one is not adversely affected.


<--
However... (and this is a big however) your post made me quite agitated for
a few minutes, and I think that this was mostly because people do not take
the time to learn (at least) one skill very very well and this is a point I
think you're trying to make here... the addiction to information technology
can happen at a young age. I was only allowed two or three hours of computer
usage per week until I was 14 years old. This meant that I maximised my
computing use in the time I had to use. I'm incredibly proficient at using
computers, but I do so from a wealth of knowledge of being. I don't get lost
in computers. I rather use them to focus and propel my efforts.

I do tend to think that perhaps this part of my story isn't all that common
these days, irrespective of whether we're talking about computers or not.
-->

agreed...
I went into a slight bit of a rant on this one.

but, yeah, I have been using computers fairly intensively since I was fairly
young (10 or so...).
if I were not so much into computers, who would I have ended up being?...

maybe I would have been someone other than some computer nerd who never
really gets anything useful done (like learning how to drive or operate
independently, or getting a job, ...), or would maybe find a girlfriend or
similar (yep, no dating in around 8 years now...).

but, alas, I don't know...


maybe all would have been the same, and I would have just pursued some of my
other past interests (like math or science or similar), but then again, I
have seen the egotism and pretension of many of these people, so maybe it is
a good thing that I am a programmer.

then again, I have also noted how terribly poor my math skills are (if
around people who are actually skilled at math), so maybe in such a world I
would have been like "oh well, whatever" and gone and just done something
else.


<--
But I also think that technology is inescapably important in terms of the
improvements to accelerating quantities and improving efficiencies that are
possible when it comes to learning things; especially when it comes to
minimising frustration. I know that frustration is a useful quality every
now and then especially when it comes to growing in some skill, but I don't
see why we need hamper ourselves from achieving whatever it is we'd like to
achieve.
-->

agreed...


<--
For example, I'd really like to learn to speak and understand at least
twenty human languages. My plan is to build systems that allow the
accelerated learning of these languages. Perhaps I'm insane. I hope not. I
know what I want to achieve is difficult, but I also can see a path towards
it, so hopefully it's achievable. In the process, I also hope to build a
system which allows its own transcendence in terms of facilitating all kinds
of learning, because I really hope that our current methods of learning
things aren't "all we have"... they totally suck.
-->

dunno. never really been much into languages personally.
BGB
2010-07-08 16:40:23 UTC
Permalink
hmm, I would have thought technology would have been more about things like productivity, ... rather than the emotional status of its users.

it is like, one can measure productivity in various ways:
getting products to market;
speed at which tasks can be performed;
the net profits of a company;
...

subjective concerns are much more difficult to measure, and typically much more fluid in nature, and so would likely normally be assumed not to exist (well, unless you are in marketing or similar, at which point it is about giving society the emotion you want them to have: the desire to buy this new and shiny product...).

really, in some sense, emotions are not too much different from the products being sold: sell the product, and sell the feelings associated with the product. then make more and sell them later, and the consumers remain happy (and probably the corporate higher-ups as well, assuming profits remain good, ...).

although one can just ignore the matter of whether or not feelings actually exist, and what (if anything) they are, and for the most part one is not really any worse off.
like, if the world were without feeling, but in nearly all other ways the same (assume no dystopic governments or other such changes), what would be the likely effect?
maybe no one would notice or care, "business as usual", although it might impact interpersonal interactions...
I don't really know though.


in terms of measures of productivity, technology has likely notably improved society vs. the past.
this can be inferred partly from the fact that technology actually rose to dominance, whereas if it were not useful in this regard, it likely would not have done so.

similarly, lifespan and quality of living tend to be higher in 1st-world nations than in 3rd-world nations, ...

however, morals, ... would seem to be degraded in industrialized nations (note the widespread prevalence of promiscuity, gays, gangs and violence, ...), so this may be a cost associated with industrialization (although there is not any obvious reason why one would lead to the other). this may also be a cost of urbanization though, I don't know.


or such...
chris mills
2010-07-08 18:11:05 UTC
Permalink
Post by BGB
however, morals, ... would seem to be degraded in industrialized nations
(note the widespread prevalence of promiscuity, gays, gangs and violence,
...), so this may be a cost associated with industrialization (although
there is not any obvious reason why one would lead to the other). this may
also be a cost of urbanization though, I don't know.
Whoa there, going off topic a bit I know, but I take offence at the
suggestion that homosexuality is evidence of moral degradation. Indeed, as an
unashamed european lefty, I would argue that discrimination against gays,
minorities, etc is actually greater evidence of moral degradation, and this
is thankfully becoming less common in the industrialised world. And I would
suggest you have a word with a Rwandan, Zimbabwean or Afghan (to name but a
few) about whether gangs and violence are worse in the industrialised
nations or non-industrialised. Or if you wish to argue against urbanisation,
I am pretty sure that my own country (England) had some pretty nasty
violence before large scale urbanisation - civil wars, violence of
landowners against those living on their land, persecution of
catholics/protestants (depending on whether catholics or protestants were in
charge at the time), burning folks at the stake for unsubstantiated
accusations of witchcraft, etc.
A change in the prevailing moral code does not imply a degradation.
Alex Abate Biral
2010-07-08 18:17:01 UTC
Permalink
People, I really think this isn't the right mailing list for this kind of
discussion.
chris mills
2010-07-08 18:20:08 UTC
Permalink
Agreed. Apologies folks, it was a knee-jerk reaction to a statement I found
offensive.

ChrisM
Alex Abate Biral
2010-07-08 18:53:57 UTC
Permalink
I understand (and I hope the other people on this list do too), but I
really think that there should be a separate list for arguing about the
project's philosophy (which is as important as this list, if not more so).
BGB
2010-07-08 19:27:56 UTC
Permalink
yeah, sorry about this as well.
I wrote something without thinking too much here...


my point was neither to cause offense nor to lead to argument over the matter of values.

instead, this can be taken as "an interpretation" or as "a possibility".

another way it can be taken is that certain things have traditionally been held to be good, and others bad.
which are which, and under which conditions, has been well documented historically, although people at times have disagreed over many of the details (and as to which authorities have higher precedence in these matters, ...). granted, at this point in time there is no real analogue of ISO or ECMA for moral matters, so people are allowed their own opinions and interpretations.

historically at least, there has generally been consensus over most such matters, and I guess if someone really wants they can argue about what the majority positions are / ... at the present time.

but, yeah, arguing here will not accomplish anything, and this was not my intention here.


Alan Kay
2010-07-09 01:01:36 UTC
Permalink
Once a project gets going it usually winds up with a few more goals than those
that got it started -- partly because the individual researchers bring their own
perspectives to the mix.

But the original goals of STEPS were pretty simple and longstanding. They came
from thinking that the size of many large systems in terms of amount of code
written seemed very far out of balance -- by many orders of magnitude -- with
intuitions about their actual "mathematical content". This led to a "science of
the artificial" approach of taking the phenomena of already produced artifacts
in the general area of personal computing, and seeing if very compact "runnable
maths" could be invented and built to model very similar behaviors.


If this could be done, then some of the approaches in the new models would
represent better ways to design and program to complex behaviors -- which could
be very illuminating about systems designs and representations -- and some of
these would likely be advances in programming in general.

I think of this as "scientific exploration via coevolving mathematics and
engineering".

Cheers,

Alan
BGB
2010-07-09 02:13:07 UTC
Permalink
much agreed.


pardon my likely inelegant extension:

seemingly, nearly any problem can be abstracted, and a set of more elegant solutions can be devised to long-standing problems.

for example, to abstract over the HW, there was the CPU instruction set;
to abstract over the instruction set, there was the assembler;
to abstract over assembler, there are compilers (in this context, I will include low-level languages and many earlier HLL's);
to abstract further, there are modern HLL's and OOP;
to abstract over HLL's, there may be DSL's and specialized code-generating tools;
...

AFAICT, most modern compilers involve at least 2 major translation stages getting from the HLL to the level of ASM: the input HLL is processed, usually to some compiler specific IL or IR;
this IL or IR is further processed, and the target ASM code is emitted (sometimes an ASM post-optimizer is used as well);
...
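
as a trivial illustration of those two stages (a made-up toy in Python, not
how any real compiler is written):

import itertools

def lower(expr):
    # stage 1: flatten a nested (op, a, b) expression tree into three-address IR
    ir, temps = [], itertools.count(1)
    def walk(node):
        if not isinstance(node, tuple):
            return node                    # leaf: a variable name or a constant
        op, a, b = node
        ra, rb = walk(a), walk(b)
        dst = "t%d" % next(temps)
        ir.append((op, dst, ra, rb))
        return dst
    walk(expr)
    return ir

def emit(ir):
    # stage 2: turn the IR into (pseudo) assembly text
    mnemonic = {"+": "add", "-": "sub", "*": "mul", "/": "div"}
    return ["%s %s, %s, %s" % (mnemonic[op], dst, a, b) for op, dst, a, b in ir]

for line in emit(lower(("+", "x", ("*", "y", 3)))):
    print(line)
# mul t1, y, 3
# add t2, x, t1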


in terms of using these systems, each level is (usually) a little nicer and a little cleaner than the ones beneath it.

granted, there is a cost:
usually the total cost (in terms of system complexity) is larger with these facilities available (for example, an assembler plus an app's ASM code is larger than a binary-coded app by itself, and a compiler is often far from being a small or simple piece of code).

but, in the larger sense, it is a good tradeoff, since a small amount of HLL code is a much smaller price to pay than a mountain of ASM.

but, one need also not forget that a lot of work has gone into all these little things which can be almost trivially overlooked: the many man-years which have gone into these operating systems and compiler technologies which one can easily take for granted.


it is an interesting idea though, namely what could be the most "minimally redundant yet maximally capable" system? like if one can increase the overall abstraction of a system while at the same time reducing its overall complexity (vs what it would have been otherwise), although I have my doubts that this could be done in the general case.

making an observation from data compression:
usually there is a roughly fixed ratio between uncompressed and compressed size, and pushing beyond this ratio results in rapidly increasing costs for rapidly diminishing returns, ultimately hitting a hard limit (the Shannon limit, i.e. the entropy of the data).
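
(a trivial way to see the diminishing-returns part, using Python's zlib:
turning the effort knob up buys less and less)

import zlib

data = b"the quick brown fox jumps over the lazy dog. " * 200
print("uncompressed:", len(data))
for level in (1, 6, 9):
    # higher levels spend more work, but the output barely shrinks further
    print("level %d: %d bytes" % (level, len(zlib.compress(data, level))))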


a similar notion could apply to systems complexity:
the minimally redundant system being only some margin smaller than a typical system, and given programmers tend to be relatively self-compressing, it is possible this margin is fairly small (meaning that, very possibly, the more abstracted system will always be more internally complex than the lower-level system).

like, one will always need more logic gates to have a full CPU than to drive an ALU directly via switches and lights.


note:
none of this is meant to disparage tools research, as there is still plenty of room for research, and for seeing what works and what doesn't.

like, I believe in utility, but utility is the goal, not the imperative...

often, the thing which seems useless at the outset may turn out to be valuable, and one's "innovative" idea may turn out to be useless, so an open mind is needed I think. often, the best strategy may be to try and see, and if it works, it works, and if it doesn't, it doesn't.

and, yes, in my case I am well experienced with people casting the opinion that all of what I am doing is useless, and hell, maybe they are right, but at the moment, it isn't too much of a loss.



side note:
on this front, recently I was beating together an idea-spec for an "unusual" bytecode design (basically, the idea is that opcode-binding is symbolic, rather than using fixed pre-assigned opcode numbers, and having tables define their own structure), ... if anyone cares I could make it available (like putting it online and posting a link or whatever).
(note: the overall structure was largely inspired by IFF, WAD, and MSIL, largely the structure follows fairly straightforwardly from these).
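
to give a rough idea of the flavor, here is a mock-up in Python of what "symbolic opcode binding" could mean (the names and layout here are made up on the spot for illustration, not taken from the actual spec):

# a module carries a table of opcode *names*; the numbers only have meaning
# within that module, and the loader binds them to whatever handlers this
# particular VM happens to provide.
MODULE = {
    "opnames": ["push_const", "add", "ret"],      # the table defines itself
    "consts":  [2, 3],
    "code":    [(0, 0), (0, 1), (1,), (2,)],      # opcode index + operands
}

HANDLERS = {
    "push_const": lambda vm, i: vm["stack"].append(vm["consts"][i]),
    "add":        lambda vm: vm["stack"].append(vm["stack"].pop() + vm["stack"].pop()),
    "ret":        lambda vm: vm["stack"].pop(),
}

def run(module):
    bound = [HANDLERS[name] for name in module["opnames"]]   # resolve names at load time
    vm, result = {"stack": [], "consts": module["consts"]}, None
    for op, *args in module["code"]:
        result = bound[op](vm, *args)
    return result

print(run(MODULE))    # prints 5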

granted, as I haven't tried implementing or using the thing yet, I don't really know if the design "works" (really, it could just be destined for the dustbin of pointless ideas, FWIW...).



or such...


Steve Dekorte
2010-07-09 04:21:00 UTC
Permalink
Thanks for the response. That kind of sounds like the goal is fewer lines of code (and presumably less labor) per unit of function (increasing productivity). Is that correct?
Colin Putney
2010-07-09 07:56:49 UTC
Permalink
Post by Steve Dekorte
Thanks for the response. That kind of sounds like the goal is fewer lines of code (and presumably less labor) per unit of function (increasing productivity). Is that correct?
Well, I don't speak for Alan, but I have to think it's a bit more than that. The biggest problem we have in computing is that we're terrible at it. Just the other day I remarked to a colleague that the system we were working on had about 10x too much code for what it did, and he agreed. So yes, less code for the same functionality might (might!) be higher productivity, but it's also a rough measure of quality. Writers seek economy of words, athletes seek economy of motion, we seek economy of code.

Colin
Steve Dekorte
2010-07-09 09:03:54 UTC
Permalink
I agree that labor, including maintenance (which may include understandability, extendability, etc.), is a better end measure, but that code length (or maybe keystrokes, etc.) can be a reasonable starting point for measuring it.

Btw, this discussion may sound trivial, but consider the question: "Is computing technique X better than Y?" We need to know by what measure it's "better" for the question to be meaningful. And without a measure that matches one's goals, decisions made on its basis may be counterproductive.
BGB
2010-07-09 16:59:05 UTC
Permalink
yeah.

I guess a lot depends on other factors though.

for example, is a lot of this added code there because:
the programmer has little idea what he was doing, and so just wildly
copy-pasted everywhere and made a big mess?...
or because it has lots of code which is actually beneficial, such as error
checking and building abstractions?

similarly, is a piece of code smaller because:
the programmer is good at getting work done in less code?
or because the code is essentially a tangled mess of hacks?


these may matter more in many cases than the total code size.

like, if the app is only 10 or 15 kloc, but most of it is composed of
"if(...) goto ..." and ugly use of bit-twiddling and pointer operations
(say, you see things like "((int *)(((void **)p)[i+5]))[16]+*(int *)(*(void
**)q)" pretty much every-other line), and the programmer couldn't be
bothered to use more than 3 or 4 characters for most of their function
names, this is not necessarily a good thing.

a similar app which is maybe 20-25 kloc, but consists of code which may be
understood (and doesn't blow up in one's face as soon as it is fed bad
data), may be a little better.


but, then again, sometimes an app may benefit from being smaller, for the
simple reason that ordinary humans can work on it (without having to climb a
steep learning curve and become "experts" on the thing before being able
to do much of anything).

for example, a 350 kloc app may be better, even if it does a little less,
than a 5 or 10 Mloc app, if it means that the programmers can more easily
understand the thing in its entirety.


feature bloat is also a problem. features may be good, but too many features
may not be a benefit to one's purpose, and sometimes a piece of code which
is smaller and only does a few things well may be better than a larger
piece of code trying to do everything adequately... (sometimes, we don't
need some unified "one true implementation").


but, a lot comes down to abstraction as well:
a much larger codebase with much better abstractions may still be much
easier to work on than a smaller codebase with few or poorly-chosen
abstractions (for example, if the codebase is relatively well organized, and
its code relatively self-explanatory, then one can more easily get things
done without having to keep as much in mind, but if changing something in
one place means having to fix something somewhere else which breaks as a
result, this is a problem...).


but, there are many factors involved.

or such...


David Leibs
2010-07-09 17:33:04 UTC
Permalink
Post by BGB
the programmer has little idea what he was doing, and so just wildly
copy-pasted everywhere and made a big mess?...
has lots of code which is actually beneficial, such as doing error
checking and building abstractions.
the programmer is good at getting work done in less code?
or because the code is essentially a tangled mess of hacks?
It isn't that the programmer has little idea of what he is doing.
Things just take time to be transformed into an optimal form.
There is a good example from the history of math and physics that
illustrates the point. "Maxwell's equations" originally applied to a set
of eight equations published by Maxwell in 1865. After that the
number of equations escalated to twenty equations in twenty unknowns
as people struggled with the implications. Maxwell wrestled with
recasting the equations in quaternion form. Time passed. It was all
very ugly. Finally, in 1884, Oliver Heaviside recast Maxwell's math
from the then cumbersome form to its modern vector calculus notation,
thereby reducing the twenty equations in twenty unknowns down to the
four differential equations in two unknowns that we all love and call
"Maxwell's equations". Heaviside invented the modern notation, giving us
the tools to make sense of something very profound and useful. Good
work on hard things takes time plus a lot of good people that care.
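
(For reference, Heaviside's four-equation vector form -- the one that fits
on a t-shirt -- written here in LaTeX, in SI units, with charge density
\rho and current density \mathbf{J}:)

\nabla \cdot  \mathbf{E} = \rho / \varepsilon_0
\nabla \cdot  \mathbf{B} = 0
\nabla \times \mathbf{E} = - \, \partial \mathbf{B} / \partial t
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t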

cheers,
-David Leibs
Alan Kay
2010-07-09 17:44:48 UTC
Permalink
One of my all time favorite metaphors and examples for part of what we are
trying to do in this "T-shirt programming" project.

Cheers,

Alan
John Zabroski
2010-07-09 21:48:27 UTC
Permalink
Just wondering... when did that metaphor get started at VPRI? The first
time I had heard you reference the t-shirt metaphor was October 2009 [1]. I
remember joking about it on Lambda the Ultimate in April of 2009 [2], and my
joke was actually based on a presentation given by the head operations
physicist of the Relativistic Heavy Ion Collider project at BNL, Todd
Satogata [3]. He gave a talk about how you can buy a t-shirt with Maxwell's
Equations on it and it pretty much describes the whole universe. But he
said his goal was to unify the strong force and weak force and come up with
a new t-shirt, and make millions of dollars and become famous. When I heard
that you were interested in Maxwell's Equations for Computer Science, I
immediately made the connection to Todd Satogata's BNL speech (I've heard
him give roughly the same pitch in other speeches many times), so that is
why I made the joke about printing Maxwell's Equations for Computer Science
on a t-shirt.

[1] http://media.cs.uiuc.edu/seminars/StateFarm-Kay-2009-10-22a.asx
[2] http://lambda-the-ultimate.org/node/3265#comment-48129
[3] http://toddsatogata.net/
Alan Kay
2010-07-09 23:24:32 UTC
Permalink
The metaphor happened to me in grad school in the 60s when I finally took the
trouble to trace McCarthy's Lisp in itself and realized just how powerful and
comprehensive he had made it in such a compact way. It was not so much the
Turing aspect but the "slope" of the power "from nothing". I said to myself
"this is the Maxwell's Equations of computing". I think I recounted this in the
"Early History of Smalltalk".

And as a science and math major as an undergraduate, I knew the story that David
told about how Heaviside had collapsed the difficult to understand partial
differential equations form into the vectorized-operatorized t-shirt size we
know. As a post-doc I had some fun working in McCarthy's lab at Stanford and a
hobby was finding much more compact ways to do Lisp (it can really be shrunk
down from John's version) .... and it really amounts to being able to say what it
means to send a message from one context to another ....
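
A toy rendering of that kernel -- here in Python rather than Lisp's own
notation, and nowhere near the most compact form -- just to show the kind
of slope involved: a handful of cases gives you variables, conditionals,
functions, and application, and the rest can be built on top.

def evaluate(x, env):
    # a toy 'eval' for a tiny Lisp: variables, constants, quote, if, lambda, apply
    if isinstance(x, str):                       # a variable reference
        return env[x]
    if not isinstance(x, list):                  # a constant (e.g. a number)
        return x
    op, *rest = x
    if op == "quote":
        return rest[0]
    if op == "if":
        test, then, alt = rest
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "lambda":
        params, body = rest
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    proc = evaluate(op, env)                     # otherwise: an application
    return proc(*[evaluate(arg, env) for arg in rest])

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
print(evaluate([["lambda", ["n"], ["*", "n", ["+", "n", 1]]], 6], env))   # 42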

Cheers,

Alan
John Zabroski
2010-07-09 23:39:02 UTC
Permalink
I know that part. I meant extending the metaphor to the joke about printing
t-shirts. David mentioned collapsing this, and I mentioned how I brought up
the joke on LtU about doing this for computer science. I've read your
Maxwell's Equations for Computer Science before, such as the NSF 2007
report, but never read the t-shirt joke before the talk I linked below. I
first heard this joke from Todd Satogata, who wants to keep "Maxwell's
Equations" as simple as a cool t-shirt, but increase its expressive power by
unifying the strong and weak forces -- it just shows even physicists don't
give up shrinking the size of the core theory that defines their field.

Have a good weekend!
Z-Bo
Post by Alan Kay
The metaphor happened to me in grad school in the 60s when I finally took
the trouble to trace McCarthy's Lisp in itself and realized just how
powerful and comprehensive he had made it in such a compact way. It was not
so much the Turing aspect but the "slope" of the power "from nothing". I
said to myself "this is the Maxwell's Equations of computing". I think I
recounted this in the "Early History of Smalltalk".
And as a science and math major as an undergraduate, I knew the story that
David told about how Heaviside had collapsed the difficult-to-understand
partial differential equation form into the vectorized-operatorized t-shirt
size we know. As a post-doc I had some fun working in McCarthy's lab at
Stanford and a hobby was finding much more compact ways to do Lisp (it can
really be shrunk down from John's version) .... and amounts really to being
able to say what it means to send a message from one context to another ....
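
(For illustration of that "slope from nothing": below is a toy evaluator in the spirit of McCarthy's half-page Lisp -- written here in Python rather than Lisp, covering only quote, if, lambda, and application, with invented names. It is a sketch of the idea, not McCarthy's definition and not any VPRI code.)

# Toy evaluator: lists are applications/special forms, strings are symbols,
# everything else is a literal. Environments are plain dictionaries.
def evaluate(x, env):
    if isinstance(x, str):                      # symbol -> look it up
        return env[x]
    if not isinstance(x, list):                 # number or other literal
        return x
    op, *args = x
    if op == "quote":                           # (quote e) -> e, unevaluated
        return args[0]
    if op == "if":                              # (if test conseq alt)
        test, conseq, alt = args
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if op == "lambda":                          # (lambda (params) body)
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    f = evaluate(op, env)                       # application
    return f(*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b, "<": lambda a, b: a < b}
print(evaluate(["if", ["<", 1, 2],
                [["lambda", ["n"], ["+", "n", 40]], 2],
                0], env))                       # -> 42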
Cheers,
Alan
------------------------------
*Sent:* Fri, July 9, 2010 2:48:27 PM
*Subject:* Re: [fonc] goals
Just wondering... when did that metaphor get started at VPRI? The first
time I had heard you reference the t-shirt metaphor was October 2009 [1]. I
remember joking about it on Lambda the Ultimate in April of 2009 [2], and my
joke was actually based on a presentation given by the head operations
physicist of the Relativistic Heavy Ion Collider project at BNL, Todd
Satogata [3]. He gave a talk about how you can buy a t-shirt with Maxwell's
Equations on it and it pretty much describes the whole universe. But he
said his goal was to unify the strong force and weak force and come up with
a new t-shirt, and make millions of dollars and become famous. When I heard
that you were interested in Maxwell's Equations for Computer Science, I
immediately made the connection to Todd Satogata's BNL speech (I've heard
him give roughly the same pitch in other speeches many times), so that is
why I made the joke about printing Maxwell's Equations for Computer Science
on a t-shirt.
[1] http://media.cs.uiuc.edu/seminars/StateFarm-Kay-2009-10-22a.asx
[2] http://lambda-the-ultimate.org/node/3265#comment-48129
[3] http://toddsatogata.net/
Post by Alan Kay
One of my all time favorite metaphors and examples for part of what we are
trying to do in this "T-shirt programming" project.
Cheers,
Alan
------------------------------
*Sent:* Fri, July 9, 2010 10:33:04 AM
*Subject:* Re: [fonc] goals
the programmer has little idea what he was doing, and so just wildly
copy-pasted everywhere and made a big mess?...
has lots of code which is actually beneficial, such as doing error
checking and building abstractions.
the programmer is good at getting work done in less code?
or because the code is essentially a tangled mess of hacks?
It isn't that the programmer has little idea of what he is doing. Things
just take time to be transformed into an optimal form.
There is a good example from the history from math, and physics that
illustrates the point. Maxwells equations originally applied to a set of
eight equations published by Maxwell in 1865. After that the number of
equations escalated to twenty equations in twenty unknowns as people
struggled with the implications. Maxwell wrestled with recasting the
equations in quaternion form. Time passed. It was all very ugly. Finally
In 1884 Oliver Heaviside recast Maxwell's math from the then cumbersome form
to its modern vector <http://en.wikipedia.org/wiki/Vector_%28geometric%29> calculus
notation, thereby reducing the twenty equations in twenty unknowns down to
the four differential equations in two unknowns that we all love and call
"Maxwells equations". Heaviside invented the modern notation giving us the
tools to make sense of something very profound and useful. Good work on
hard things takes time plus a lot of good people that care.
cheers,
-David Leibs
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Hans-Martin Mosner
2010-07-10 07:25:04 UTC
Permalink
... a hobby was finding much more compact ways to do Lisp (it can
really be shrunk down from John's version) .... and amounts really to
being able to say what it means to send a message from one context to
another ....
For quite some time I've been pondering the duality of the
class/instance and method/context relations. In some sense, a context is
an object created by instantiating its method, much like a normal object
is instantiated from its class. Simula's classes and methods looked very
much like this. I think that this might be a direction in which
simplification can be found. In some spare time I've tried to realize
this as a language "Nest" which looks very much like Newspeak (implicit
receivers, classes as modules) but emphasizes being able to
arbitrarily nest classes and methods. It's not really fleshed out yet (I
have a compiler and emulator in Squeak+OMeta) and I already see some
shortcomings. I have to pick it up and work on it again to get it into a
state where people could look at it...
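
(For illustration of the duality described above -- a minimal Python sketch with invented names, not Nest, Newspeak, or Squeak code: a "method" plays the role of a class, and each invocation instantiates it into a context object.)

# Sketch: contexts are instances of methods, as objects are instances of classes.
class Method:
    def __init__(self, params, body):
        self.params = params
        self.body = body                       # a function from Context to result

    def call(self, receiver, *args):
        return Context(self, receiver, args).run()   # instantiate, then run

class Context:
    """An activation record, created by instantiating a Method."""
    def __init__(self, method, receiver, args):
        self.method = method                   # the "class" of this context
        self.receiver = receiver               # the implicit receiver
        self.slots = dict(zip(method.params, args))  # arguments become slots

    def run(self):
        return self.method.body(self)

# usage: a method that adds its single argument to a slot of its receiver
add_x = Method(["n"], lambda ctx: ctx.receiver["x"] + ctx.slots["n"])
print(add_x.call({"x": 40}, 2))                # -> 42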

Cheers,
Hans-Martin
Steve Dekorte
2010-07-10 10:22:29 UTC
Permalink
For quite some time I've been pondering the duality of the class/instance and method/context relations. In some sense, a context is an object created by instantiating its method, much like a normal object is instantiated from its class...
Self does just that:

http://labs.oracle.com/self/language.html

Io (following Self's example) does as well. In this recent video:

http://www.infoq.com/interviews/johnson-armstrong-oop

Ralph Johnson talks about how long it takes for computing culture to absorb new ideas (in his example, things like OO, garbage collection and dynamic message passing) despite them being obvious next steps in retrospect. I think prototypes could also be an example of this.

It seems as if each computing culture fails to establish a measure for its own goals, which leaves it with no means of critically analyzing its assumptions, resulting in the technical equivalent of religious dogma. From this perspective, new technical cultures are more like religious reform movements than new scientific theories, which are measured by agreement with experiment. E.g., had the Smalltalk community said "if it can reduce the overall code by >X without a performance cost of >Y, it's better", perhaps prototypes would have been adopted long ago.

- Steve
Brian Rice
2010-07-10 16:14:39 UTC
Permalink
Or multi-methods / multiple dispatch?
Post by Steve Dekorte
It seems as if each computing culture fails to establish a measure for it's
own goals which leaves it with no means of critically analyzing it's
assumptions resulting in the technical equivalent of religious dogma. From
this perspective, new technical cultures are more like religious reform
movements than new scientific theories which are measured by agreement with
experiment. e.g. had the Smalltalk community said "if it can reduce the
overall code >X without a performance cost >Y" it's better, perhaps
prototypes would have been adopted long ago.
--
-Brian T. Rice
BGB
2010-07-10 17:49:04 UTC
Permalink
yeah.

at least in my experience, prototype systems tend to also lead to much
simpler implementations than class/instance systems as well (I have
implemented both types of system).

originally, and for a fairly long time, I had been using prototype systems
(with a design essentially based on the Self spec, rather than the
JavaScript variant which seemed to me to be a degraded form).

then, at one point, I ended up implementing an object system with its API
largely being derived from the JNI spec (at the time I was trying for a Java
implementation, it made sense to emulate JNI and JVMTI and similar...). the
internals of this thing were much larger and much more complex (although the
external API is the most complex part, as a lot of it is a big complex
external API wrapping somewhat simpler internals, with most of the API calls
simply re-calling to other functions).

actually, the C/I system is also capable of handling P-OO as well, but it is
more awkward and has a higher overhead than the other system (and uses the
C/I system's object type, rather than the P-OO system's object type).

I guess some of this is my punishment for emulating Sun's API design, them
wanting to special-case pretty much any imaginable API call, grr... (to see
this special-casing in action, one can note Java's APIs, which have around
30 or so classes dedicated to file IO, ...). (the .NET framework gets by
well enough with only a fraction as many classes for most of this stuff).


technically, both systems share some amount of the underlying machinery, but
are not strictly compatible (the P-OO objects don't work with the C/I API,
but the C/I objects can be used from the P-OO API). there is, however, a
partial split of the C/I API which can also use the P-OO system's objects
(while having more of the "look and feel" of the C/I API).

both systems also use a big hash-table to resolve requests, although with
C/I objects, this is not used in the simple single-inheritance case, but
hash-lookups are used for interfaces and to emulate multiple inheritance
(although the system can't fully emulate C++ MI semantics, nor those of
other MI languages).
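
(For illustration of the lookup machinery described above -- a minimal Python sketch, with invented names, of prototype-style objects where slot access is a hash lookup that delegates to a parent. This is not the actual VM code, and it leaves out the class/instance fast path that avoids the hash in the plain single-inheritance case.)

# Sketch: every object carries its own slot table; lookups walk the parent chain.
class ProtoObject:
    def __init__(self, parent=None):
        self.slots = {}                  # per-object hash table of slots
        self.parent = parent             # delegation target (may be None)

    def lookup(self, name):
        obj = self
        while obj is not None:
            if name in obj.slots:        # hash lookup
                return obj.slots[name]
            obj = obj.parent             # not found: delegate upward
        raise AttributeError(name)

    def send(self, name, *args):         # message send = lookup + call
        return self.lookup(name)(self, *args)

# usage: 'account' inherits 'describe' from 'base' by delegation
base = ProtoObject()
base.slots["describe"] = lambda self: "balance=%d" % self.lookup("balance")
account = ProtoObject(parent=base)
account.slots["balance"] = 100
print(account.send("describe"))          # -> balance=100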


I have had some thoughts before about the possibility of "unifying" the
systems, but have not done so as of yet (having a single "object" type which
works fairly well for both, and with a consistent API and semantics). this
would likely also imply some amount of API simplification, and maybe
dropping some features which were never really used (such as MI and
"structs"). the combined API should address both static and
dynamically-typed usage (the C/I system favors static types, and the P-OO
system uses dynamic types).

however, I don't know if/when I would do any of this (not a terribly high
priority at the moment...).


they also compete with another type of "object" system (also in use in my
case):
using C structs, and then using reflection facilities to work with them.
(although this strategy doesn't have a clean API as of yet).

this strategy is mostly used when working with C code though, and when
interfacing with C.


but, yeah, it is all a bit of a mess...


----- Original Message -----
From: "Steve Dekorte" <***@dekorte.com>
To: "Fundamentals of New Computing" <***@vpri.org>
Sent: Saturday, July 10, 2010 3:22 AM
Subject: Re: [fonc] goals
For quite some time I've been pondering the duality of the class/instance
and method/context relations. In some sense, a context is an object
created by instantiating its method, much like a normal object is
instantiated from its class...
Self does just that:

http://labs.oracle.com/self/language.html

Io (following Self's example) does as well. In this recent video:

http://www.infoq.com/interviews/johnson-armstrong-oop

Ralph Johnson talks about how long it takes for computing culture to absorb
new ideas (in his example, things like OO, garbage collection and dynamic
message passing) despite them being obvious next steps in retrospect. I
think prototypes could also be an example of this.

It seems as if each computing culture fails to establish a measure for it's
own goals which leaves it with no means of critically analyzing it's
assumptions resulting in the technical equivalent of religious dogma. From
this perspective, new technical cultures are more like religious reform
movements than new scientific theories which are measured by agreement with
experiment. e.g. had the Smalltalk community said "if it can reduce the
overall code >X without a performance cost >Y" it's better, perhaps
prototypes would have been adopted long ago.

- Steve
Jecel Assumpcao Jr.
2010-07-11 21:21:10 UTC
Permalink
Steve Dekorte wrote on Sat, 10 Jul 2010 03:22:29 -0700
Post by Steve Dekorte
For quite some time I've been pondering the duality of the class/instance and
method/context relations. In some sense, a context is an object created by
instantiating its method, much like a normal object is instantiated from its class...
http://labs.oracle.com/self/language.html
http://www.infoq.com/interviews/johnson-armstrong-oop
Indeed, but these two languages are, perhaps, even better examples:

http://www.daimi.au.dk/~beta/

http://www.erights.org/elang/index.html
Post by Steve Dekorte
Ralph Johnson talks about how long it takes for computing culture to absorb new
ideas (in his example, things like OO, garbage collection and dynamic message
passing) despite them being obvious next steps in retrospect. I think prototypes
could also be an example of this.
It seems as if each computing culture fails to establish a measure for it's own goals
which leaves it with no means of critically analyzing it's assumptions resulting in the
technical equivalent of religious dogma.
Steve Dekorte
2010-07-11 22:33:37 UTC
Permalink
Post by David Leibs
It isn't
about making it smaller (though I also love that - ColorForth is one of
my favorite systems) but making it understandable so it can be built by
humans in such a way that it can become vast. Like the Internet.
That's a good point. I'd encourage the development of a *measure* of it.
With measures we can iterate. The speed of the feedback loop is only
as tight as our measures. Consider the last 40 years of language/tool
development vs hardware speed (which can be measured) improvements.
Jecel Assumpcao Jr.
2010-07-14 01:32:36 UTC
Permalink
Steve Dekorte wrote on Sun, 11 Jul 2010 15:33:37 -0700
Post by Steve Dekorte
Post by David Leibs
It isn't
about making it smaller (though I also love that - ColorForth is one of
my favorite systems) but making it understandable so it can be built by
humans in such a way that it can become vast. Like the Internet.
That's a good point. I'd encourage the development of a *measure* of it.
With measures we can iterate. The speed of the feedback loop is only
as tight as our measures. Consider the last 40 years of language/tool
development vs hardware speed (which can be measured) improvements.
I don't know how to measure scalability except by growing something
until it breaks. One complication is that as things grow, they tend to
become more valuable, and so more resources are poured into evolving them so
they can continue to grow. At some point you get diminishing returns, but
such things usually go way further than you would have initially
guessed.

Of course, none of this helps if you want to decide which of A and B is
the more scalable option. Obviously the thing is to try to identify the
most critical bottleneck. For high performance computing, for example,
there is the "roof line" model which takes into account memory bandwidth
and floating point performance under different operating conditions:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.156.756&rep=rep1&type=pdf
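
(The heart of the roofline model is a one-line bound: attainable performance is the smaller of peak compute and peak memory bandwidth times the kernel's arithmetic intensity. A tiny sketch with made-up machine numbers, not figures taken from the paper:)

# Roofline bound: attainable GFLOP/s = min(peak_gflops, peak_bw_gbs * intensity)
def roofline(peak_gflops, peak_bw_gbs, intensity_flops_per_byte):
    return min(peak_gflops, peak_bw_gbs * intensity_flops_per_byte)

peak_gflops, peak_bw = 100.0, 25.0       # hypothetical machine
for ai in (0.5, 2.0, 8.0):               # flops per byte of memory traffic
    print(ai, roofline(peak_gflops, peak_bw, ai))
# -> 12.5 and 50.0 (bandwidth-bound), then 100.0 (compute-bound)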

The lack of scalability that I was talking about is where a system
becomes too much for any person to understand. Though lines of code is
very simplistic, Alan has compared code sizes of various projects with
different kinds of books in a few of his talks. There are texts that
almost anybody can read and there are others that nobody ever will.
Certainly you can't understand something you have not read. So that is
one bottleneck. I bet there are others - even in a system that is
reasonably short there might be too many combinations of elements to
understand:

http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html

-- Jecel
Julian Leviston
2010-07-14 02:33:23 UTC
Permalink
Post by Jecel Assumpcao Jr.
The lack of scalability that I was talking about is where a system
becomes too much for any person to understand. Though lines of code is
very simplistic, Alan has compared code sizes of various projects with
different kinds of books in a few of his talks. There are texts that
almost anybody can read and there are others that nobody ever will.
Certainly you can't understand something you have not read. So that is
one bottleneck. I bet there are others - even in a system that is
reasonably short there might be too many combinations of elements to
http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html
-- Jecel
This is extremely interesting to me, because I could ask you very simply:

Do you understand the basic relationship between planets?
Also... of how animal (including our) bodies inter-relate?
Also... of how biological cells interact with each other?

Do you see what my point is? Perhaps if one introduces scale properly, there is no limit to what one can understand.

Julian.
BGB
2010-07-14 03:55:11 UTC
Permalink
yes.

there is much emphasis on people understanding an entire system, whereas often a programmer does not need to have such comprehensive understanding.

in a large codebase, for example, parts of the project will come into view as one works on them, and when one moves elsewhere they may pass away from memory.

with an abstracted system, pretty much the entire system may be viewed essentially as a black-box, with people looking at the externals but paying no attention to what happens on the inside. a person working within this system may in turn see the system in terms of so many black-boxes, and at the same time view what happens externally as a sort of black box (or, as "outside of the current scope").

of course, this sort of thinking generally requires a fair level of standardization at the interfaces between these systems, such that what goes on inside the box does not break what is outside and vice versa.


so, often, one may define their APIs and/or data representations before actually having code in place on either side of the boundary (a non-existent app working against a non-existent library).

but, OTOH, this strategy does seem to scale fairly well, as with multi-million line projects, it is unlikely much of anyone will really look over the whole thing.

however, in these cases, it is fairly important that these boxes also be "open" (as in, the internals of a complex system are left where programmers can access them), since a "black monolith" is in turn of limited utility. so, some of this is tradeoffs.
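
(To make the "define the API before either side exists" point above concrete -- a minimal Python sketch with invented names; the interface is fixed first, and the application and a trivial implementation are later written against it independently.)

from typing import Protocol

class BlobStore(Protocol):
    """Agreed boundary: written before either the app or the library exists."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:                          # one later implementation
    def __init__(self) -> None:
        self._data: dict = {}
    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data
    def get(self, key: str) -> bytes:
        return self._data[key]

def app_code(store: BlobStore) -> bytes:      # written against the interface only
    store.put("greeting", b"hello")
    return store.get("greeting")

print(app_code(InMemoryStore()))              # -> b'hello'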


----- Original Message -----
From: Julian Leviston
To: Fundamentals of New Computing
Sent: Tuesday, July 13, 2010 7:33 PM
Subject: Re: [fonc] goals




On 13/07/2010, at 10:29 PM, Jecel Assumpcao Jr. wrote:


The lack of scalability that I was talking about is where a system
becomes too much for any person to understand. Though lines of code is
very simplistic, Alan has compared code sizes of various projects with
different kinds of books in a few of his talks. There are texts that
almost anybody can read and there are others that nobody ever will.
Certainly you can't understand something you have not read. So that is
one bottleneck. I bet there are others - even in a system that is
reasonably short there might be too many combinations of elements to
understand:

http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html

-- Jecel





This is extremely interesting to me, because I could ask you very simply:


Do you understand the basic relationship between planets?
Also... of how animal (including our) bodies inter-relate?
Also... of how biological cells interact with each other?


Do you see what my point is? Perhaps if one introduces scale properly, there is no limit to what one can understand.


Julian.
Julian Leviston
2010-07-14 04:24:52 UTC
Permalink
Post by BGB
yes.
there is much emphasis on people understanding an entire system, whereas often a programmer does not need to have such comprehensive understanding.
in a large codebase, for example, parts of the project will come into view as one works on them, and when one moves elsewhere they may pass away from memory.
with an abstracted system, pretty much the entire system may be viewed essentially as a black-box, with people looking at the externals but paying no attention to what happens on the inside. a person working within this system may in-turn see the system in terms of so many black-boxes, and at the same time view what happens externally as a sort of black box (or, as "outside of the current scope").
of course, this sort of thinking generally requires a fair level of standardization at the interfaces between these systems, such that what goes on inside the box does not break what is outside and vice versa.
so, often, one may define their APIs and/or data representations before actually having code in place on either side of the boundary (a non-existant app working against a non-existant library).
but, OTOH, this strategy does seem to scale fairly well, as with multi-million line projects, it is unlikely much of anyone will really look over the whole thing.
however, in these cases, it is fairly important that these boxes also be "open" (as in, the internals of a complex system are left where programmers can access them), since a "black monolith" is in turn of limited utility. so, some of this is tradeoffs.
Yes, that's exactly what I was saying. I believe it's called Encapsulation :P Having an interface and passing messages is incredibly misunderstood generally amongst programmers, it seems to me. It's impossible to scale well without an interface to objects. In exactly the same way that cells have cellular walls, animal bodies have skins or planets have atmospheres, objects need interfaces...

My interest lies in "tiny objects"... because small amounts of things are usually more understandable for us humans, and because perhaps there is such a thing as a "Planck feature", meaning the smallest size one can get to and still call it a feature (or object). For example, if I'm considering a certain scale where oranges are distinct objects from other fruits, then perhaps one or ten oranges interacting with each other is very understandable. Two hundred thousand objects interacting is less understandable for humans. But if you look at the scale of orange juice factories, perhaps 200,000 oranges (relabelled as "raw product quantity") is easy to comprehend because it means something different in that context of scale.

Mental visualisation is inherently useful when it comes to comprehensibility of systems. Better than mental visualisation is EXPLICIT graphic visualisation. If we could come to a system that inherently understands context of scale and allows us to present things graphically with relation to this, we'd really have something worth some value for the entirety of humanity IMHO.

Put simply, our systems only seem to represent ONE context of scale and don't allow comprehensive illustration of where we're looking in general (i.e. they lack an awareness of scale-context).

Hopefully all of this is understandable.

Julian.
K. K. Subramaniam
2010-07-14 04:47:25 UTC
Permalink
Post by BGB
there is much emphasis on people understanding an entire system, whereas
often a programmer does not need to have such comprehensive understanding.
I like the way Dan Ingalls introduced his design principles in the article
quoted earlier:
"The purpose of the Smalltalk project is to provide computer support for the
creative spirit in everyone"

As you point out, most of the daily grind requires only a superficial
understanding of concepts. But creative endeavors involve grand leaps of
thought that require a much greater depth of understanding - one that cuts
across subjects and looks beyond vocabulary.

The current generation of computing is sufficient for the first set of tasks. I
hope FONC is geared for the latter.

BTW, the use of metaphors like deep or grand leap does have a basis in wirings
in our brain. See references to long-distance connectivity in:
http://www.scientificamerican.com/article.cfm?id=idle-minds-intelligence

Subbu
BGB
2010-07-14 06:03:19 UTC
Permalink
----- Original Message -----
From: "K. K. Subramaniam" <***@gmail.com>
To: <***@vpri.org>
Cc: "BGB" <***@hotmail.com>
Sent: Tuesday, July 13, 2010 9:47 PM
Subject: Re: [fonc] goals
Post by K. K. Subramaniam
Post by BGB
there is much emphasis on people understanding an entire system, whereas
often a programmer does not need to have such comprehensive
understanding.
I like the way Dan Ingalls introduced his design principles in the article
"The purpose of the Smalltalk project is to provide computer support for the
creative spirit in everyone"
As you point out, most of the daily grind requires only a superficial
understanding of concepts. But creative endeavors involve grand leaps of
thought that require a much greater depth of understanding - one that cuts
across subjects and looks beyond vocabulary.
The current generation of computing is sufficient for the first set of tasks. I
hope FONC is geared for the latter.
BTW, the use of metaphors like deep or grand leap do have a basis in wirings
http://www.scientificamerican.com/article.cfm?id=idle-minds-intelligence
hmm, thinking here...

not sure how this relates to the article, but seems to me about right.


well, a further distinction can be made:
low volume deep thinking;
vs high-volume superficial thinking.

like, a person skilled in low-volume deep thinking would tend to notice many
subtleties, and maybe make a lot of "deeper" connections, but may easily bog
down in other situations (like, say, they have to mentally keep track of the
behaviors of a large number of people, or respond to lots of questions and
jump between several different conversations, and quickly change between
performing multiple tasks possibly at the same time).

a person more skilled in high-volume thinking will do these types of tasks
easily, but may be easily defeated by a task involving only a small number
of items with non-trivial interactions (or may feel cramped and frustrated
being expected to sit in one place and think about a single task for more
than a short period of time...).


or, in analogy to art:
a person can spend many days working on a single piece of artwork, examining
and adjusting many little details for maximum aesthetic effect;
another person can produce many pieces of artwork in a single day, examining
and adjusting many little details of their process for maximum efficiency
and volume.

even though both are artists, they would have notably different abilities
and skillsets, and neither could likely effectively do the job of the other.

the low-volume artist would likely feel bewildered if expected to produce,
say, 25 images in an 8 hour shift;
the high-volume artist would likely feel lost in trying to make an image
that looked like much more than something out of a comic book...


one might try to draw a person posing by some vines and a lamp-post, putting
much effort into the reflections on the glass and metal, and on the shimmer
of the water on the lake in the background.

the other might try to draw a person right in the middle of being blown
apart by machine-gun fire from a large robot with chunks partly detached and
in the (implied) process of flying off. their idea then of improving the
aesthetics is to start putting sparks on the ground and spent bullet casings
on the ground near the robot, and then they lose direction after approx 30
minutes or so...


something sort of like this could also matter WRT programming language
preferences as well...

then again, it could also be related to psychology as well (not sure, like
where people of different temperaments and using different cognitive
functions would likely fall on these sorts of scales).


or such...
John Zabroski
2010-07-12 01:13:10 UTC
Permalink
Steve,

Something pointed out to me by Microsoft Silverlight -and- Expression Blend
architect John Gossman [1] is that eventually these issues get resolved, but
the process is pretty ugly. He linked this book as a reference point
http://www.amazon.com/Strangest-Man-Quantum-Genius-Farmelo/dp/0571222781
One of Alan's goals is figuring out how we can compress the timespan for
going through this process; read the NSF stuff about "from nothing"
bootstrapping as an example.

[1] John is widely respected inside Redmond, because he is so good at taking
complex formulations of ideas and distilling them down into simple
formalisms.
For quite some time I've been pondering the duality of the class/instance
and method/context relations. In some sense, a context is an object created
by instantiating its method, much like a normal object is instantiated from
its class...
http://labs.oracle.com/self/language.html
http://www.infoq.com/interviews/johnson-armstrong-oop
Ralph Johnson talks about how long it takes for computing culture to absorb
new ideas (in his example, things like OO, garbage collection and dynamic
message passing) despite them being obvious next steps in retrospect. I
think prototypes could also be an example of this.
It seems as if each computing culture fails to establish a measure for it's
own goals which leaves it with no means of critically analyzing it's
assumptions resulting in the technical equivalent of religious dogma. From
this perspective, new technical cultures are more like religious reform
movements than new scientific theories which are measured by agreement with
experiment. e.g. had the Smalltalk community said "if it can reduce the
overall code >X without a performance cost >Y" it's better, perhaps
prototypes would have been adopted long ago.
- Steve
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Max OrHai
2010-07-12 02:20:29 UTC
Permalink
Post by Steve Dekorte
It seems as if each computing culture fails to establish a measure for it's
own goals which leaves it with no means of critically analyzing it's
assumptions resulting in the technical equivalent of religious dogma. From
this perspective, new technical cultures are more like religious reform
movements than new scientific theories which are measured by agreement with
experiment. e.g. had the Smalltalk community said "if it can reduce the
overall code >X without a performance cost >Y" it's better, perhaps
prototypes would have been adopted long ago.
But code size versus performance is only one of many concurrent trade-offs,
when it comes to defining 'better'. Different individuals or groups can have
legitimately different needs. The more people are involved (and the more
invested they are), the more difficult the consensus-building process.
Measurements can help, but they are human artifacts as well, in their own
way. They don't necessarily pull you up out of the muck of the human
political process.

I'd say the issue isn't with computing culture per se, but with culture in
general. There's a big gap between Science as the rational, disinterested
pursuit of knowledge and *any* engaged "technical culture", even of people
as enlightened (and as few) as Smalltalkers.

-- Max
John Zabroski
2010-07-13 23:01:03 UTC
Permalink
Of course, in many ways, code size is not at all related to performance, and
you might have discovered the smallest code size to model a problem, but
that code size has a "bug" in that its performance is mostly a function of
evaluation strategy (e.g., call-by-need performance model is not even
compositional with respect to the lexicographic syntax!). If we separate
meaning from specification, then this is no longer true, but we've increased
complexity in the meta-interpreter. It is not "the muck of the human
political process" we are trying to get out of. Instead, it is the Turing
tarpit we are trying to step out of. Already, many of our systems are like
fossils.

For example, traditional measures of software complexity, such as Cyclomatic
Complexity, are basically metrics on parse trees. You simply count the
appearance of a set of static productions, and you get the rough idea for
the complexity of the software. But these metrics don't apply well to
languages that have cores that make heavy use of partial application. Can
you spot the bug in [1]? Another measure of complexity is the Dependency
Structure Matrix (DSM), which measures dependencies. But these
"dependencies" are based on linking and loading dependencies -- again,
evaluation strategy -- and not problem domain abstraction issues. Actually,
a DSM in a way does show problem domain abstraction issues, but it looks at
them in terms of dependencies. Likewise, Cyclomatic Complexity does show
problem domain abstraction issues, but it looks at them in terms of the
degree to which you are not explicitly modeling the context in which
messages pass to and from objects. Neither is the true thing: The real
measure is simply how well your problem domain is abstracted, which is
subjective and based on requirements in most problems and cannot be
summarized with algebraic equations -- math is only ONE problem domain, and
if we base our non-math systems entirely on functional decomposition then we
will get spaghetti code as a result, since modeling non-math systems as math
systems is a problem domain abstraction error.

[1] http://www.cs.stir.ac.uk/~kjt/techreps/pdf/TR141.pdf FOR FUN: Where is
the bug here? The authors claim they are measuring the *economic*
expressiveness of languages. Show me at least one counter-example and
explain why your counter-example shows this is a cargo cult science (there
are many famous programming language papers about this, and you can feel
free to just point to one of those).
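
(To ground the "metrics on parse trees" remark: a crude cyclomatic-complexity counter really is just a walk over the syntax tree counting branch constructs, which is exactly why it is blind to partial application and evaluation strategy. A minimal Python sketch -- not McCabe's control-flow-graph definition and not any standard tool:)

import ast

def cyclomatic(source):
    """1 + number of decision points found in the parse tree."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):     # 'and'/'or' chains
            decisions += len(node.values) - 1
    return 1 + decisions

print(cyclomatic("def f(x):\n    if x > 0 and x < 10:\n        return x\n    return 0"))
# -> 3; the count says nothing about how f composes, curries, or gets evaluated
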
Post by Max OrHai
Post by Steve Dekorte
It seems as if each computing culture fails to establish a measure for
it's own goals which leaves it with no means of critically analyzing it's
assumptions resulting in the technical equivalent of religious dogma. From
this perspective, new technical cultures are more like religious reform
movements than new scientific theories which are measured by agreement with
experiment. e.g. had the Smalltalk community said "if it can reduce the
overall code >X without a performance cost >Y" it's better, perhaps
prototypes would have been adopted long ago.
But code size versus performance is only one of many concurrent trade-offs,
when it comes to defining 'better'. Different individuals or groups can have
legitimately different needs. The more people are involved (and the more
invested they are), the more difficult the consensus-building process.
Measurements can help, but they are human artifacts as well, in their own
way. They don't necessarily pull you up out of the muck of the human
political process.
I'd say the issue isn't with computing culture per se, but with culture in
general. There's a big gap between Science as the rational, disinterested
pursuit of knowledge and *any* engaged "technical culture", even of people
as enlightened (and as few) as Smalltalkers.
-- Max
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Alan Kay
2010-07-14 04:05:11 UTC
Permalink
I think this is the real point here, and you put it well.

One of the books I've admired greatly over the years is "The Molecular Biology
of The Cell" -- especially the "green one" (3rd edition?) -- which does a
terrific job of starting with a few findings of chemistry and spinning out a
pretty good and deep coverage of the subject in about 1000 extremely readable
pages.

This is not quite a good enough analogy for what we are talking about, because
it doesn't have to be constructive.


If we look at elementary geometry books and decide -- subjectively as you say --
that one seems to be the clearest description of how to construct geometry up to
some point, then this is perhaps a much better analogy because we've got at
least three different representation systems working together with the important
indefinable of "style" that brings life to structures.

Hard to measure .... and not everyone agrees that a particular artifact is
worthy of great respect.

On the other hand, we've all had encounters with such creations, and have felt
lifted by them.

Cheers,

Alan




________________________________
From: John Zabroski <***@gmail.com>
To: Fundamentals of New Computing <***@vpri.org>
Sent: Tue, July 13, 2010 4:01:03 PM
Subject: Re: [fonc] goals

Of course, in many ways, code size is not at all related to performance, and you
might have discovered the smallest code size to model a problem, but that code
size has a "bug" in that its performance is mostly a function of evaluation
strategy (e.g., call-by-need performance model is not even compositional with
respect to the lexicographic syntax!). If we separate meaning from
specification, then this is no longer true, but we've increased complexity in
the meta-interpreter. It is not "the muck of the human political process" we
are trying to get out of. Instead, it is the Turing tarpit we are trying to
step out of. Already, many of our systems are like fossils.

For example, traditional measures of software complexity, such as Cyclomatic
Complexity, are basically metrics on parse trees. You simply count the
appearance of a set of static productions, and you get the rough idea for the
complexity of the software. But these metrics don't apply well to languages
that have cores that make heavy use of partial application. Can you spot the
bug in [1]. Another measure of complexity is the Dependency Structure Matrix
(DSM), which measures dependencies. But these "dependencies" are based on
linking and loading dependencies -- again, evaluation strategy -- and not
problem domain abstraction issues. Actually, a DSM in a way does show problem
domain abstraction issues, but it looks at them in terms of dependencies.
Likewise, Cyclomatic Complexity does show problem domain abstraction issues, but
it looks at them in terms of the degree to which you are not explicitly modeling
the context in which messages pass to and from objects. Neither is the true
thing: The real measure is simply how well your problem domain is abstracted,
which is subjective and based on requirements in most problems and cannot be
summarized with algebraic equations -- math is only ONE problem domain, and if
we base our non-math systems entirely on functional decomposition then we will
get spaghetti code as a result, since modeling non-math systems as math systems
is a problem domain abstraction error.

[1] http://www.cs.stir.ac.uk/~kjt/techreps/pdf/TR141.pdf FOR FUN: Where is the
bug here? The authors claim they are measuring the *economic* expressiveness of
languages. Show me at least one counter-example and explain why your
counter-example shows this is a cargo cult science (there are many famous
programming language papers about this, and you can feel free to just point to
one of those).
Post by Steve Dekorte
It seems as if each computing culture fails to establish a measure for it's own
goals which leaves it with no means of critically analyzing it's assumptions
Reuben Thomas
2010-07-14 10:59:34 UTC
Permalink
[1] http://www.cs.stir.ac.uk/~kjt/techreps/pdf/TR141.pdf  FOR FUN: Where is
the bug here?  The authors claim they are measuring the *economic*
expressiveness of languages.
I think I don't really follow you here (you seem in a slightly
whimsical mood), but I just thought I'd point out that the authors
claim no such thing. The word "economic" does not appear in the paper.
"Economically" appears once, and the context is: "Halstead’s claim is
that the higher the mean language level the more powerful the language
is. We prefer instead to say ‘more expressive’ by which we mean that
the same algorithm can be expressed more economically." That is, the
authors are talking about economy, i.e. brevity, of expression, and
not about economics.

If by the reference to cargo cults, you mean a sort of inverse cargo
cult in which the shorter the program, the simpler it's supposed to
be, I say "hear hear!" while distancing myself from the concordant
cheers of the APL-haters.
--
http://rrt.sc3d.org
John Zabroski
2010-07-14 14:56:42 UTC
Permalink
Well, you're right. The way I phrased it isn't at all proper. I meant the
authors were using economy of expression [1] as their metric. In
programming languages lingo, the phrase "more expressive" the authors use is
co-opting the meaning of expressive as defined by Felleisen's expressiveness
framework [2] or Hewitt and Paterson's Comparative Schematology [3] [4]. I
am asking a general question: Why is economy of expression deceitful?

[1] http://c2.com/cgi/wiki?EconomyOfExpression
[2] http://www.ccs.neu.edu/scheme/pubs/scp91-felleisen.ps.gz
[3] http://dspace.mit.edu/handle/1721.1/6291
[4] http://dspace.mit.edu/handle/1721.1/5849
Post by John Zabroski
[1] http://www.cs.stir.ac.uk/~kjt/techreps/pdf/TR141.pdf
FOR FUN: Where is
the bug here? The authors claim they are measuring the *economic*
expressiveness of languages.
I think I don't really follow you here (you seem in a slightly
whimsical mood), but I just thought I'd point out that the authors
claim no such thing. The word "economic" does not appear in the paper.
"Economically" appears once, and the context is: "Halstead’s claim is
that the higher the mean language level the more powerful the language
is. We prefer instead to say ‘more expressive’ by which we mean that
the same algorithm can be expressed more economically." That is, the
authors are talking about economy, i.e. brevity, of expression, and
not about economics.
If by the reference to cargo cults, you mean a sort of inverse cargo
cult in which the shorter the program, the simpler it's supposed to
be, I say "hear hear!" while distancing myself from the concordant
cheers of the APL-haters.
--
http://rrt.sc3d.org
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Carl Gundel
2010-07-09 18:40:59 UTC
Permalink
David Leibs
Sent: Friday, July 09, 2010 1:33 PM
To: Fundamentals of New Computing
Subject: Re: [fonc] goals
It isn't that the programmer has little idea of what he is doing. Things
just take time to be transformed into an optimal form.

The programmer must also be given permission to take that time. I can
remember only once in 23 years of software development when my employer
actually told everyone to stop and clean up the code for 2 weeks. No new
features, just make the code better. I was astonished, and very pleased.
This sort of thing should be done as a matter of course.

Alan and Co. seem to me to be trying to make up for decades of our industry
refusing to clean up the code. What is 20 million lines of code can be 20
thousand. If you can apply the 80/20 rule to this in the sense that you can
provide people with a lever that does 80 percent of what you need, and if
the system is tractable for motivated casual coders so that they can provide
the other 20 percent, then you have something really valuable.

Early home computers provided some of this quality in spite of their
shortcomings. The machines and languages were simple enough so that anyone
with enough attention span to read a 250 page book could bend the computer
to his will. That's a vision of personal computing. I think this idea is
lost in our popular culture.

-Carl Gundel
Psyche Systems
Max OrHai
2010-07-09 18:47:18 UTC
Permalink
Just to clarify, I'm a bit uncomfortable with "productivity" talk here
because it seems too narrow and ill-defined. Productivity of what
exactly? By whom? For whom? To what end? To a specific manager of a specific
project in a specific development phase, these questions may have specific,
meaningful answers. When it comes to fundamentally rethinking basic tools
and practices, I'm not so sure.

Of course, core values must be somewhat vague, to allow them to mesh with
constantly changing circumstances. Personally, I'd rather strive for
"quality" than "productivity". I'm generally suspicious of premature
quantification: just because you can measure something doesn't make it
meaningful!

It seems to me that, as crufty, haphazard, hidebound, etc. as "software
engineering" is today, "software engineering management" (with its
"productivity metrics" such as "source lines of code per programmer per
day") are even worse. We all know code quality varies wildly between
programmers using the exact same sets of tools. Talent and training
contribute enormously. However, I imagine that everyone on this list can
agree that the tools themselves matter too, even if it's difficult to
quantify that difference precisely.

"Keep it simple" is a widely applicable and successful heuristic. I see this
project as (largely) an experiment in applying that heuristic to the fairly
well-defined category of "creating the personal computing experience", with
a depth and breadth impossible in the productivity/profit-bound world of
commercial software, and a consistency and quality level impossible in the
traditional open-source project. It's just an experiment, though. It's *
research* (or, if you prefer, "intellectual masturbation"). If we already
knew the outcome, it wouldn't be research, would it?

-- Max
Post by BGB
the programmer has little idea what he was doing, and so just wildly
copy-pasted everywhere and made a big mess?...
has lots of code which is actually beneficial, such as doing error checking
and building abstractions.
the programmer is good at getting work done in less code?
or because the code is essentially a tangled mess of hacks?
It isn't that the programmer has little idea of what he is doing. Things
just take time to be transformed into an optimal form.
There is a good example from the history from math, and physics that
illustrates the point. Maxwells equations originally applied to a set of
eight equations published by Maxwell in 1865. After that the number of
equations escalated to twenty equations in twenty unknowns as people
struggled with the implications. Maxwell wrestled with recasting the
equations in quaternion form. Time passed. It was all very ugly. Finally
In 1884 Oliver Heaviside recast Maxwell's math from the then cumbersome form
to its modern vector <http://en.wikipedia.org/wiki/Vector_(geometric)> calculus
notation, thereby reducing the twenty equations in twenty unknowns down to
the four differential equations in two unknowns that we all love and call
"Maxwells equations". Heaviside invented the modern notation giving us the
tools to make sense of something very profound and useful. Good work on
hard things takes time plus a lot of good people that care.
cheers,
-David Leibs
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
John Zabroski
2010-07-09 19:16:36 UTC
Permalink
Just to be clear,

The foremost experts and definitive source on software metrics -- Fenton and
Pfleeger [1] -- do not really support SLOC/day/programmer as a good metric
for productivity. It seems to me (from hearing reports by others) that most
people do not actually read books on metrics and instead gravitate towards
the simplest ones, regardless of effectiveness.

Usually SLOC/day/programmer is a good way, though, to convince your boss
that a project predicted to be 300,000 lines of brute force coding cannot be
done in a weekend. The argument being you literally cannot type that fast.
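
(Back of the envelope, with illustrative numbers only:)

# 300,000 lines over a 48-hour weekend of literally nonstop typing:
lines, weekend_hours = 300_000, 48
print(lines / (weekend_hours * 60))     # -> ~104 lines per minute, sustained
# A fast typist produces only a handful of lines of code per minute before any
# thinking, so the schedule fails on typing speed alone.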

Cheers,
Z-Bo

[1] http://www.amazon.com/Software-Metrics-Norman-E-Fenton/dp/0534956009
Post by Max OrHai
Just to clarify, I'm a bit uncomfortable with "productivity" talk here
because it seems too narrow and ill-defined. Productivity of what
exactly? By whom? For whom? To what end? To a specific manager of a specific
project in a specific development phase, these questions may have specific,
meaningful answers. When it comes to fundamentally rethinking basic tools
and practices, I'm not so sure.
Of course, core values must be somewhat vague, to allow them to mesh with
constantly changing circumstances. Personally, I'd rather strive for
"quality" than "productivity". I'm generally suspicious of premature
quantification: just because you can measure something doesn't make it
meaningful!
It seems to me that, as crufty, haphazard, hidebound, etc. as "software
engineering" is today, "software engineering management" (with its
"productivity metrics" such as "source lines of code per programmer per
day") are even worse. We all know code quality varies wildly between
programmers using the exact same sets of tools. Talent and training
contribute enormously. However, I imagine that everyone on this list can
agree that the tools themselves matter too, even if it's difficult to
quantify that difference precisely.
"Keep it simple" is a widely applicable and successful heuristic. I see
this project as (largely) an experiment in applying that heuristic to the
fairly well-defined category of "creating the personal computing
experience", with a depth and breadth impossible in the
productivity/profit-bound world of commercial software, and a consistency
and quality level impossible in the the traditional open-source project.
It's just an experiment, though. It's *research* (or, if you prefer,
"intellectual masturbation"). If we already knew the outcome, it wouldn't be
research, would it?
-- Max
Post by BGB
the programmer has little idea what he was doing, and so just wildly
copy-pasted everywhere and made a big mess?...
has lots of code which is actually beneficial, such as doing error
checking and building abstractions.
the programmer is good at getting work done in less code?
or because the code is essentially a tangled mess of hacks?
It isn't that the programmer has little idea of what he is doing. Things
just take time to be transformed into an optimal form.
There is a good example from the history from math, and physics that
illustrates the point. Maxwells equations originally applied to a set of
eight equations published by Maxwell in 1865. After that the number of
equations escalated to twenty equations in twenty unknowns as people
struggled with the implications. Maxwell wrestled with recasting the
equations in quaternion form. Time passed. It was all very ugly. Finally
In 1884 Oliver Heaviside recast Maxwell's math from the then cumbersome form
to its modern vector <http://en.wikipedia.org/wiki/Vector_%28geometric%29> calculus
notation, thereby reducing the twenty equations in twenty unknowns down to
the four differential equations in two unknowns that we all love and call
"Maxwells equations". Heaviside invented the modern notation giving us the
tools to make sense of something very profound and useful. Good work on
hard things takes time plus a lot of good people that care.
cheers,
-David Leibs
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
David Leibs
2010-07-09 19:40:49 UTC
Permalink
I am somewhat dyslexic and I don't always read things in the right
order so I read
SLOC/day/programmer

as
SHLOCK/day/programmer

it fits in a negative metric kinda way. Maybe it is a meme we should
unleash on our overlings.

-djl
Post by John Zabroski
Just to be clear,
The foremost experts and definitive source on software metrics --
Fenton and Pfleeger [1] -- do not really support SLOC/day/programmer
as a good metric for productivity. It seems to me (from hearing
reports by others) that most people do not actually read books on
metrics and instead gravitate towards the simplest ones, regardless
of effectiveness.
Usually SLOC/day/programmer is a good way, though, to convince your
boss that a project predicted to be 300,000 lines of brute force
coding cannot be done in a weekend. The argument being you
literally cannot type that fast.
Cheers,
Z-Bo
[1] http://www.amazon.com/Software-Metrics-Norman-E-Fenton/dp/0534956009
Just to clarify, I'm a bit uncomfortable with "productivity" talk
here because it seems too narrow and ill-defined. Productivity of
what exactly? By whom? For whom? To what end? To a specific manager
of a specific project in a specific development phase, these
questions may have specific, meaningful answers. When it comes to
fundamentally rethinking basic tools and practices, I'm not so sure.
Of course, core values must be somewhat vague, to allow them to mesh
with constantly changing circumstances. Personally, I'd rather
strive for "quality" than "productivity". I'm generally suspicious
of premature quantification: just because you can measure something
doesn't make it meaningful!
It seems to me that, as crufty, haphazard, hidebound, etc. as
"software engineering" is today, "software engineering
management" (with its "productivity metrics" such as "source lines
of code per programmer per day") are even worse. We all know code
quality varies wildly between programmers using the exact same sets
of tools. Talent and training contribute enormously. However, I
imagine that everyone on this list can agree that the tools
themselves matter too, even if it's difficult to quantify that
difference precisely.
"Keep it simple" is a widely applicable and successful heuristic. I
see this project as (largely) an experiment in applying that
heuristic to the fairly well-defined category of "creating the
personal computing experience", with a depth and breadth impossible
in the productivity/profit-bound world of commercial software, and a
consistency and quality level impossible in the the traditional open-
source project. It's just an experiment, though. It's research (or,
if you prefer, "intellectual masturbation"). If we already knew the
outcome, it wouldn't be research, would it?
-- Max
On Fri, Jul 9, 2010 at 10:33 AM, David Leibs
Post by BGB
the programmer has little idea what he was doing, and so just
wildly copy-pasted everywhere and made a big mess?...
has lots of code which is actually beneficial, such as doing error
checking and building abstractions.
the programmer is good at getting work done in less code?
or because the code is essentially a tangled mess of hacks?
It isn't that the programmer has little idea of what he is doing.
Things just take time to be transformed into an optimal form.
There is a good example from the history from math, and physics that
illustrates the point. Maxwells equations originally applied to a
set of eight equations published by Maxwell in 1865. After that the
number of equations escalated to twenty equations in twenty unknowns
as people struggled with the implications. Maxwell wrestled with
recasting the equations in quaternion form. Time passed. It was all
very ugly. Finally In 1884 Oliver Heaviside recast Maxwell's math
from the then cumbersome form to its modern vector calculus
notation, thereby reducing the twenty equations in twenty unknowns
down to the four differential equations in two unknowns that we all
love and call "Maxwells equations". Heaviside invented the modern
notation giving us the tools to make sense of something very
profound and useful. Good work on hard things takes time plus a lot
of good people that care.
cheers,
-David Leibs
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Richard Karpinski
2010-07-10 04:54:35 UTC
Permalink
Max,

You mention "software engineering management" which reminds me of Tom Gilb's
book "Principles of Software Engineering Management" which is still a
favorite of mine even though he replaced it with his more recent
"Competitive Engineering". He begins any management exercise by defining six
to ten measurable goals, and specifying how the numbers shall be calculated.
These are usually quality goals with specific numbers attached.

If "easily learned" is a chosen goal, it is accompanied by selected training
and testing procedures to find the numbers which measure progress toward
that goal. Inventing ways to meet multiple goals is what he means by the
term engineering. Lines of code is a number all right, but it's not often a
useful number for the stakeholders paying for the effort. Minutes to
complete a specified task after a day of training and practice might more
likely be a useful target.

Perhaps you'd like to have a few of those quality goals. It often seems that
the only real goal for software projects is delivery date since the others
all give way in the effort to meet that one. I hate it when that happens.
"No time to do it well, but always time to do it over" is a frequent
complaint.

Richard
Post by Max OrHai
Just to clarify, I'm a bit uncomfortable with "productivity" talk here
because it seems too narrow and ill-defined. Productivity of what
exactly? By whom? For whom? To what end? To a specific manager of a specific
project in a specific development phase, these questions may have specific,
meaningful answers. When it comes to fundamentally rethinking basic tools
and practices, I'm not so sure.
Of course, core values must be somewhat vague, to allow them to mesh with
constantly changing circumstances. Personally, I'd rather strive for
"quality" than "productivity". I'm generally suspicious of premature
quantification: just because you can measure something doesn't make it
meaningful!
It seems to me that, as crufty, haphazard, hidebound, etc. as "software
engineering" is today, "software engineering management" (with its
"productivity metrics" such as "source lines of code per programmer per
day") are even worse. We all know code quality varies wildly between
programmers using the exact same sets of tools. Talent and training
contribute enormously. However, I imagine that everyone on this list can
agree that the tools themselves matter too, even if it's difficult to
quantify that difference precisely.
"Keep it simple" is a widely applicable and successful heuristic. I see
this project as (largely) an experiment in applying that heuristic to the
fairly well-defined category of "creating the personal computing
experience", with a depth and breadth impossible in the
productivity/profit-bound world of commercial software, and a consistency
and quality level impossible in the traditional open-source project.
It's just an experiment, though. It's *research* (or, if you prefer,
"intellectual masturbation"). If we already knew the outcome, it wouldn't be
research, would it?
-- Max
Post by BGB
the programmer has little idea what he was doing, and so just wildly
copy-pasted everywhere and made a big mess?...
has lots of code which is actually beneficial, such as doing error
checking and building abstractions.
the programmer is good at getting work done in less code?
or because the code is essentially a tangled mess of hacks?
It isn't that the programmer has little idea of what he is doing. Things
just take time to be transformed into an optimal form.
There is a good example from the history of math and physics that
illustrates the point. "Maxwell's equations" originally referred to a set of
eight equations published by Maxwell in 1865. After that the number of
equations escalated to twenty equations in twenty unknowns as people
struggled with the implications. Maxwell wrestled with recasting the
equations in quaternion form. Time passed. It was all very ugly. Finally,
in 1884, Oliver Heaviside recast Maxwell's math from the then cumbersome form
to its modern vector calculus
(http://en.wikipedia.org/wiki/Vector_(geometric)) notation, thereby reducing
the twenty equations in twenty unknowns down to the four differential
equations in two unknowns that we all love and call "Maxwell's equations".
Heaviside invented the modern notation, giving us the tools to make sense of
something very profound and useful. Good work on hard things takes time plus
a lot of good people who care.
cheers,
-David Leibs
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
--
Richard Karpinski, Nitpicker extraordinaire
148 Sequoia Circle,
Santa Rosa, CA 95401
Home: 707-546-6760
http://nitpicker.pbwiki.com/
spir
2010-07-09 07:56:01 UTC
Permalink
On Thu, 8 Jul 2010 18:01:36 -0700 (PDT)
...seeing if very compact "runable maths" could be invented and built to model...
Isn't this a good definition of Lisp?

Denis
________________________________

vit esse estrany ☣

spir.wikidot.com
Steve Dekorte
2010-07-08 22:46:46 UTC
Permalink
It's been said that each generation thinks that it invented sex.
Could the same be said of depression?
I personally do not believe technology actually improves lives. Usually, it is the opposite. Technology creates instant gratification and addiction to it thereof, and the primary reason we are so addicted to technology is because we have become so empty inside.
BGB
2010-07-09 00:00:49 UTC
Permalink
(pardon this thread's continued existence...).

well, each generation has to have come from somewhere...


but, yeah (prior to the big ethics issue), the general point I tried to make
is that it is generally much more a matter of pragmatics (what people can
get from technology or use it to accomplish), rather than peoples' feelings
about technology per-se.

in general, I think I was being positive, since in all we see a lot more
benefit than harm in this world, and probably, as a whole, the world is
better now than it once was.


now, whether a person wants to view things like possessions as a form of
transferable happiness, or conclude that maybe, happiness doesn't actually
exist in the first place (and is thus likely an irrelevant consideration in
matters of technology), this is up to them...

like, maybe, it really doesn't matter if a person is happy or not?...

either way, they still have their job and any life stuff that needs to be
done.
technology can at least help a person get done those things they want or
need to get done, regardless of whether or not the object has any real
beneficial effect on their mood.


hell, maybe even the net emotional effect is good, as people who otherwise
might be sitting off in the corner somewhere in isolation, being all
depressed and similar due to their lack of friends (and awaiting the day
when their life will end, maybe wondering if "life" even really exists in
the first place, or if it is merely some pattern of emergence due to so many
particle interactions simply masquerading as some greater order, ...), can
now find some semblance of a social life posting about how depressed they
are and how dismal and hopeless life is and whatever onto online forums, and
then go and distract themselves by racking up kills in online games (like
Counter Strike or Unreal Tournament or similar...).

so, in this sense, "happiness" flows over wires and is emitted from computer
and TV screens...


or such...


----- Original Message -----
From: "Steve Dekorte" <***@dekorte.com>
To: "Fundamentals of New Computing" <***@vpri.org>
Sent: Thursday, July 08, 2010 3:46 PM
Subject: Re: [fonc] goals



It's been said that each generation thinks that it invented sex.
Could the same be said of depression?
Post by John Zabroski
I personally do not believe technology actually improves lives. Usually,
it is the opposite. Technology creates instant gratification and
addiction to it thereof, and the primary reason we are so addicted to
technology is because we have become so empty inside.
John Zabroski
2010-07-09 17:08:13 UTC
Permalink
Folks,

Relax yourselves. I guess my comments were a combination of vague and
evocative, and launched reactions in many directions. I just was speaking
from personal experience. There is nothing more fulfilling and rewarding
than building technology that improves lives, and computer science has a lot
to say about that -- but the wisdom is usually counter-intuitive. Most
people struggle with great design because the difference between a great
design and a good design is often remarkably subtle, and often the
approaches are polar opposites. I remember a quote in Alan Kay's *Early
History of Smalltalk* that has stuck with me for a long time:

One of the interesting features of NLS was that its user interface was
parametric and could be supplied by the end user in the form of a "grammar
of interaction" given in their compiler-compiler TreeMeta. This was similar
to William Newman's early "Reaction Handler" [Newman 66] work in specifying
interfaces by having the end-user or developer construct through tablet and
stylus an iconic regular expression grammar with action procedures at the
states (NLS allowed embeddings via its context free rules). This was
attractive in many ways, particularly William's scheme, but *to me there
was a monstrous bug in this approach.* *Namely, these grammars forced the
user to be in a system state which required getting out of before any new
kind of interaction could be done.* In hierarchical menus or "screens" one
would have to backtrack to a master state in order to go somewhere else. *What
seemed to be required were states in which there was a transition arrow to
every other state--not a fruitful concept in formal grammar theory. In other
words, a much "flatter" interface seemed called for--but could such a thing
be made interesting and rich enough to be useful? *
That's a GREAT question. How do you take a set of design constraints that
looks boring to formal grammar theory academic circles and turn it into a
model that is actually useful? One idea could be that it isn't actually a
boring formal grammar theory question. Maybe we're just not phrasing the
question the right way? Perhaps we haven't looked closely at the problem
domain, and so it only seems boring from a superficial distance.
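
As a toy illustration of the difference (my own sketch; nothing here
comes from NLS or Newman's Reaction Handler): a hierarchical interface
forces the user to back out to a root state before switching activities,
while a "flat" interface makes every command legal in every state, i.e. a
transition arrow from each state to every other state.

# Toy sketch, hypothetical names throughout.

# Hierarchical: each state only accepts its own commands, plus "back".
MENUS = {
    "root": {"edit", "draw", "quit"},
    "edit": {"cut", "paste"},
    "draw": {"line", "circle"},
}

def hierarchical_step(state, command):
    if command == "back":
        return "root"                      # must backtrack to the master state
    if command in MENUS[state]:
        # descend into a submenu, or stay put after a leaf command
        return command if command in MENUS else state
    raise ValueError("%r unavailable in %r; back out to root first" % (command, state))

# Flat: one global command table; any command works from any state.
FLAT_COMMANDS = {"cut", "paste", "line", "circle", "quit"}

def flat_step(state, command):
    if command in FLAT_COMMANDS:
        return command                     # every state reaches every other directly
    raise ValueError("unknown command %r" % command)

# hierarchical_step("edit", "line") raises; flat_step("edit", "line") == "line".

The hard part Kay points to is not the transition structure itself, but
making that much always-available surface rich enough to be useful.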
It's been said that each generation thinks that it invented sex.
Could the same be said of depression?
Post by John Zabroski
I personally do not believe technology actually improves lives. Usually,
it is the opposite. Technology creates instant gratification and addiction
to it thereof, and the primary reason we are so addicted to technology is
because we have become so empty inside.
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Paul D. Fernhout
2010-07-13 18:12:13 UTC
Permalink
Post by John Zabroski
I personally do not believe technology actually improves lives. Usually, it
is the opposite. Technology creates instant gratification and addiction to
it thereof, and the primary reason we are so addicted to technology is
because we have become so empty inside.
Related to your point, but from a different direction, here are two books on
the theory of where things went so wrong in our society, by two different
authors who say essentially the same thing but don't cite each other. :-)

"The Pleasure Trap: Mastering the Hidden Force That Undermines Health &
Happiness"
http://www.amazon.com/Pleasure-Trap-Mastering-Undermines-Happiness/dp/1570671508
http://www.healthpromoting.com/Articles/articles/PleasureTrap.htm

"Supernormal Stimuli: How Primal Urges Overran Their Evolutionary Purpose"
http://www.amazon.com/Supernormal-Stimuli-Overran-Evolutionary-Purpose/dp/039306848X
http://en.wikipedia.org/wiki/Supernormal_Stimuli

The basic idea is that humans seek various things that kept us healthy and
yet were rare in the past. However, because our technology has now made some
of those things abundant (like concentrated calories with few phytonutrients
http://www.drfuhrman.com or getting the novelty we want without going
outside http://www.vitamindcouncil.org/treatment.shtml or novelty and
strong emotion in media), we become desensitized to normal healthy stimuli
and begin to crave the supernormal stimuli as the new normal. But,
ironically, that seeking of supernormal stimuli brings no lasting net
increase in happiness (since we return to a baseline of enjoyment), and
meanwhile the supernormal stimulus often causes health problems (for
example, too much salt, too much fat, too much raw sugar, or too much media
violence).

Those ideas were developed first about food and then about media, but it
makes me wonder how those ideas could be applied to healthy computing in
general?

One obvious example to me is the proliferation of eye candy and pointless
moving graphics in computer interfaces that ultimately just distracts from
usability, but is really novel and intriguing for a brief time. The plainer
interfaces may then get ignored by people used to the new flashy ones, even
if the flashy ones may be less usable over the long term.

I wonder if one could say the same about programming languages, like the
plain Smalltalk syntax against the whiz bang kitchen-sink Java syntax that
seems fancier at first?

Or, as many people enjoy coding as a logical challenge that keeps us in a
sense of flow or provides us with various other rewards, could we be
drowning ourselves in too much code? :-) I've long thought that 99%+ of
software out there is not only unnecessary, but just makes excessive work
for everyone to deal with the incompatibilities. :-) Also related on how so
much of our socio-industrial base is about unnecessary "make work":
http://www.whywork.org/rethinking/whywork/abolition.html

There are obviously a host of other reasons why things get complicated and
complex; I'm just suggesting this as one reason that is not yet discussed
that much.

Anyway, in that sense, could FONC be seen as trying from a computing
perspective to address "The Pleasure Trap" of technology leading us,
counter-productively, to ultimately be drowning in endless make-work code
and tons of fancy but unneeded widgets? And where the end result of that
pleasure trap of all the eye candy or syntactic sugar is that we are
ultimately worse off than if we stuck with simpler systems, like, say, an
extensible Forth command line? :-)

--Paul Fernhout
http://www.pdfernhout.net/
http://www.beyondajoblessrecovery.org/
====
The biggest challenge of the 21st century is the irony of technologies of
abundance in the hands of those thinking in terms of scarcity.
Antoine van Gelder
2010-07-14 09:49:44 UTC
Permalink
Anyway, in that sense, could FONC be seen as trying from a computing perspective to address "The Pleasure Trap" of technology leading us, counter-productively, to ultimately be drowning in endless make-work code and tons of fancy but unneeded widgets? And where the end result of that pleasure trap of all the eye candy or syntactic sugar is that we are ultimately worse off than if we stuck with simpler systems, like, say, an extensible Forth command line? :-)
Smalltalk, evolutionary psychology and Forth, how can anyone resist? Thank you for a great read Paul!

Quoting from Leo Brodie's excellent "Thinking Forth":

--

We asked Moore (Chuck): "How long should a Forth definition be?"

A word should be a line long. That's the target.

When you have a whole lot of words that are all useful in their own
right -- perhaps in debugging or exploring, but inevitably there's a
reason for their existence -- you feel you've extracted the essence
of the problem and that those words have expressed it.

Short words give you a good feeling.

--

When deep in the practice of wrangling projects with millions of LOC, it is very easy to get stressed out by the questions surrounding a platonic definition of 'simplicity.'

Questions such as "how do we define a downward trajectory?" or "which direction is simple in?" or even "how can we even possibly hope to measure simple?!"

I don't think those questions have an answer as long as we are asking:

"Can we write tools that let us practice the kind of 'Computer Science' or 'Software Engineering' that produces millions of lines of code without ending up with complex code?"

Much like I think there is no answer to the question of:

"How do I change my diet so I can keep eating too much without getting fat?"

On the other hand, this question may well have an answer:

"Can we write tools that reduce the complexity of the models we use to describe the object of our Software?"

Okay, so the answer to that question is usually a blunt:

"What you meant to say is: 'Sir, please may I write some tools...' and the answer is: 'No! You may not!'"

Or to bring it back to t-shirt slogans:

"That bastard Kepler did nothing to solve my epicycle problem."

- antoine


ps. Make no mistake. Paul is right on the money. Even if no-one wants to 'fess up most of us _are_ being measured (implicitly) on SLOC. Our managers may use different words. But the effect is the same. 'cos the cause is the same! :-)
Reuben Thomas
2010-07-14 10:49:38 UTC
Permalink
Post by Antoine van Gelder
Questions such as "how do we define a downward trajectory?" or
"which direction is simple in?" or even "how can we even possibly hope
to measure simple?!"
There is nothing hard about simplification per se. I don't think I've
made a significant contribution to any software project that hasn't
been net negative in LOC. The problem is that this often doesn't make
a significant dent in the overall size (and hence, usually,
complexity) of the project.

However, you're quite right to point to the difficulties of measuring
complexity. What one can intuitively recognise and fix is not
necessarily easy to measure. I can imagine automatic measurements of
several things that are easy to measure, but I've never seen tools
for: average length of function in LOC, average number of function
calls per function, number of callees, number of callers. But things
can get complicated. For example, a long function with few calls might
seem like a classic case of bad programming... or it might be the sort
of thing I call "narrative code" that is typical in initialisation and
shut-down routines, and sometimes pops up elsewhere. In either case,
it is unlikely to be complex *in itself* (in the absence of gotos),
but if it should have been decomposed, then it's imposing complexity
elsewhere.
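
To make the idea concrete, here is a crude sketch of the simplest such
measure (my own throwaway approximation: it treats every top-level
{ ... } block in a C file as a function body and ignores braces inside
strings, comments and macros):

import sys

def block_lengths(path):
    # Return the length in lines of each top-level brace-delimited block.
    depth, start, lengths = 0, None, []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            for ch in line:
                if ch == '{':
                    if depth == 0:
                        start = lineno
                    depth += 1
                elif ch == '}':
                    depth -= 1
                    if depth == 0 and start is not None:
                        lengths.append(lineno - start + 1)
                        start = None
    return lengths

if __name__ == "__main__":
    for path in sys.argv[1:]:
        lengths = block_lengths(path)
        if lengths:
            avg = sum(lengths) / float(len(lengths))
            print("%s: %d top-level blocks, average %.1f lines"
                  % (path, len(lengths), avg))
        else:
            print("%s: no top-level blocks found" % path)

Even something this naive, run over a whole tree, would at least make
the "average length of function" number visible; counting callers and
callees properly would need a real parser.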

The classic case of a little complexity imposed everywhere is the code
base that is too little abstracted. I spent a couple of years working
on a proprietary application that consisted of a core of about 30
kLOC, with a plethora of application modules written on top that
totalled about 1 MLOC. Reading and writing application code made my
head hurt, because doing simple things was hard, but none of the code
was actually that complex (the system was basically a database with
lots of customised forms). But improving an application was a very
small win, as it only improved that one module.

My three main efforts over the time I worked on that system were first
to make the core portable, largely just a matter of removing #ifdefs
and relying on the fact that the systems on which it was built had all
started supporting the same standards since the code was first
written, secondly, to remove dead code (e.g. an old version of a
reporting system, because although the new version was an "optional
extra" that customers paid for, it was itself almost obsolete, and
company records showed that all current customers paid for it), and
thirdly to write a scripting language skin for the application system,
to make it possible to write applications in much less code (very
ill-advised, as it turned out, as I was never given the resources to
train other programmers in its use, so when I left the company had not
only a useless scripting system with extra build- and run-time
dependencies for our system, but an important application (which I
wrote) written in a language that no-one else at the company knew).

"Insufficient abstraction" may be an extreme example of complexity
that's hard to measure, but even kinds like "duplicate code" are
tricky. I recently found, by inspection, a duplicate function that has
been lurking in a code base of ~10 kLOC that I've been rewriting for
years (the text editor GNU Zile; it started off at ~20 kLOC). It is
really only because I'm currently translating the entire application
into another language (from C to Lua) that I was a) reading the code
and b) had enough context in my head to realise that the code I was
looking at was redundant. In fact, I've managed to remove hundreds of
lines of C code during this translation project, and that is despite
the fact that I had already spent probably months trying refactoring
and simplifying the existing code. It is interesting to speculate
whether I would have found these simplifications had I not been
actually translating rather than simply reading the code.
Post by Antoine van Gelder
ps. Make no mistake. Paul is right on the money. Even if no-one wants
to 'fess up most of us _are_ being measured (implicitly) on SLOC.
This seems right. In my case, this was expressed as concern that I was
not producing enough application code (my assigned work) and was
instead spending my time re-engineering the code (what the company
actually needed, in my view, but then my view was biased as it was far
more interesting). This despite the fact that more than one of my bits
of re-engineering saved serious corporate bacon.

However, the underlying cause is that the company was directly earning
revenue from developments to the product (new applications). I
theorise that even sticking with SLOC as a primary productivity
measure, had they actually been able to measure my effect on the other
programmers' productivity, I might've got away with doing what I did.

Our customers had the same problem: easy to say "we need a new app
that does this"; much harder to say "we need to improve the way we do
that". The cultural factors at work are all too obvious. Since there's
an obvious link between new features and SLOC, the latter's tyranny
follows.
--
http://rrt.sc3d.org
Max OrHai
2010-07-08 16:10:12 UTC
Permalink
I think Ryan has best articulated what it's all about for me anyway:
"regaining control of our technology". Simplicity and clarity are, to some
extent, their own imperative. That's nothing new: Occam's Razor has long
been the dominant aesthetic in mathematics and the natural sciences at
least. In a world such as ours where all human endeavors are increasingly
influenced (often unintentionally) by technological concerns, I feel it as a
moral imperative as well.

A computer is a necessary tool for engaging with the modern world
of human knowledge and culture. A truly personal computer should be fully
understandable and extensible, inside and out, by its individual users,
without the users having to devote a disproportionate amount of effort to
this understanding. If these users thereby become more "productive", that's
great too, but I don't think that's the major goal.

-- Max
Post by Ryan Mitchley
I would imagine that the goals align with the task of "augmenting human
intellect", to borrow Engelbart's phrase.
The STEPS project, in particular, seems concerned with compact
representations that approach the entropies of the systems being
simulated. Computing, to me, anyway, is very closely linked to
simulation. A compact representation is (hopefully) easier to understand,
thus making it suitable for educational purposes. However, it should
also be more computationally efficient, as well as enabling greater
productivity.
I think it's also about regaining control of our technology. A modern
computer system is composed of layer upon layer of ad hoc mechanics,
short on architecture and long on details. There are few people who have
a truly good understanding of the complete system from firmware to UI,
including all the details in between, and it's not because the details
are fundamentally complex - they simply involve huge amounts of rote
learning. Something like Linux has grown somewhat organically, without
any of the robustness that organic growth might imply.
Given concerns about security and privacy - not to mention demonstrable
correctness of operation - an easily decomposable, understandable system
is hugely desirable. There should be bonus side effects, such as running
well on lightweight mobile devices.
I hope to see computing systems becoming vehicles for training
intelligent agents that assist human endeavours - by automating menial
tasks, freeing humans to concentrate on more interesting problems, while
also leveraging the abilities that are trivial for computers, but hard
for humans (large scale data processing, correlation and statistical
analysis, particle simulation, etc.). I also hope to see more of the
abilities that have traditionally been described as A.I. entering
mainstream computation (goal-seeking behaviour, probabilistic reasoning).
Disclaimer: http://www.peralex.com/disclaimer.html
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
spir
2010-07-08 17:03:19 UTC
Permalink
On Thu, 8 Jul 2010 09:10:12 -0700
Post by Max OrHai
A computer is a necessary tool for engaging with the modern world
of human knowledge and culture. A truly personal computer should be fully
understandable and extensible, inside and out, by its individual users,
without the users having to devote a disproportionate amount of effort to
this understanding. If these users thereby become more "productive", that's
great too, but I don't think that's the major goal.
(devil's advocate)
That's precisely why I wash my clothes by hand.
(my competence in mechanics, electrotechnics & thermodynamics is far too limited to fully master a washing machine -- not to speak of embedded digital control systems)

Denis
________________________________

vit esse estrany ☣

spir.wikidot.com
Mark Haniford
2010-07-08 17:52:38 UTC
Permalink
I agree with the whole mental masturbation thing. Unless something
is produced that actually increases productivity, it's been a waste
of time. Frankly, I don't see anything substantial ever coming out
of this project. It's just an academic exercise. Sorry for the
harshness.
Post by BGB
(pardon the top-post)
granted, I probably don't speak for others here, who may have differing
opinions, I just speak for myself...
I am not formally involved with the project in question here, but work on
some of my own stuff in a similar domain (VM and compiler technology).
well, that is the question sometimes...
but, anyways, being useful is the real eventual goal in anything.
otherwise, what does it amount to besides mental masturbation and people
congratulating their own ego / "intellect" (I have seen this before, mostly
in math and physics circles, like lacking any real value in what they are
doing, they praise themselves over how "intelligent" or "insightful" they
are vs everyone else... IMO, this is distasteful and serves no real
purpose...).
granted, one congratulating their own efforts isn't much better...
essentially, it is like one bowing before an idol made by their own hands.
(it is at least meaningful if one can do something and admit openly that it
is all a big pile of crap...).
now, as for useful to who?... maybe that is another part of the question.
maybe if my stuff is at least useful to myself, that is a starting point,
but even this is a difficult matter sometimes. if something can be useful to
others, this is better, or generally improving matters as a whole, that is
better still.
personally though, I see little need to "reinvent" the world, more just a
matter of fixing all these little problems that keep popping up, and maybe
adding a few more tools to the toolbox.
it is notable how much even seemingly trivial matters, like having a tool to
automatically write ones' C & C++ headers for them, ... can make to the
overall programming experience. like, before, there is this seemingly
endless annoyance of having to add prototypes for any new functions into
their headers, and a simple tool (of maybe < 500 loc), can cause this matter
to almost entirely disappear.
some big elaborate "solution" may really do little against these problems,
as what we have is not so much monumental problems, so much as they are
monuments of pebbles. some grand scheme will not necessarily move such a
mountain, but something as simple as a shovel might just do the job.
and with some amount of shoveling, one may just end up moving a mountain...
much like the annoyance of how people say things are "impossible", when
really, they are far from being impossible, but maybe they are a little bit
of effort.
it is like, doing dynamic compilation (like, eval and similar) in C, or
adding many reflection type features.
there is no "magic algorithm" to make this work, but an "ugly mess of code"
pulls it all off fairly well.
likewise goes for more established technologies, like GC, dynamic types, and
lexical closures.
as can be said, "just do it...".
or, at least, this is just my opinion on the matter...
others may feel free to disagree or offer alternate opinions...
Sent: Thursday, July 08, 2010 1:34 AM
Subject: [fonc] goals
What do the folks here see as the goals of "new computing"?
Is it to find ways to use technology to help people be more productive?
Is it more about education? Is it about maximizing MIPS/Watt? Something else entirely?
My impression (which may be wrong) is that most of we think of in retrospect
as the really great stuff (PARC, Sutherland and Doug Engelbart's group) was
born from environments with goals of increasing productivity of real labor.
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Antoine van Gelder
2010-07-08 17:05:26 UTC
Permalink
Post by Steve Dekorte
What do the folks here see as the goals of "new computing"?
Is it to find ways to use technology to help people be more productive?
Is it more about education? Is it about maximizing MIPS/Watt? Something else entirely?
My impression (which may be wrong) is that most of we think of in retrospect as the really great stuff (PARC, Sutherland and Doug Engelbart's group) was born from environments with goals of increasing productivity of real labor.
From what I have read, Doug Engelbart was more interested in using
computers to enable folk to think smarter than in any increase in labor
productivity :)

I like how it is phrased on the Doug Engelbart Institute website:

"As he saw it, both the rate and the scale of change are increasing
very rapidly worldwide, and we as a people must get that much faster
and smarter at anticipating, assessing, and responding to important
challenges collectively if we are to stay ahead of the curve, and
thrive as a planet. In other words, we must get faster and smarter
at boosting our Collective IQ."

-- http://dougengelbart.org/

- antoine