Discussion:
[fonc] Task management in a world without apps.
Casey Ransberger
2013-10-31 15:29:34 UTC
Permalink
A fun, but maybe idealistic idea: an "application" of a computer should just be what one decides to do with it at the time.

I've been wondering how I might best switch between "tasks" (or really things that aren't tasks too, like toys and documentaries and symphonies) in a world that does away with most of the application level modality that we got with the first Mac.

The dominant way of doing this with apps usually looks like either the OS X dock or the Windows 95 taskbar. But if I wanted less shrink wrap and more interoperability between the virtual things I'm interacting with on a computer, without forcing me to "multitask" (read: do more than one thing at once, very badly), what would my best possible interaction language look like?

I would love to know if these tools came from some interesting research once upon a time. I'd be grateful for any references that can be shared. I'm also interested in hearing any wild ideas that folks might have, or great ideas that fell by the wayside way back when.

Out of curiosity, how does one change one's "mood" when interacting with Frank?

Casey
David Barbour
2013-10-31 15:58:44 UTC
Permalink
Instead of 'applications', you have objects you can manipulate (compose,
decompose, rearrange, etc.) in a common environment. The state of the
system, the construction of the objects, determines not only how they
appear but how they behave - i.e. how they influence and observe the world.
Task management is then simply rearranging objects: if you want to turn an
object 'off', you 'disconnect' part of the graph, or perhaps you flip a
switch that does the same thing under the hood.

This has very physical analogies. For example, there are at least two ways
to "task manage" a light: you could disconnect your lightbulb from its
socket, or you could flip a lightswitch, which opens a circuit.
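To make this concrete, here's a minimal sketch in Python (the names are illustrative only, not from any real system): behavior flows through a graph of connected objects, and both ways of turning the light "off" are just edits to that structure.

    class Bulb:
        def send(self, signal):
            print("light:", signal)

    class Switch:
        """Passes signals downstream only while the circuit is closed."""
        def __init__(self, downstream):
            self.downstream = downstream  # None means "bulb removed from socket"
            self.closed = True

        def send(self, signal):
            if self.closed and self.downstream is not None:
                self.downstream.send(signal)

    lamp = Switch(Bulb())
    lamp.send("on")         # light: on
    lamp.closed = False     # flip the switch: open the circuit
    lamp.send("on")         # nothing happens
    lamp.closed = True
    lamp.downstream = None  # disconnect part of the graph: unscrew the bulb
    lamp.send("on")         # nothing happens

Either edit is "task management"; neither requires a notion of an application.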

There are a few interesting classes of objects, which might be described as
'tools'. There are tools for your hand, like different paintbrushes in
Paint Shop. There are also tools for your eyes/senses, like a magnifying
glass, x-ray goggles, heads-up display, events notification, or language
translation. And there are tools that touch both aspects - like a
projectional editor, lenses. If we extend the user-model with concepts like
'inventory', and programmable tools for both hand and eye, those can serve
as another form of task management. When you're done painting, put down the
paintbrush.

This isn't really the same as switching between tasks. I.e. you can still
get event notifications on your heads-up-display while you're editing an
image. It's closer to controlling your computational environment by direct
manipulation of structure that is interpreted as code (aka live
programming).

Best,

Dave
Alan Kay
2013-10-31 16:31:02 UTC
Permalink
It's worth noting that this was the scheme at PARC and was used heavily later in Etoys.

This is why Smalltalk has unlimited numbers of "Projects". Each one is a persistent environment that serves both as a place to make things and as a "page" of "desktop media".

There are no apps, only objects and any and all objects can be brought to any project which will preserve them over time. This avoids the stovepiping of apps. Dan Ingalls (in Fabrik) showed one UI and scheme to integrate the objects, and George Bosworth's PARTS system showed a similar but slightly different way.

Also there is no "presentation app" in Etoys, just an object that allows projects to be put in any order -- and there can be many such orderings, all preserved -- and there is an object that will move from one project to the next as you give your talk. "Builds" etc. are all done via Etoy scripts.
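A rough sketch of that mechanism, in Python rather than Smalltalk (purely illustrative -- Etoys itself differs in detail): the "presentation" is just an ordinary object holding one of many possible orderings over the same projects, with a cursor that walks them.

    class Project:
        """A persistent environment; also a "page" of desktop media."""
        def __init__(self, name):
            self.name = name
            self.objects = []   # any object brought here is preserved over time

    class Presentation:
        """One ordering over projects; many orderings can coexist."""
        def __init__(self, ordering):
            self.ordering = ordering
            self.index = 0

        def next_project(self):
            self.index = min(self.index + 1, len(self.ordering) - 1)
            return self.ordering[self.index]

    intro, demo, summary = Project("intro"), Project("demo"), Project("summary")
    talk = Presentation([intro, demo, summary])
    lightning_talk = Presentation([demo, summary])  # same projects, another order
    print(talk.next_project().name)                 # demo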

This allows the full power of the system to be used for everything, including presentations. You can imagine how appalled we were by the appearance of Persuasion and PowerPoint, etc.

Etc.

We thought we'd done away with both "operating systems" and with "apps" but we'd used the wrong wood in our stakes -- the vampires came back in the 80s.

One of the interesting misunderstandings was that Apple and then MS didn't really understand the universal viewing mechanism (MVC), so they thought views with borders around them were "windows" and views without borders were part of "desktop publishing", but in fact all were the same. The Xerox Star confounded the problem by reverting to a single desktop and apps, and missed the real media possibilities.

They divided a unified media world into two regimes, neither of which is very good for end-users.

Cheers,

Alan
Chris Warburton
2013-10-31 17:37:47 UTC
Permalink
Post by Alan Kay
One of the interesting misunderstandings was that Apple and then MS
didn't really understand the universal viewing mechanism (MVC) so they
thought views with borders around them were "windows" and views without
borders were part of "desktop publishing", but in fact all were the
same.
When we design an environment/framework, there are always tradeoffs
to make when deciding what capabilities to include in the medium. A
common problem is capabilities becoming obsolete and being worked
around: for example, many filesystems have provided metadata facilities
over the years, but these have all hit limits that end up being worked
around by storing metadata in files, making the FS unnecessarily
complex. Another problem is restricting the technology which can be used
by clients; for example, browsers will only run Javascript, which made
them 'toys' for many years in the eyes of C/C++/Java programmers.

Unfortunately, a big factor is also the first-to-market pressure,
otherwise known as 'Worse Is Better': you can reduce the effort required
to implement a system by increasing the effort required to use it. The
classic example is C vs LISP, but a common one these days is
multithreading vs actors, coroutines, etc.

In the case of an OS, providing a dumb box to draw on is much easier
than a complete, complementary suite of MVC/Morphic/etc. components,
even though developers are forced to implement their own incompatible
integration layers, if they bother at all.

This is why I'm not a fan of HTML5 canvas, since it's a dumb box which
strips away the precious little semantics the Web has, and restricts
mashups to little more than putting existing boxes next to each other.

Cheers,
Chris

PS: I spent one summer living in Etoys on my OLPC XO-1, creating physics
simulations. It's a very nice system once you get used to it. It's one
thing to drag and drop tiles to make a scribbled picture start spinning;
it's quite another to make the tiles themselves start spinning :)

PPS: I keep meaning to pull those simulations off my XO and upload
them to squeakland. Unfortunately I reached a point where they maxed out
the RAM, so I couldn't finish them :(
Reuben Thomas
2013-10-31 17:47:00 UTC
Permalink
…many filesystems have provided metadata facilities
over the years, but these have all hit limits which end up being worked
around by storing metadata in files, making the FS unnecessarily
complex.
ReiserFS, from at least version 3, implemented extended attributes (xattrs)
as directories, meaning that there were no arbitrary limits. I never
understood why other FSes didn't do the same, as it surely makes the code
simpler than having a special fixed-size implementation.

However, having thought about ReiserFS, the idea of xattrs itself seems
pretty odd, because what is data and what is metadata can depend on
context; plus, of course, you're never going to get all the metadata out
of files and into FS attributes. And it seems odd to have two different
fundamental APIs (the regular file API and the xattr API) to manipulate
different instances of the same on-disk representation (files in directories).
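For concreteness, the two parallel APIs look like this in Python on Linux (os.setxattr/os.getxattr are Linux-only, and unprivileged attributes must live under the "user." namespace):

    import os

    path = "notes.txt"
    with open(path, "w") as f:
        f.write("the data itself")               # data: regular file API

    os.setxattr(path, "user.author", b"reuben")  # metadata: a second, parallel API
    print(open(path).read())                     # the data itself
    print(os.getxattr(path, "user.author"))      # b'reuben'

Both calls attach small chunks of bytes to the same on-disk object, yet they go through two unrelated interfaces.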
David Leibs
2013-10-31 17:50:29 UTC
Permalink
Hi Chris,
I get your point, but I have really grown to dislike the phrase "Worse is Better". Worse is never better. Worse is always worse, and worse never reduces to better under any set of natural rewrite rules. Yes, there are advantages in the short term to being first to market, and things that are worse can have more mindshare in the arena of public opinion.

"Worse is Better" sounds like some kind of apology to me.

cheers,
-David Leibs
Post by Chris Warburton
Unfortunately, a big factor is also the first-to-market pressure,
otherwise known as 'Worse Is Better': you can reduce the effort required
to implement a system by increasing the effort required to use it. The
classic example is C vs LISP, but a common one these days is
multithreading vs actors, coroutines, etc.
David Barbour
2013-10-31 18:10:44 UTC
Permalink
The phrase "Worse is better" involves an equivocation - the 'worse' and
'better' properties are applied in completely different domains (technical
quality vs. market success). But, hate it or not, it is undeniable that
"worse is better" philosophy has been historically successful.
David Leibs
2013-10-31 18:16:05 UTC
Permalink
In the spirit of equivocation: when I look at the world we live in and note the trends, I feel worse, not better.

-David Leibs
The phrase "Worse is better" involves an equivocation - the 'worse' and 'better' properties are applied in completely different domains (technical quality vs. market success). But, hate it or not, it is undeniable that "worse is better" philosophy has been historically successful.
Hi Chris,
I get your point but I have really grown to dislike that phrase "Worse is Better". Worse is never better. Worse is always worse and worse never reduces to better under any set of natural rewrite rules. Yes there are advantages in the short term to being first to market and things that are worse can have more mindshare in the arena of public opinion.
"Worse is Better" sounds like some kind of apology to me.
cheers,
-David Leibs
Post by Chris Warburton
Unfortunately, a big factor is also the first-to-market pressure,
otherwise known as 'Worse Is Better': you can reduce the effort required
to implement a system by increasing the effort required to use it. The
classic example is C vs LISP, but a common one these days is
multithreading vs actors, coroutines, etc.
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
David Barbour
2013-10-31 18:40:43 UTC
Permalink
It can be depressing, certainly, to look at the difference between "where
we are" and "where we could be, if we weren't short-sighted and greedy".
OTOH, if you look at "where we are" vs. "where we were", I think you can
find a lot to be optimistic about. FP and types have slowly wormed their
way into many PLs. Publish-subscribe is gaining mindshare. WebRTC, HTML
Canvas, WebSockets, etc. have finally resulted in a widespread VM people
are actually willing to use (even if it could be better).
Chris Warburton
2013-11-01 10:32:28 UTC
Permalink
Post by David Leibs
"Worse is Better" sounds like some kind of apology to me.
I don't see it as an apology; I use it as an insult. To me, "worse is
better" is the answer to questions like "If SomeLanguage is so much
better than C, why is everyone still using C?".

The reason so many people use C is because so many people use C (and the
same goes for Windows, multithreading, HTML, x86, etc.); there's a
feedback loop. To describe something as "Worse is Better", we're
basically saying that its feedback loop is more powerful than others'.

One acceptable reason for this is that it got there first, which may
mean it's lacking subsequent improvements, but that's not the end of the
world. That's how progress is made, after all.

One (IMHO) unacceptable reason is incompatibility. "Embrace, Extend,
Extinguish" is an example, where product A is open to compatibility but
product B is not; B can offer compatibility with A, but A cannot offer
compatibility with B, so everyone switches to B as the 'most compatible'.

Another (IMHO) unacceptable reason is a deceptively low barrier to
entry. As an example, in a course I took at university, multithreading
in Java was introduced as 'just writing a class with a "run"
method'. This is deceptive, since multithreading invalidates all kinds
of assumptions which were safe to make in single-threaded Java
code. This can make one technology look simpler and cheaper to invest in
than another, when actually it has a large cost further down the
line. By that point a project may be irrevocably invested in that
technology.
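A sketch of the same trap in Python rather than Java (the sleep(0) just forces a scheduling point so the hazard shows up reliably): starting threads is one line, but it silently invalidates the single-threaded assumption that a read-modify-write is atomic.

    import threading, time

    counter = 0

    def work():
        global counter
        for _ in range(1000):
            tmp = counter       # read...
            time.sleep(0)       # ...yield, so another thread reads the same value...
            counter = tmp + 1   # ...write back, silently losing increments

    threads = [threading.Thread(target=work) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)              # expected 4000; typically far less without a lock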

None of these are features, but they do explain why it's hard to replace
incumbents. Hence if a technology's most compelling feature is 'Worse is
Better', that's basically saying it has no compelling features.

Cheers,
Chris
Chris Warburton
2013-11-01 10:37:40 UTC
Permalink
Post by Chris Warburton
This is why I'm not a fan of HTML5 canvas, since it's a dumb box which
strips away the precious little semantics the Web has, and restricts
mashups to little more than putting existing boxes next to each other.
There is "worse is better", but there also is "less is more".
Josh Grams
2013-11-01 10:17:44 UTC
Permalink
My dislike of canvas is that it arrogantly presumes that a user agent
has a display to be formatted.
I used to develop a CMS called ocPortal which got lots of praise from
its blind users.
OK, now I'm completely lost. Isn't canvas primarily for graphical
applications (videogames, drawing stuff, etc.) where it would be
impossible to make it usable for a blind person anyway? I mean,
sure, people will abuse it and use it for other things, but that's
true of any technology; there's no way around that, is there?

What am I missing?

--Josh
Tom Novelli
2013-11-03 17:07:40 UTC
Permalink
Post by Josh Grams
OK, now I'm completely lost. Isn't canvas primarily for graphical
applications (videogames, drawing stuff, etc.) where it would be
impossible to make it usable for a blind person anyway?
What am I missing?
Take it from a game developer - you're not missing anything. Canvas is for
real-time raster graphics; that's it. Bypassing the web's usual
structure/layout is its whole purpose.

-Tom
David Barbour
2013-10-31 18:36:04 UTC
Permalink
Alan,

I appreciate the peek into history! I had to look up Fabrik and PARTS. I
love the idea of running presentations as live coding; in fact, I shall
endeavor to do so for any talks I give regarding my own system.

Smalltalk has a lot of good ideas, but they're sometimes mixed with
not-so-great ideas and difficult to separate. Even today, the idea of
"applications as objects in the IDE" gives results in a knee-jerk rejection
response from many people who fear a tight coupling ("to share the app, I
need to share the whole IDE!") based largely on Smalltalk's example.
Language-layer security and an alternative state model could address this
issue, enabling easy decoupling of behavior from environment. Similarly,
MVC has several properties that I believe have been more harmful than
helpful. Models in MVC systems are neither compositional nor open, and
controllers were decoupled from views, which hinders direct manipulation and
physical metaphors. More modern variations such as MVVM are improvements,
but they're still a long way from collaborative projectional editors or
spreadsheets.

But the good ideas should be preserved, separated from the chaff, reused in
new contexts. It's interesting to pick apart history and hypothesize why
various good ideas have failed to gain traction.

Best,

Dave
karl ramberg
2013-11-03 11:18:31 UTC
Permalink
One issue with instance-based development in Squeak is that it is quite
fragile. It is easy to pull the building blocks apart, and then it all
falls down like a house of cards.

It's currently hard to work on different parts and version them
independently of the rest of the system. All parts are versioned with the
whole project.

It is also quite hard to reuse separate parts and share them with others.
As it stands, you must share a whole project and pull out the parts you want.

I look forward to using more rugged tools for instance programming/creation :-)

Karl
Alan Kay
2013-11-03 12:11:15 UTC
Permalink
When I mention Smalltalk I always point to the Smalltalk of 40 years ago, because it was then that the language and its implementation were significant. It was quite clear by the late 70s that many of the compromises (some of them wonderfully clever) that were made in order to run on the tiny machines of the day were not going to scale well.

It's worth noting that both the "radical" desire to "burn the disk packs", *and* the "sensible" desire to use "powers that are immediately available" make sense in their respective contexts. But we shouldn't confuse the two desires. I.e. if we were to attempt an ultra high level general purpose language today, we wouldn't use Squeak or any other Smalltalk as a model or a starting place.

Cheers,

Alan
karl ramberg
2013-11-04 21:47:46 UTC
Permalink
I guess as you build more and more capable systems, it becomes harder and
harder to burn the disk packs: you redo your work several times over
to get virtually no further. But once you get far enough ahead that your
new system outpaces all the previous ones, it will take on a life of its own.

It seems many of Smalltalk's advances have been distributed around, so many
systems are of much the same capability now, but it's hard to see another
system that is pushing far into the next level.

Well, that is probably mostly because I'm pretty oblivious, but also
because the computer science field is much more complex now and the
problems are orders of magnitude harder to solve.

That said, Smalltalk-80 has pretty nice readability, and that counts for
a lot in my book :-)

Best regards,
Karl
Loup Vaillant-David
2013-11-05 00:00:29 UTC
Permalink
Post by Alan Kay
if we were to attempt an ultra high level general purpose language
today, we wouldn't use Squeak or any other Smalltalk as a model or a
starting place.
May I ask what would be an acceptable starting point? Maru, maybe?

Loup.
Alan Kay
2013-11-05 04:11:37 UTC
Permalink
Each to their own, but we have always started with 10-20 example cases that we'd like to be really "well fitted" and "nice", plus a few possible "powerful principles". I.e., "expressiveness" is usually the main aim. If this seems to be promising, then there are lots of ways to approach fast-enough implementation (including making new hardware or using FPGAs, etc.)

Cheers,

Alan
Miles Fidelman
2013-11-05 13:15:03 UTC
Permalink
Post by Casey Ransberger
I would love to know if these tools came from some interesting research
once upon a time. I'd be grateful for any references that can be shared.
I'm also interested in hearing any wild ideas that folks might have, or
great ideas that fell by the wayside way back when.
For a short time, there was OpenDoc - which really would have turned the
application paradigm on its head. Everything you interacted with was an
object, with methods incorporated into its "container." E.g., if you
were working on a "document," there was no notion of a word processor,
just the document with embedded methods for interacting with it.

Miles Fidelman
--
In theory, there is no difference between theory and practice. In
practice, there is. .... Yogi Berra
BGB
2013-11-05 18:22:21 UTC
Permalink
Post by Miles Fidelman
For a short time, there was OpenDoc - which really would have turned
the application paradigm on its head. Everything you interacted with
was an object, with methods incorporated into its "container."
A while ago I started writing (but didn't finish, at least not to a
level I would want to send) about the relationship between object-based
and dataflow-based approaches to modular systems, where in both cases the
"application" could be largely dissolved in favor of interacting
components and "generic" UIs.

But the line gets kind of fuzzy. What people often call "OOP" actually
covers several distinct sets of methodologies, and people so often focus
on lower-level aspects (class vs. not-a-class, inheritance trees, ...)
that there is a tendency to overlook higher-level aspects, like whether
the system is composed of objects interacting by passing messages through
certain interfaces, or whether it works on a data stream where the
objects don't really interact at all, and instead produce and consume
data in a set of shared representations.


Then there is the "bigger" issue from an architectural POV, namely: can
App A access anything from within App B, short of both developers having
access to each other's source code and the ability to hack on it (and,
often, to get the thing rebuilt from source)?

So we have some problems:
lack of shared functionality (beyond what has explicitly been made into
shared libraries or similar);
frequent inability to add new functionality to existing apps (or "UIs"),
short of having access to their source code and the ability to modify it
for one's uses;
lots of software that is a PITA to get to rebuild from source (*1);
...

*1: especially in GNU land, where they pride themselves on freely
available source, but the ever-present GNU Autoconf system has problems:
it very often has a tendency not to work;
it is often annoyingly painful to get it to work when it has decided it
doesn't want to;
very often developers set rather arbitrary restrictions in a project's
build-probing, like "must have exactly this version of this library to
build", even when the project will often still build and work with later
(and earlier) versions of the library;
...

It is sad, in principle, that hard-coded Visual Studio projects and raw
Makefiles are often easier to get working when things don't go "just
right". That, and I recently managed to get on the bad side of some
developers of a FOSS GPL project by building part of it using MSVC (for
plugging some functionality into the app); in this case it was the path
of least effort (the other code I was using with it was already being
built with MSVC, and I couldn't get the main project to rebuild from
source via the "approved" routes anyway).

Weirder yet, some of the better development experiences I have had have
been developing extensions for closed-source commercial projects
(without any ability to see their source code or, for that matter, even
worthwhile API documentation), which "should not be".


Not that I think these problems are unsolvable, but maybe the
"spaghetti string mess" that is GNU-land at present isn't really an
ideal solution. There might be a need to address "general architectural
issues" (provide solid core APIs, ...) rather than just daisy-chaining
everything in a somewhat ad-hoc manner.


But, as an assertion: with increasing modularity, the ability to share
functionality between apps, and the ability to extend preexisting apps
with new functionality (via plugin-like interfaces), the significance of
big monolithic applications could go into decline (things become less
app-centric and more task-centric).

Granted, it would also require a shift in focus: rather than being
simply a user of APIs and resources, an application would instead need
to be a "provider" of an interface through which other components can
reuse parts of its functionality and APIs, ideally with enough
decoupling that neither the component nor the application needs to be
directly aware of the other.

So the component exports a service; the application uses the service;
and the two seemingly "merge", using the functionality provided by one
of them and the UI of the other.

sort of like codecs on Windows: you don't have to go write a plugin for
every app that uses media (or, worse, hack on their code), nor does
every media player or video-editing program have to be aware of every
possible codec or media container format. they seemingly "just work":
you install the appropriate drivers and it is done.
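
as a rough sketch of the sort of decoupling I mean, here is roughly what
the "component exports a service, application uses the service" pattern
can look like (all names here are hypothetical, not from any real codec
API; a real system would hang the registry off the OS or a COM-style
broker and load drivers as DLLs, rather than using a global in one
process):

    /* sketch: a decoupled codec registry. the driver registers itself,
     * the app looks codecs up by FOURCC, and neither needs to know the
     * other exists. */
    #include <stdio.h>
    #include <string.h>

    typedef struct VidCodec_s VidCodec;
    struct VidCodec_s {
        const char *fourcc;     /* format tag, e.g. "THRA" */
        int (*decode)(const void *src, int len, void *dst);
        VidCodec *next;
    };

    static VidCodec *vid_codecs = NULL;     /* the shared registry */

    /* driver side: called when the codec driver gets loaded */
    void VidRegisterCodec(VidCodec *codec) {
        codec->next = vid_codecs;
        vid_codecs = codec;
    }

    /* app side: any media player/editor just asks by format tag */
    VidCodec *VidLookupCodec(const char *fourcc) {
        VidCodec *cur;
        for (cur = vid_codecs; cur; cur = cur->next)
            if (!strcmp(cur->fourcc, fourcc))
                return cur;
        return NULL;    /* unknown format */
    }

    /* a stub driver, standing in for a real decoder */
    static int ThraDecode(const void *src, int len, void *dst)
        { (void)src; (void)len; (void)dst; return 0; }
    static VidCodec thra_codec = { "THRA", ThraDecode, NULL };

    int main(void) {
        VidRegisterCodec(&thra_codec);  /* normally in the DLL entry point */
        printf("THRA %s\n", VidLookupCodec("THRA") ? "found" : "missing");
        return 0;
    }

the point being: install a new driver, and every app that goes through
the registry picks it up for free, with neither side knowing about the
other.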

the GNU-land answer tends more to be "we have FFmpeg / libavcodec and
VLC Media Player", then lots of stuff is built by building lots of
things on top of these, which isn't quite the same thing (you need to
deal with one or both to add a new codec, hacking code into and
rebuilding libavcodec, or modifying or plugging into VLC to add support,
...).


never mind that writing things like codec drivers for Windows isn't such
a great experience either.

stuff "could be better" in any case...


or such...
Chris Warburton
2013-11-06 09:55:24 UTC
Permalink
Post by BGB
it is sad, in principle, that hard-coded Visual Studio projects and raw
Makefiles are often easier to get working when things don't go "just
right". well, that, and one time recently I apparently managed to get on
the bad side of some developers of a FOSS GPL project by building part
of it using MSVC (for plugging some functionality into the app); in this
case it was the path of least effort (the other code I was using with it
was already being built with MSVC, and I couldn't get the main project
to rebuild from source via the "approved" routes anyways, ...).
weirder yet, some of the better development experiences I have had
have been in developing extensions for closed-source commercial
projects (without any ability to see their source-code, or for that
matter, even worthwhile API documentation), which "should not be".
This is probably an artifact of using Windows (I assume you're using
Windows, since you mention Windows-related programs). Unfortunately
building GNU-style stuff on Windows is usually an edge case; many *NIX
systems use source-level packages (ports, emerge, etc.) which forces
developers to make their stuff build on *NIX. If a GNU-style project
even works on Windows at all, most people will be grabbing a pre-built
binary (programs as executables, libraries as DLLs bundled with whatever
needs them).

It's probably a cultural thing too; you find GNU-style projects awkward
on your Microsoft OS, I find Microsoft-style projects awkward on my GNU
OS.
Post by BGB
rather than an application being simply a user of APIs and resources,
it would instead need to be a "provider" of an interface for other
components to reuse parts of its functionality and APIs, ideally with
some decoupling such that neither the component nor the application
need to be directly aware of each other.
Sounds a lot like Web applications. I've noticed many projects using
locally-hosted Web pages as their UI; especially with languages like
Haskell where processing HTTP streams and automatically generating
Javascript fits the language's style more closely than wrapping an OOP
UI toolkit like GTK.
Post by BGB
sort of like codecs on Windows: you don't have to go write a plugin for
every app that uses media (or, worse, hack on their code), nor does
every media player or video-editing program have to be aware of every
possible codec or media container format. they seemingly "just work":
you install the appropriate drivers and it is done.
the GNU-land answer tends more to be "we have FFmpeg / libavcodec and
VLC Media Player", then lots of stuff is built by building lots of
things on top of these, which isn't quite the same thing (you need to
deal with one or both to add a new codec, hacking code into and
rebuilding libavcodec, or modifying or plugging into VLC to add
support, ...).
Windows tends to be "we have DirectFoo", then lots of stuff is built by
building lots of things on top of these ;) Note that there are projects
like GStreamer which are designed to be unified interfaces to
libavcodec/ffmpeg/etc; however, they seem to have limitations since
there are now meta-interfaces like Phonon which sit on top of
GStreamer/VLC/mplayer!

Reminds me of the datatypes system in AmigaOS, which was a mechanism for
handling arbitrary file types. The floppy disks/CDs attached to
magazines usually contained a bunch of these (eg. PNG, GIF, etc.) which
could be dragged into the Workbench:Datatypes drawer and existing
programs would magically be able to access files of those types.

Well, that was the theory; many applications ended up bypassing the
system, probably due to lowest-common-denominator effects (ie. generic
access to "images" provides very little control over each particular
format's features). I've only started programming since being on Linux,
so that's pure speculation on my part. Would serve as another example of
environment-provided features which become too limiting for
applications, just like filesystems' embedded metadata.

Cheers,
Chris
BGB
2013-11-09 05:06:44 UTC
Permalink
Post by Chris Warburton
Post by BGB
it is sad, in principle, that hard-coded Visual Studio projects and raw
Makefiles are often easier to get working when things don't go "just
right". well, that, and one time recently I apparently managed to get on
the bad side of some developers of a FOSS GPL project by building part
of it using MSVC (for plugging some functionality into the app); in this
case it was the path of least effort (the other code I was using with it
was already being built with MSVC, and I couldn't get the main project
to rebuild from source via the "approved" routes anyways, ...).
weirder yet, some of the better development experiences I have had
have been in developing extensions for closed-source commercial
projects (without any ability to see their source-code, or for that
matter, even worthwhile API documentation), which "should not be".
This is probably an artifact of using Windows (I assume you're using
Windows, since you mention Windows-related programs). Unfortunately
building GNU-style stuff on Windows is usually an edge case; many *NIX
systems use source-level packages (ports, emerge, etc.) which forces
developers to make their stuff build on *NIX. If a GNU-style project
even works on Windows at all, most people will be grabbing a pre-built
binary (programs as executables, libraries as DLLs bundled with whatever
needs them).
It's probably a cultural thing too; you find GNU-style projects awkward
on your Microsoft OS, I find Microsoft-style projects awkward on my GNU
OS.
primarily Windows, but often getting Linux apps rebuilt on Linux doesn't
go entirely smoothly either, like trying to get VLC Media Player rebuilt
from source on Ubuntu, and having some difficulties mostly due to things
like library version issues.


granted, generally it does work a little better: at least "most of the
time" the provided configure scripts won't blow up in one's face, in
contrast to Windows and Cygwin or similar, where "most of the time"
things will fail to build (short of a fair amount of manual
intervention and hackery...).

and, a few times, I have just found it easier to port the things over
to building with MSVC (and "fix up" any cases where the code tries to
use GCC-specific functionality).
Post by Chris Warburton
Post by BGB
rather than an application being simply a user of APIs and resources,
it would instead need to be a "provider" of an interface for other
components to reuse parts of its functionality and APIs, ideally with
some decoupling such that neither the component nor the application
need to be directly aware of each other.
Sounds a lot like Web applications. I've noticed many projects using
locally-hosted Web pages as their UI; especially with languages like
Haskell where processing HTTP streams and automatically generating
Javascript fits the language's style more closely than wrapping an OOP
UI toolkit like GTK.
can't say; I haven't done much with web apps.

I was thinking more of some of the (limited) experience I have had with
things like driver development.

like, there is some bit of hair in the mix (handling event messages
and/or dealing with COM+), but in some ways there is a subtler
"elegance" in seeing one's code "just work" in a lot of various apps
without having to mess with anything in those apps to make it work.


but, in general, I try to write code that minimizes dependencies on
"uncertain" code or functionality.

a few times I have guessed wrong, such as allowing my codec code to
depend directly on my project's VFS and MM/GC system, and later ending
up hacking over it a little to make the thing operate as a
self-contained codec driver.

this doesn't necessarily mean shunning any functionality outside of
one's control (in a "Not Invented Here" sense); rather, it involves some
level of "routing" such that functionality can enable or disable itself
depending on whether or not the required functionality is
available (for example, a Linux build of a program can't use
Windows-specific APIs, ..., but it sort of misses out if it can't use
them when built for Windows simply because they are not also available
for a Linux build).
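
a trivial sketch of the sort of "routing" I mean (the feature itself is
made up; the enable/disable mechanics are the point):

    /* sketch: a feature routes itself based on what is available.
     * compile-time: the Windows-only path simply doesn't exist in a
     * Linux build; run-time: callers can ask whether it is usable. */
    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>
    #endif

    /* does the platform-specific fast path exist in this build? */
    static int FancyPathAvailable(void) {
    #ifdef _WIN32
        return 1;   /* could also probe at run-time via LoadLibrary() */
    #else
        return 0;   /* non-Windows build: feature quietly disables itself */
    #endif
    }

    static void DoWork(void) {
        if (FancyPathAvailable()) {
            printf("using the platform-specific path\n");
        } else {
            printf("using the portable fallback\n");
        }
    }

    int main(void) { DoWork(); return 0; }

so the Windows build gets the extra functionality, and the Linux build
quietly degrades to the portable path rather than failing to build.
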
Post by Chris Warburton
Post by BGB
sort of like codecs on Windows: you don't have to go write a plugin for
every app that uses media (or, worse, hack on their code), nor does
every media player or video-editing program have to be aware of every
possible codec or media container format. they seemingly "just work":
you install the appropriate drivers and it is done.
the GNU-land answer tends more to be "we have FFmpeg / libavcodec and
VLC Media Player", then lots of stuff is built by building lots of
things on top of these, which isn't quite the same thing (you need to
deal with one or both to add a new codec, hacking code into and
rebuilding libavcodec, or modifying or plugging into VLC to add
support, ...).
Windows tends to be "we have DirectFoo", then lots of stuff is built by
building lots of things on top of these ;) Note that there are projects
like GStreamer which are designed to be unified interfaces to
libavcodec/ffmpeg/etc; however, they seem to have limitations since
there are now meta-interfaces like Phonon which sit on top of
GStreamer/VLC/mplayer!
yes.

in some cases, the "tyranny" of APIs like the Win32 API and DirectX and
similar isn't entirely a bad thing, as it can help limit the
proliferation of meta-APIs by keeping most functionality "relatively"
close to the level of the baseline OS.

though, one problem (common with many wrapper APIs) is a tendency to
focus more on wrapping functionality, rather than on providing a good
"hub" architecture for things to plug into and export to, hence the
common need to wrap the wrappers to provide additional functionality.


practically, there will be limits though. for example, in my case, the
near-complete absence of multimedia APIs (or formats) which support
things like layers or alpha channels caused a need for an (admittedly
ad-hoc) collection of infrastructure for dealing with a lot of this
(still lacking any good infrastructure for glossing over the handling of
multi-layer video streams though, *).

and, when exporting the functionality to other APIs (VfW, DirectShow,
and VLC), only really simple single-layer video streams can be exported
(any extended functionality would require bypassing the API).


*: note: multiple layers in a single video stream, not multiple streams
in a single file.
also, sometimes things can support features indirectly even if not
properly supported by the standard OS-level APIs or file formats, like
AVI files with subtitle streams and similar, and some codecs supporting
features which aren't formally supported by the APIs, ...
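
for what it's worth, the sort of representation this ad-hoc
infrastructure ends up needing looks roughly like the following (all
names hypothetical, not any standard API; a real flatten would
alpha-blend every layer rather than just keeping the bottom one):

    /* sketch: a multi-layer frame, i.e. multiple layers within ONE
     * video stream, each with its own alpha; single-layer APIs only
     * have a slot for the flattened result. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        int      width, height;
        uint8_t *bgra;      /* width*height*4 bytes, alpha included */
        float    opacity;   /* per-layer opacity, 0.0 .. 1.0 */
    } VidLayer;

    typedef struct {
        int       num_layers;
        VidLayer *layers;   /* drawn bottom-to-top */
    } VidFrame;

    /* exporting via a single-layer API (VfW/DirectShow style) forces a
     * flatten; stub: keep the bottom layer, where real code would
     * composite all of them. the layer structure is lost either way. */
    static uint8_t *VidFlattenFrame(const VidFrame *frm) {
        const VidLayer *base = &frm->layers[0];
        size_t sz = (size_t)base->width * base->height * 4;
        uint8_t *out = malloc(sz);
        if (out) memcpy(out, base->bgra, sz);
        return out;
    }

    int main(void) {
        uint8_t pix[4] = { 255, 0, 0, 128 };    /* one BGRA pixel */
        VidLayer layer = { 1, 1, pix, 1.0f };
        VidFrame frame = { 1, &layer };
        uint8_t *flat = VidFlattenFrame(&frame);
        if (flat) printf("flattened alpha: %d\n", flat[3]);
        free(flat);
        return 0;
    }

anything beyond the flattened image (extra layers, per-layer opacity,
blend settings) has to be thrown away, or smuggled around the API, at
that boundary.
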
Post by Chris Warburton
Reminds me of the datatypes system in AmigaOS, which was a mechanism for
handling arbitrary file types. The floppy disks/CDs attached to
magazines usually contained a bunch of these (eg. PNG, GIF, etc.) which
could be dragged into the Workbench:Datatypes drawer and existing
programs would magically be able to access files of those types.
Well, that was the theory; many applications ended up bypassing the
system, probably due to lowest-common-denominator effects (ie. generic
access to "images" provides very little control over each particular
format's features). I've only started programming since being on Linux,
so that's pure speculation on my part. Would serve as another example of
environment-provided features which become too limiting for
applications, just like filesystems' embedded metadata.
yeah.

the capabilities of a central API are kind of an issue sometimes.

too few features can lead to limitations, but too many features can
burden writing applications or components, or in some cases hinder an
efficient implementation (for example, where only a small subset can be
implemented efficiently, but where one ends up with a combinatorial
blow-up in the number of possible feature-combinations and
special-cases: n independent features allow up to 2^n combinations, so
even 10 features means on the order of 1024 cases to handle, potentially
leading either to inefficient fallback cases, or to combinations of
features which end up being widely unsupported).

in a few cases, this has had lasting effects, like where only a subset
ends up widely implemented, and most of what is left over (in terms of
specified but not-widely-supported features) ends up largely forgotten
by history.

John Carlson
2013-11-01 01:34:18 UTC
Permalink
Essentially, a problem-oriented window is what you want. In something like
Lively Kernel, this becomes a problem-oriented widget.
Post by Casey Ransberger
A fun, but maybe idealistic idea: an "application" of a computer should
just be what one decides to do with it at the time.
I've been wondering how I might best switch between "tasks" (or really
things that aren't tasks too, like toys and documentaries and symphonies)
in a world that does away with most of the application level modality that
we got with the first Mac.
The dominant way of doing this with apps usually looks like either the OS
X dock or the Windows 95 taskbar. But if I wanted less shrink wrap and more
interoperability between the virtual things I'm interacting with on a
computer, without forcing me to "multitask" (read: do more than one thing
at once very badly,) what's my best possible interaction language look like?
I would love to know if these tools came from some interesting research
once upon a time. I'd be grateful for any references that can be shared.
I'm also interested in hearing any wild ideas that folks might have, or
great ideas that fell by the wayside way back when.
Out of curiosity, how does one change one's "mood" when interacting with Frank?
Casey