Discussion:
VR "for the rest of us" (was Re: [fonc] Re: SecondPlace, QwaqLife or TeleSim? Open ended, comments welcome)
Casey Ransberger
2011-08-09 11:50:54 UTC
Permalink
Inline and abridged... and rather long anyhow. I *really* like some of the
ideas that are getting tossed around.
I almost missed this thread. I'm also hunting that grail. VR for consumers
that isn't lame. CC'd FONC because I think this is actually relevant to that
conversation.
My feeling is, and I may be wrong, that the problems with Second Life are
[sorry in advance, I mostly ended up running off in an unrelated direction,
but maybe it could still be interesting].
You're fine, I do it all the time:)

IMO, probably better (than centralized servers) is to have independent
world-servers which run alongside a traditional web-server (such as Apache
or similar).
This appears more or less to be the way OpenQwaq works. I'm pretty sure that
I haven't fully comprehended everything the server does and how that relates
to the more familiar (to me) client, though. I note that the models and such
seem to live on the server, and then get sent (synced?) to the client.
one can jump to a server using its URL, pull down its local content via
HTTP, and connect to a server which manages the shared VR world, ...
Ah, you're talking about running in a web browser? Yeah, that will probably
happen, but the web browser strikes me as a rather poor choice of life
support system for a 3D multimedia collaboration and learning environment at
least as of today... OTOH I guess it solves the problem of not being able to
deploy (e.g.) GPL'd code on platforms like iOS. I should say that I'm a huge
fan of things like Clamato and Lively Kernel, but I'm not sure the WebGL
thing is ready for prime time, and I'm not sure how something like e.g.
Croquet will translate at this point in time. I also don't have a Croquet
implemented in Javascript lying around anywhere, and it's not exactly a
small amount of work to implement the basis. I don't even understand how all
of the parts work or interact yet...
a partial issue though becomes how much client-specific content to allow,
for example, if clients use their own avatars (which are bounced along using
the webserver to distribute them to anyone who sees them), and a person's
avatar is derived from copyrighted material, there is always a risk that
some jerkface lawyers may try to sue the person running the server, or the
author of the VR software, for copyright infringement (unless of course one
uses the usual sets of legal disclaimers and "terms of use" agreements and
similar).
Heh, yes. Fortunately there are places one can go to purchase assets which
can then be used under commercially compatible licenses... to be honest,
though, the avatar I've been testing with is *cough* Tron. Found it on the
web and couldn't really resist. Got to take him out of there before I can
deploy anything, I think, but I Am Not A Lawyer, so I can't say that I
actually know, and like most folks, I'm going to play it safe... what I do
know is that this is slightly embarrassing :O

Working on an original protagonist/avatar for my "game" but she's not quite
done yet. It's all dialed in but the clothes aren't right yet. Having to
learn to use this pile of expensive 3D animation software as I go... I
really wish I could just draw everything using a pencil and then use a
lightbox to transfer the keyframes to cell and paint, but I don't know how
to make hand-drawn animation work in 3D. This is actually why I was curious
about the availability of the sources to Sketchpad: that constraints-in-3D
idea seems to underlie the automated inbetweening that goes on nowadays, and
you could do stuff in 3D using a light pen with Sketchpad, which seems
better than what I have now in a lot of ways.
allowing user-created worlds on a shared server (sort of like a Wiki) poses
similar problems.
the temptation for people to be lazy and use copyrighted images/music/...
in their worlds is fairly large, and nearly any new technology is a major
bait for opportunistic lawsuit-crazy lawyers...
So it *seems* like the way most businesses deal with this is by taking UGC
down without quarter whenever someone complains. I'll probably end up having
to do something like this. It's still painful because one then needs to
employ people to actually handle that every day. I don't know, maybe there's
some way to use community policing to accomplish this.

In my view, though, if it happens, it isn't the worst problem in the world
to have. It means someone noticed that your product/service or what have you
exists! And if it was fatal, I don't think YouTube would still be on the
internet. In fact all of that "bad press" probably helped YouTube get
traction.

yep, and there is a question of what exact purposes these 3D worlds serve vs
more traditional systems, such as good old web-pages.
I think being able to point at things and see by the eyes and the angle of
the head what people are looking at (shared attention) are probably pretty
powerful in general. We have a very disjoint communication experience... I
have to keep track of phone calls, video, the (grumble) stream of mostly
useless information which comes my way from the social networking site that
my friends have increasingly replaced other, less noisy communication
mechanisms with.

And what kind of error message is "Message is too long." when I didn't even
write enough to constitute a short story? It's hard to actually make a whole
point with the stuff people are using now if you like to supply supporting
arguments.

The best way to have a conversation with someone is in person, but with my
friend in Florida, this gets expensive quickly, etc... and I won't be able
to visit my friend in Argentina very much at all, much less introduce him to
friends I've made in Seattle in person any time soon; I'd have to save up to
do that.

I think when the 3D displays get cheap, natural user interfaces become
common, and computer animation starts to exit the "uncanny valley" this
stuff will start to look like a pretty good idea. The consumers I've talked
to pretty much tugged my coat and said the adult equivalent of "Momma, want,"
but I haven't convinced any business folks that trying to sell it is a good
idea yet, and I have a feeling that's going to be pretty hard.
games are a major application area for 3D, but the more open-ended world
of non-game systems is a much bigger problem, and the relative merits
of 3D are much less obvious.
Yeah and a new medium is like... it's like pitching that an investor, who
really wants to invest in a nice painting, should instead invest in a new
kind of canvas. Ends up being a hard sell. I looked at the list of
universals and settled on "play" as the best bet, so I focused on ways I
might build a game of some sort in there. If I can get the tech out there,
people will pretty rapidly figure out that it isn't really a game, but
merely contains one. And then people will likely figure out what it's for on
their own, bit by bit. This is my thinking, anyway, and I may well be wrong.

a partial issue at present, though, is the reasonably high cost
of producing decent-quality 3D content (models, maps, ...) in contrast to
most other content.
When I realized that I was going to need the paraphernalia of 3D gaming,
there went my savings... and a lot of my time, since I was the only person I
knew who'd done any animation.
the industry-standard tools are typically expensive, have a steep learning
curve, and still leave content production a rather long and tedious process
(it is, in contrast, much faster and easier to produce spiffy-looking 2D
graphics artwork, or for that matter to edit documents in a WYSIWYG editor).
I'd even go so far as to say that a lot of this stuff is still rather
counterintuitive, but I know mileage probably varies. And yes, my kingdom
for a way to do 3D animation that resembles what I did with my pencil, my
fountain pens, the cells, the paint, etc. It ends up being more like a
combination of Python, puppeteering and sculpture nowadays, and I have
confidence that I can rock the Python, even though I've never used it, but
the other two are things I don't have previous experience with.

The automatic inbetweening (constraint-based under the hood, I can only
assume, though it usually doesn't ship with source code except for Blender,
whose UI seems strongly resistant to the ways I want to interact with it out
of the gate) often produces weird, grotesque, physically impossible
contortions, so I regularly have to go back and do my own inbetweens
manually. I've been using Blender mostly to convert file formats,
and commercial tools to do the animation work, because I simply didn't have
time to learn to use Blender effectively. The commercial stuff seems a
little less... alien, but it's also not terribly easy to learn to use. For
me.

Also, a hand drawn character looks less... creepy than the current state of
the art puppet, even if the puppet is more realistic now. Uncanny valley. In
a long shot I can make the spitting image of a real life human being
surrounded by beautiful, lush, procedural/fractal terrain, but in the close
up, it just makes me want to cry a little and call my mom.

And the other issue I have: spending hours to see what the final render
of a single frame will look like isn't economical, so I have to work
with images that don't look anything like the final product most of the
time. This is a pretty big problem for me, and the only solutions I know
about are a) buy or rent a compute cluster, and b) wait a long time. I can't
currently afford to do either, so I have to work with preview renders in the
wrong resolution, the shading minimized, and basically clothes that look
completely tattered and don't even move right until I'm ready to cross my
fingers and pray that the final render won't come out completely wrong for
some reason I couldn't perceive in the early version: this is a lot like
that awful "get coffee and maybe lunch while my code fails to compile"
thing.

also, there is the general problem of a lack of non-suck free DCC
tools.
yes, I have my own 3D DCC tools, but sadly, they are not exactly non-suck
either...
Really? That's just so cool. Right now I feel like the caveman in 2001 who
figures out he can use the one thing to smash the other thing and gets
really excited about ways this might help him eat after he has an encounter
with the monolith. 3D is a tough nut to crack.
another problem at the present time is the general lack of freely-available
3D artwork, meaning much content production has to start from the ground-up,
from basic cubes and cylinders (again, this may have something to do with
the present sad state of DCC tools).
+1 and we know what the problem is too, it's still too expensive for most
people to learn, do and give away.
Minecraft has been running with an honor system for a while now, and
people just don't seem to mess with each other as much there.
yes, but a lot may also be that there is no centralized Minecraft world,
but instead most servers are run on an individual basis and only admit a
limited number of players.
That's an interesting point. I'm going to run in a different direction on
this one, though. There's a service called freeshell which is about free
UNIX shells. They had some trouble with abuse once, and so they voted to
change the rules slightly, so that in order to get email you had to make a
$1 donation (this got rid of almost all of the spam) and to get a more fully
featured shell account, you had to pay... I think my lifetime ARPA
membership cost me like $30 if I remember? This got rid of more
"sophisticated" forms of abuse. To set up certain kinds of services requires
greater contributions. This seems to have worked for freeshell, and may
work for immersive technologies as well. Note that e.g. Second Life offers
free accounts...
thus, the target for destructive behaviors or vandalism is spread very thin
(people are far less prone to try to vandalize peoples' personally-run
servers).
This is a good point, but it may also be that people are playing Minecraft
with more people they know in real life, which is also a little bit
interesting, no?

but, in some ways, I think Minecraft represents something "fundamental",
though I don't really know what it is. in many ways, it has created something
thus far reasonably unique in the world of gaming.
Well, it does two things really well. It recognizes that people of all
ages can stack up blocks shaped like things in the real world and then knock
them over for fun. It also includes cellular automata that users can use to
figure out how to make complex dynamic behaviors without necessarily having
to be taught (though programming with Redstone is not the first thing in the
game people figure out... you have to figure out how to survive and dig down
to the bottom of the world in order to obtain it, and programming is
relatively hard.)
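As a toy illustration of that Redstone-style behavior (my own sketch, not Minecraft's actual code): signal strength starts at 15 at a power source and decays by one per wire cell, which falls out of a small breadth-first propagation.

```python
from collections import deque

def power_levels(wires, source):
    """Redstone-style signal decay: power starts at 15 at the source
    and drops by one per wire cell, dying out at zero."""
    power = {w: 0 for w in wires}
    queue = deque([(source, 15)])
    while queue:
        cell, p = queue.popleft()
        if cell not in power or p <= 0 or power[cell] >= p:
            continue  # off the wire, exhausted, or already stronger here
        power[cell] = p
        x, y = cell
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            queue.append((nb, p - 1))
    return power

line = [(x, 0) for x in range(20)]
levels = power_levels(line, (0, 0))
print(levels[(14, 0)], levels[(15, 0)])  # 1 0 -- signal dies after 15 cells
```

Even this trivial rule is enough to build gates and adders in-game, which is what makes the "learn it without being taught" point plausible.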
I also find voxels interesting, but there is uncertainty about where the
future of 3D lies (moving on from the present system of polygonal geometry).
voxels are one option, but not the only option.
I like fractals. But it ends up being a mix in all likelihood. Voxels are
great for doing clouds at a distance, etc. But up close clouds made out of
little cubes aren't very convincing. Minecraft overcomes this by making low
resolution textures and huge voxels a kind of fashion statement.
in my case, I build partly on a different system: CSG.
Had to google that and it showed up below the fold. You mean this, is that
correct?

http://en.wikipedia.org/wiki/Constructive_solid_geometry
although CSG is not exactly new either, people are far from using CSG to
its full ability (like voxels, objects built from CSG are "solid", and so it
is possible to cut/modify/... CSG-based objects).
also, the initial up-front cost of CSG (memory, rendering overhead, ...) is
much lower than with voxels (and, CSG can usually be fairly easily converted
into polygonal geometry for rendering/...). so, when a piece of CSG geometry
is altered, its polygonal geometry can be fairly easily rebuilt (unlike with
a mesh model, where there is nothing "in" the model beyond its external
geometry).
a downside of CSG vs voxels is that voxels have all of their complexity
up-front, so however the scene is changed the voxels remain approximately
the same cost. however, supporting alteration of CSG-based geometry steadily
increases the amount of internal geometry (creating new primitives and
cutting holes with negative primitives, ...). so, a Minecraft-like game
world couldn't be so easily created with CSG.
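BGB's point about CSG objects being "solid" (with holes cut by negative primitives) can be sketched in a few lines, treating each shape as a point-membership test composed with boolean operators. This is my own toy illustration, not his engine:

```python
# Toy CSG: a shape is a point-membership predicate; boolean operators
# compose shapes, and a "negative primitive" cuts a hole via subtraction.

def sphere(cx, cy, cz, r):
    return lambda x, y, z: (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r

def box(x0, y0, z0, x1, y1, z1):
    return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def union(a, b):
    return lambda x, y, z: a(x, y, z) or b(x, y, z)

def subtract(a, b):  # b acts as a negative primitive
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# a 2x2x2 cube with a spherical hole bored out of its center:
shape = subtract(box(0, 0, 0, 2, 2, 2), sphere(1, 1, 1, 0.5))

print(shape(0.1, 0.1, 0.1))  # True  (inside the cube, outside the hole)
print(shape(1.0, 1.0, 1.0))  # False (inside the carved-out sphere)
```

Because the shape is a predicate rather than a surface mesh, cutting or modifying it is just more composition, which is exactly the property the thread is pointing at.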
I have before had ideas for trying to combine CSG and voxels, but this is
itself a complex problem area.
even more compelling would be if both could be combined with FEM and
similar, allowing for structures which could be built initially from CSG
primitives, broken apart or modified at a fine level of detail (as with
voxels), and subject to mechanical forces (bending, flexing, fracturing
under strain, shearing, ...).
It's so much fun being an eternal beginner! Okay so now you're talking about
this, right?

http://en.wikipedia.org/wiki/Finite_element_method
It sounds fantastic?
how to implement it?...
You got me.
how to make it able to be used at *any* real scale on current HW
(voxels+FEM is not exactly a lightweight combination).
What if the "simulation overhead" could be distributed to every machine
currently participating? One might even apply an idea like occlusion culling
in order to avoid simulating things that no one is present to perceive, at
the cost of some realism (if a tree falls in the forest, and there's no one
there to compute it, it can't affect the butterfly that flaps its wings and
makes it rain in Singapore, to mix some metaphors badly.)
then again, there may be a partial way around it: the system is built
top-down from CSG.
at the high-level, it is just CSG primitives, and this is how the world is
initially built;
CSG geometry may be animated/... through more traditional means (treated as
polygonal geometry and subjected to weighted vertex transformations, ...);
as-needed, the geometry may be "decomposed" into analogous voxel geometry
(if done well, the transformation can be handled a single primitive at a
time and in a reasonably transparent manner).
and, if most of the world remains as CSG, then the overall cost can be kept
lower, and in regions where there is severe alteration, it transforms into
voxels.
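The "decomposed into analogous voxel geometry" step might, naively, look like sampling a CSG solidity test over a grid, one primitive at a time (a hypothetical sketch of the idea; `voxelize` and its signature are my own invention):

```python
def voxelize(solid, bounds, n):
    """Sample a CSG solidity predicate on an n x n x n grid and
    return the set of filled voxel indices."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    dx, dy, dz = (x1 - x0) / n, (y1 - y0) / n, (z1 - z0) / n
    filled = set()
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # test the voxel's center point against the solid
                if solid(x0 + (i + 0.5) * dx,
                         y0 + (j + 0.5) * dy,
                         z0 + (k + 0.5) * dz):
                    filled.add((i, j, k))
    return filled

unit_sphere = lambda x, y, z: x * x + y * y + z * z <= 1.0
vox = voxelize(unit_sphere, ((-1, -1, -1), (1, 1, 1)), 8)
print(len(vox))  # 280 filled voxels approximate the sphere
```

The cost asymmetry BGB describes is visible here: one primitive became hundreds of voxels, which is why you'd only decompose the regions that actually get altered.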
So my fractal planet takes some voodoo that I don't yet understand in order
to be converted to a particular level of detail (the mesh and map) that I
can use with conventional 3D applications. I've also been thinking that
multiple representations for different kinds of graphics- and physics-related
computations might be an interesting line of inquiry, but sadly I'm
presently missing the math I need to envision what that looks like. It's
nice to see someone who understands this stuff better than I do thinking
along these lines, especially since I currently have no plan for what I'll
do when I find out that a real-time physics engine is what I actually need;
from the bit of web searching I was able to do, I understand these are not
small or easy to build either.

an example of this would be, say, one creates a big sphere of "sand" in the
sky, and it proceeds to fall due to gravity (at this point, the engine sees
a plain sphere). upon contact with the ground, high-forces (exceeding
certain defined constants) cause the sphere to break apart into voxels,
which are then subjected to internal physics (moving forwards at their prior
speeds), and then stop once there is nowhere for them to go.
potentially, the engine leaves this as-is, or may begin trying to
"recompress" these regions of voxels back into CSG geometry where possible
(say, a big cube of voxels all of the same size is replaced with a "cube"
primitive mapped to the given material type).
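The "recompress" pass described above might, in its simplest form, test whether a region of voxels is uniform and, if so, collapse it back into a single primitive. A hypothetical sketch (names are my own, not BGB's code):

```python
def recompress(voxels, region):
    """If every cell in `region` holds the same material, collapse the
    region back into one CSG 'cube' primitive; otherwise keep the voxels.
    `voxels` maps (i, j, k) -> material name (absent = empty)."""
    (i0, j0, k0), (i1, j1, k1) = region
    cells = [voxels.get((i, j, k))
             for i in range(i0, i1)
             for j in range(j0, j1)
             for k in range(k0, k1)]
    if cells and cells[0] is not None and all(m == cells[0] for m in cells):
        return ("cube", region, cells[0])  # one primitive replaces the block
    return ("voxels", region, None)        # mixed or empty: leave as-is

# a solid 2x2x2 block of sand collapses into a single cube primitive:
sand = {(i, j, k): "sand" for i in range(2) for j in range(2) for k in range(2)}
print(recompress(sand, ((0, 0, 0), (2, 2, 2)))[0])  # cube
```

A real engine would search for maximal uniform regions rather than being handed one, but the payoff is the same: the settled sand pile stops costing voxel-simulation time.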
Yep, that sounds really cool, and my gut says reducing to primitives
wherever possible might help make performance of higher resolution
volumetric simulations more feasible? Although I suppose if all you have is
voxels, then you only have a single primitive, and the number of primitives
in play is supposed to affect render time? *Sigh* need more math.
and, from the point of view of the user, they just see a big ball of sand
fall from the sky and splat on the ground, leaving a sand-pile... (itself
subject to being readily deformed, say the user shoots at it and puffs of
sand fly off and pile up elsewhere, and ones' weapons deform the sand pile
and leave their bullets embedded in the sand, ...).
That's what you want yeah. Stuff that behaves like real stuff:)
an analogy would be the current partial integration between bitmap and
vector graphics, but what if this were 3D?...
granted, all this is mostly speculation, and not really something I can
pursue at present, and it may still be some years before this sort of thing
can be applied in real-time to large-scale game-worlds, in contrast to the
reasonably small scales typically seen in most voxel+FEM+physics examples
thus far.
Imagine something on the scale of an MMORPG and think about ways you can
distribute the math across as many machines as possible. That's what I've
been thinking for the simulation stuff, anyway. I'm not sure this is a real
advantage but I bet you could at least get yourself a lot of computing
power.
also, the comparably lower costs of plain CSG and rigid-body physics make
them currently a much more attractive target, as these work much better on
current HW.
And with most real time physics engines, objects have a habit of randomly
exploding due to the lack of accuracy in the simulation, which invariably
means having to manually add damping, which can also cause objects to
implode, right? I read a paper about that somewhere when I first looked into
doing physics. There was just no way I was paying for a commercial engine,
etc... and then OpenQwaq ended up being released, which looked more like
what I wanted out of the box than the "game engines."
OTOH, Minecraft works as well as it does because its voxels are very low
density and there are almost no large-scale physics (it is not possible, for
example, to build a large structure, and have it break loose and fall over,
...).
Right, and this is both an artistic constraint that produces interesting
art, and probably a real limitation to the kinds of art that one can create
in Minecraft.
imagine if, for example, one could build some large electronic device in a
Minecraft like system, print it out, and have a real and working piece of
hardware...
Uh... heh. I wanted to do something like this with my hardware project, but
I looked at what it takes to build an ALU with Redstone and thought "yeah
this thing really needs turtles." Which is what got me thinking about Logo.
it could very well be possible given the existence of conductive and
semi-conductive polymers, ...
I think yeah just the FPGAs will be good enough for now.

or such...
I'll spend some time reading these. Thank you.
http://en.wikipedia.org/wiki/Voxel
http://en.wikipedia.org/wiki/Constructive_solid_geometry
http://en.wikipedia.org/wiki/Finite_element_method
http://en.wikipedia.org/wiki/Computational_fluid_dynamics
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
--
Casey Ransberger
David Barbour
2011-08-09 15:37:25 UTC
Permalink
if clients use their own avatars (which are bounced along using the
webserver to distribute them to anyone who sees them), and a person's avatar
is derived from copyrighted material, there is always a risk that some
jerkface lawyers may try to sue the person running the server, or the author
of the VR software, for copyright infringement (unless of course one uses
the usual sets of legal disclaimers and "terms of use" agreements and
similar).
I can think of two more problems. First, there will undoubtedly be avatars
shaped like giant billboards, dicks, and other objectionable material.
Unlike signatures on bulletin boards, this will be a lot harder to police
since avatars move around.

Second, we probably want the ability to stylize avatars as they move between
worlds. For example, some worlds may prefer a cel-shaded art style. When I
was pursuing this back in 2003-4, I was looking into possibilities such as
CSS for 3D - not just for avatars, but for the world itself, so that people
could support proper world mashups.

My ideas there were more along the lines of providing a sort of 'DNA' for
the avatars and 3D worlds, describing them in a common ontology, with
variations from a norm (male, female; height, body-shape; crooked-nose, snub
nose; etc.) and sometimes tweaks or non-standard extensions per world. This
would allow developers of the world to prevent 'literal' dicks from entering
their world. It would also allow people to become non-humans (e.g. werewolf,
vampire, or orc, ... or dick) when they enter certain worlds... i.e. to take
on various roles in common games.
games are a major application area for 3D, but the more open-ended world
of non-game systems is a much bigger problem, and the relative merits
of 3D are much less obvious.
Yeah, 3D tends to be rather sparse of informational content. Today, I'm
interested in the possibility of augmented reality... e.g. look through your
Tablet's video camera, and see a mixed camera/3D rendering of the scene.

A few pictures of a printer in context, along with meta-data about location
and network address, and we might be able to drag and drop documents onto a
'visible' printer in a 3D space.

There are a lot more privacy issues, of course, with augmented reality -
i.e. keeping people out of our homes and businesses unless they belong
there.
David Barbour
2011-08-09 15:47:13 UTC
Permalink
On Mon, Aug 8, 2011 at 6:55 PM, Casey Ransberger
2. Their entire business model ended up being a cultural toxin. Free
accounts mean spam and griefing/trolling/abuse. A profit motive for users
seemed like a good idea at the outset, as it's about the most marketable
universal out there, but it seems that DRM+UGC = red light district, real
estate, fashion, and a handful of enterprise applications which would
probably be served at least as well by Teleplace. I think one ultimately
wants user generated content, but I'm not sure what the right way to do it
is. One might read a book about Logo:)
I think any solution will need to accommodate porn, or it simply won't be
accepted. The idea should be, instead, to keep it from infecting everything
else and allow parents to protect their children.

My own interest, when I was pursuing this in 2003-4, was scalable
composition of federated worlds.

Today, I'm somewhat interested in 3D as an abstract space for layout of
information. For example, we can have a sort of XSLT or XQuery generating 3D
content, and thus see the same 'world' with many different views. This could
possibly solve the problems 3D has with information density. We'd get a 3D
world where the only real content is 'information', and the layout of that
information is up to the client.
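A toy rendering of that "layout is up to the client" idea (all names hypothetical): one shared information model pushed through two different client-side view transforms, yielding two 3D layouts of the same 'world'.

```python
import math

# One shared information model...
world = [{"title": "Printer", "status": "idle"},
         {"title": "Scanner", "status": "busy"}]

# ...and two client-side view transforms that decide the 3D layout.
def grid_view(items):
    """Lay items out along a line on a flat grid: (x, y, z, label)."""
    return [(2.0 * i, 0.0, 0.0, it["title"]) for i, it in enumerate(items)]

def ring_view(items):
    """Arrange the same items in a circle instead."""
    n = len(items)
    return [(math.cos(2 * math.pi * i / n), 0.0,
             math.sin(2 * math.pi * i / n), it["title"])
            for i, it in enumerate(items)]

# Same content, two 'worlds':
print([p[3] for p in grid_view(world)])  # ['Printer', 'Scanner']
print([p[3] for p in ring_view(world)])  # ['Printer', 'Scanner']
```

The content never changes between views, only the transform does, which is the XSLT/XQuery analogy in miniature.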
Steve Wart
2011-08-09 18:09:35 UTC
Permalink
Post by David Barbour
I think any solution will need to accommodate porn, or it simply won't be
accepted. The idea should be, instead, to keep it from infecting everything
else and allow parents to protect their children.
3D design is extraordinarily expensive to develop properly - the
adult-oriented free-for-all in Second Life failed because it didn't scale
and there was no revenue model.

However some 3D virtual worlds are extraordinarily successful. World of
Warcraft, Minecraft and Roblox are some of my favourite examples.

And also note the lack of porn (although WoW has a high level of titillation,
it has also been very successful in attracting women).
Post by David Barbour
My own interest, when I was pursuing this in 2003-4, was scalable
composition of federated worlds.
It would have been good if some of the ideas that SL and others were
pursuing at the time took off. The original concept of VRML as a standard in
the hypertext model still makes sense to me, but the gaming platforms seem
to prefer the silo model.

The Teatime model seems promising but I confess I still have a hard time
getting my head around it. There are papers I need to read again, but I
found myself disagreeing with some of the assumptions and when that happens
I usually remain stuck with my preconceived notions.

Despite its commercial nature, Minecraft seems very open and easy to adapt.
Interestingly this implementation does a lot more to show that Java is "fast
enough" for real-time 3D environments than Croquet was able to with Squeak.
Croquet always felt awkward to me, partly it was performance, but it was
also because some of the primitives were too primitive.

While Croquet allows the arbitrary import of geometric meshes, many other
important complex graphical and physical characteristics are completely
unsupported. Minecraft limits its primitives to simple blocks. While
counter-intuitive, they provide a useful abstraction that simplifies the
introduction of physics, lighting models and particle effects.
Post by David Barbour
Today, I'm somewhat interested in 3D as an abstract space for layout of
information. For example, we can have a sort of XSLT or XQuery generating 3D
content, and thus see the same 'world' with many different views. This could
possibly solve the problems 3D has with information density. We'd get a 3D
world where the only real content is 'information', and the layout of that
information is up to the client.
Field is an exciting tool for visualization:
http://openendedgroup.com/field - it's very Smalltalk-like with an
extremely capable graphics library.

Regards,
Steve
Casey Ransberger
2011-08-09 19:30:32 UTC
Permalink
Cut it down to what I'm responding to, and inline.
Post by Steve Wart
Despite its commercial nature Minecraft seems very open and easy to adapt.
Interestingly this implementation does a lot more to show that Java is "fast
enough" for real-time 3D environments than Croquet was able to with Squeak.
Croquet always felt awkward to me, partly it was performance, but it was
also because some of the primitives were too primitive.
Have you checked out OpenQwaq? Runs on Cog. I have a feeling if I ran the
server on a different computer, rather than in VMWare on the same modest
hardware, performance would be a non-issue unless I allowed extremely
complex meshes or high-rez textures in. It's totally acceptable and usable
even the way I'm currently running it, which is in a relatively
resource starved way. It chunks just a wee bit from time to time. I've been
really impressed with the performance so far. It would not, in any previous
year, have occurred to me to run an application that rendered 3D graphics
alongside an application that virtualized a big old enterprise operating
system at the same time on the same machine, but here I am doing it:)
--
Casey Ransberger
David Barbour
2011-08-09 20:44:43 UTC
Permalink
Post by Steve Wart
3D design is extraordinarily expensive to develop properly
That is not an essential property of 3D design. We could have an ontology /
'markup language' just for building and animating avatars, similar to
dressing up a doll, if we want to make one. And a modular ontology for
buildings (including concepts such as crenelations and gargoyles). And
another for environments. Etc. Given a suitably modular meta-language, we
can even have dedicated languages for describing zombies.
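The avatar-'DNA' idea, variations from a norm plus per-world tweaks, might be sketched like this (entirely hypothetical names and fields, just to make the ontology concrete):

```python
# All names and fields here are hypothetical, just to make the idea concrete.

DEFAULT = {"species": "human", "height": 1.75, "build": "average",
           "nose": "straight"}

def make_avatar(**variations):
    """An avatar is the norm plus a small set of variations ('DNA')."""
    avatar = dict(DEFAULT)
    avatar.update(variations)
    return avatar

def enter_world(avatar, world_rules):
    """A world applies its own tweaks (e.g. everyone becomes an orc)
    and strips traits it forbids, before the avatar is rendered."""
    styled = dict(avatar)
    styled.update(world_rules.get("overrides", {}))
    for trait in world_rules.get("forbidden", ()):
        styled.pop(trait, None)
    return styled

me = make_avatar(height=1.9, nose="crooked")
orc_world = {"overrides": {"species": "orc"}}
print(enter_world(me, orc_world)["species"])  # orc
```

Because the description is semantic rather than geometric, each world can render (or refuse) a trait however it likes, which is what keeps the 'literal dicks' out.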

I see the impoverished languages of today as an opportunity. For
accessibility reasons - e.g. desktop vs. iPhone access to a world - it is
preferable that we develop in these high-level ontologies anyway.

My own vague interest has steered me towards modular, reusable, multi-player
interactive fiction - with a lot of inspiration from the Inform 7 language
[1]. I have a bunch of half-formed designs from my earlier work on the
subject, and my efforts in language design.
Post by Steve Wart
And also note the lack of porn (although WoW has a high level of
titillation it also has been very successful in attracting women).
Lol. Pornography is a human trait with an ancient and ignoble history, even
if male dominated. I once watched a rather funny (but somewhat perverted)
video called 'Ballad of the Sex Junkie' developed in WoW. It's NSFW, but is
tame enough for YouTube.

Anyhow, I'm speaking at the federated world level. It would be silly to deny
that those red-light districts will exist. This rule is the same for all
computer security: you cannot protect against a threat by ignoring it! I
prefer soft security, wherever possible, and this means recognizing and
accommodating threats in order to gain some control of them. By recognizing
red-light districts, and the inevitable fallout (such as naked avatars
waltzing through worlds), we can isolate them (e.g. by ensuring that the
avatar has suitable clothing upon entering a 'no shirt no shoes no service'
world).
Post by Steve Wart
The original concept of VRML as a standard in the hypertext model still
makes sense to me, but the gaming platforms seem to prefer the silo model.
VRML is an awfully low-level ontology for building 3D models! I would
suggest that this is part of *why* we favor the silo model.

Think about what it would take to build designs that let us achieve
something similar to CSS for 3D and avatar animation. Separation of artistic
rendition (presentation) from content is important. Anything short of that
is ultimately unsuitable for world mashups! Working with cones and boxes is
not the right level for this.

I think we really do need an ontology for architecture, avatars,
environments, etc. as a common foundation in the world.
Post by Steve Wart
The Teatime model seems promising
The Teatime protocol is unscalable and insecure. It is suitable for LANs where
you trust the participants, but it would die a slow, choking death if faced
with 'flash crowds', 'script kiddies', and the like. No variation on
Teatime will ever work at scale. Transactions scale poorly and have plenty
of flaws [2].

But there are some lessons you can take away from Teatime. Use of temporal
semantics is a suitable basis for consistency even without transactions - we
can tame this with a more commutative/idempotent model and *eventual
consistency*. Explicit delay is an effective approach to achieve near
wall-clock determinism in the face of distribution latencies (e.g. a signal
propagates to multiple clients, but triggers at some specific time in the
future).
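The "explicit delay" idea above can be sketched in a few lines (names and the latency budget are illustrative, not from any protocol): the sender stamps each event with a trigger time slightly in the future, so every client with a synchronized clock applies it at the same wall-clock instant despite differing network latencies.

```python
# Hypothetical sketch of explicit delay for near wall-clock determinism.

LATENCY_BUDGET = 0.200  # seconds; must exceed the worst expected one-way delay

def make_event(payload, now):
    """Stamp the event with a trigger time in the near future."""
    return {"payload": payload, "trigger_at": now + LATENCY_BUDGET}

def deliver(event, client_queue):
    """Transport step: the event reaches each client's local queue."""
    client_queue.append(event)

def tick(client_queue, now):
    """Apply every event whose trigger time has arrived, in timestamp order."""
    due = sorted((e for e in client_queue if e["trigger_at"] <= now),
                 key=lambda e: e["trigger_at"])
    for e in due:
        client_queue.remove(e)
    return [e["payload"] for e in due]
```

A client that receives the event early simply holds it; one that receives it late (past the budget) is the failure case the budget must be tuned against.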

I have developed a very simple and effective programming model - Reactive
Demand Programming - for solving these and related concerns [3]. One might
think of RDP as a fusion of eventless FRP and OOP - i.e. OOP where messages
and responses are replaced by continuous control signals, and state is
primarily replaced by continuous integrals. RDP is, by no small margin, the
most promising model for developing modular, federated, distributed command
and control systems, augmented reality systems, and 3D worlds.
Post by Steve Wart
Croquet always felt awkward to me, partly it was performance, but it was
also because some of the primitives were too primitive.
I agree that this is a problem. VRML is a problem for the same reason - i.e.
it is not clear what the physics should be, nor how we should recharacterize
for a different artistic style, and so on.

Regards,

Dave

[1] http://inform7.com/
[2] http://awelonblue.wordpress.com/2011/07/05/transaction-tribulation/
[3] http://awelonblue.wordpress.com/2011/05/21/comparing-frp-to-rdp/
BGB
2011-08-09 22:40:16 UTC
Permalink
Post by Steve Wart
3D design is extraordinarily expensive to develop properly
That is not an essential property of 3D design. We could have an
ontology / 'markup language' just for building and animating avatars,
similar to dressing up a doll, if we want to make one. And a modular
ontology for buildings (including concepts such as crenelations and
gargoyles). And another for environments. Etc. Given a suitably
modular meta-language, we can even have dedicated languages for
describing zombies.
I see the impoverished languages of today as an opportunity. For
accessibility reasons - e.g. desktop vs. iPhone access to a world - it
is preferable that we develop in these high-level ontologies anyway.
My own vague interest has steered me towards modular, reusable,
multi-player interactive fiction - with a lot of inspiration from the
Inform 7 language [1]. I have a bunch of half-formed designs from my
earlier work on the subject, and my efforts in language design.
yes, although sadly existing technology and tools have done a terrible
job at this, and are still mostly at the level of:
create a cube;
stretch it to about the right size;
put a "building exterior" texture or similar on it;
...;
call it done.

or, one wants to build a building, and so resorts to endless geometric
fiddling (placing/sizing/texturing cubes to make walls/doors, importing a
chair model and copy/pasting it a crapload of times, ...).

yes, granted, a few programs have procedural modeling features, ...
Post by Steve Wart
And also note the lack of porn (although WoW has a high level of
titillation it also has been very successful in attracting women).
Lol. Pornography is a human trait with an ancient and ignoble history,
even if male dominated. I once watched a rather funny (but somewhat
perverted) video called 'Ballad of the Sex Junkie' developed in WoW.
It's NSFW, but is tame enough for Youtube.
Anyhow, I'm speaking at the federated world level. It would be silly
to deny that red-light districts will exist. This rule is the
same for all computer security: you cannot protect against a threat by
ignoring it! I prefer soft security, wherever possible, and this means
recognizing and accommodating threats in order to gain some control of
them. By recognizing red-light districts, and the inevitable fallout
(such as naked avatars waltzing through worlds), we can isolate them
(e.g. by ensuring that the avatar has suitable clothing upon entering
a 'no shirt no shoes no service' world).
possibly; if done more like the existing web, then a person will have
different user accounts and different avatars for different servers.

transferring from one location to another, or going to favorite places,
may then inevitably involve some number of login screens...
Post by Steve Wart
The original concept of VRML as a standard in the hypertext model
still makes sense to me, but the gaming platforms seem to prefer
the silo model.
VRML is an awfully low-level ontology for building 3D models! I would
suggest that this is part of /why/ we favor the silo model.
VRML also looked like a mishmash of things that would not normally go
together in game data files.

in many game engines, most of the game contents are spread across a
large number of different files, each format typically fairly
specialized, and integrated into a single combined world.

VRML seems to try to be more like HTML, and express the entire world
structure in a single file.
IMO, this is not a terribly great approach.


granted, I hold a similar complaint against Collada as well (although it
sees the world more from the POV of a traditional 3D modeling app).
Post by Steve Wart
Think about what it would take to build designs that let us achieve
something similar to CSS for 3D and avatar animation. Separation of
artistic rendition (presentation) from content is important. Anything
short of that is ultimately unsuitable for world mashups! Working with
cones and boxes is not the right level for this.
yep.

ideally, we should probably be working with higher-level "entities"
instead of lower-level geometry.

like, say, one goes about defining an entity type, allowing for certain
input parameters, ...

then, later, a piece of code may import the entity.

entity {
    classname="someapp/my_entity_type"
    origin="..."
    ...
}
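A format like the one quoted above is trivial for a client to consume; here is a rough sketch of parsing it into dictionaries (the grammar is inferred from the example, not from any engine's actual format):

```python
import re

# Hypothetical parser for the entity { key="value" ... } syntax above.
ENTITY_RE = re.compile(r'entity\s*\{([^}]*)\}')
FIELD_RE = re.compile(r'(\w+)\s*=\s*"([^"]*)"')

def parse_entities(text):
    """Return a list of {key: value} dicts, one per entity block."""
    return [dict(FIELD_RE.findall(body)) for body in ENTITY_RE.findall(text)]

src = '''
entity {
    classname="someapp/my_entity_type"
    origin="0 0 0"
}
'''
entities = parse_entities(src)
```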


Quake-series engines have generally done similar for most "higher-level"
entities (excluding basic map geometry).

to some extent (and with a different syntax) Valve is already doing
something vaguely similar with entities which may also import map
geometry (one can do things like, say, import premade world objects in
Hammer Editor, ...).

applying this at a larger scale may make some sense.


taken further, it could mean the elimination of "brushes" as
traditionally understood in the "map" sense, with brushes essentially
becoming entities as well (again, Valve has partially done this at the
syntax level, abolishing the older brush-definition syntax).

this could mean though that, instead of creating a brush from faces, one
could write:
primitive {
    classname="primitives/cube"
    origin="..."
    mins="..."
    maxs="..."
    texture="stone/cobblestone"
    ...
}

and have the 3D engine figure out trivia such as what the face-planes
and texture projection axes are (traditional Quake-style maps exist at
the level of specifying individual faces).
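The "engine figures out the trivia" step is cheap: for an axis-aligned cube the six face planes follow directly from mins/maxs. A sketch (function name and plane convention are illustrative):

```python
# Hypothetical derivation of the six face planes of a primitives/cube
# entity from its mins/maxs, instead of specifying each face by hand.

def cube_face_planes(mins, maxs):
    """Return (normal, distance) pairs with dot(normal, p) == distance
    for points p lying on each face of the axis-aligned box."""
    planes = []
    for axis in range(3):
        neg = [0, 0, 0]; neg[axis] = -1   # outward normal of the min-side face
        pos = [0, 0, 0]; pos[axis] = 1    # outward normal of the max-side face
        planes.append((tuple(neg), -mins[axis]))
        planes.append((tuple(pos), maxs[axis]))
    return planes

planes = cube_face_planes((-16, -16, 0), (16, 16, 64))
```

Texture projection axes could be picked similarly, e.g. by choosing the two axes perpendicular to each face normal.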
Post by Steve Wart
I think we really do need an ontology for architecture, avatars,
environments, etc. as a common foundation in the world.
possibly, ultimately all levels should be expressed, but what should be
fundamental, what should be expressed in each map, ... is potentially a
subject of debate.
Post by Steve Wart
The Teatime model seems promising
The Teatime protocol is unscalable and insecure. It is suitable for LANs
where you trust the participants, but it would die a slow, choking death
if faced with 'flash crowds', 'script kiddies', and the like. No
variation on Teatime will ever work at scale. Transactions scale
poorly and have plenty of flaws [2].
But there are some lessons you can take away from Teatime. Use of
temporal semantics is a suitable basis for consistency even without
transactions - we can tame this with a more commutative/idempotent
model and /eventual consistency/. Explicit delay is an effective
approach to achieve near wall-clock determinism in the face of
distribution latencies (e.g. a signal propagates to multiple clients,
but triggers at some specific time in the future).
I have developed a very simple and effective programming model -
Reactive Demand Programming - for solving these and related concerns
[3]. One might think of RDP as a fusion of eventless FRP and OOP -
i.e. OOP where messages and responses are replaced by continuous
control signals, and state is primarily replaced by continuous
integrals. RDP is, by no small margin, the most promising model for
developing modular, federated, distributed command and control
systems, augmented reality systems, and 3D worlds.
I am not familiar with the Teatime protocol; apparently Wikipedia
doesn't really know about it either...
Post by Steve Wart
Croquet always felt awkward to me, partly it was performance, but
it was also because some of the primitives were too primitive.
I agree that this is a problem. VRML is a problem for the same reason
- i.e. it is not clear what the physics should be, nor how we should
recharacterize for a different artistic style, and so on.
yep.

half-assedly ripping off Valve's designs and adapting them to a more
open world structure could make some sense...

then, say, one has something like:
entity {
    classname="target_changelevel"
    map="sghttp://www.someserver.com/foo.map"
    targetname="t34"
}

or with a more Valve-like syntax:

entity {
    "classname" "target_changelevel"
    "map" "sghttp://www.someserver.com/foo.map"
    "targetname" "t34"
}

note (Valve Map Format):
http://developer.valvesoftware.com/wiki/VMF_documentation


my own recent map format design is partly influenced by this format...


or such...
David Barbour
2011-08-10 00:37:51 UTC
Permalink
ideally, we should probably be working with higher-level "entities" instead
of lower-level geometry.
I agree with rendering high-level concepts rather than low-level geometries.

But I favor a more logical model - i.e. rendering a set of logical
"predicates".

Either way, we have a set of records to render. But predicates can be
computed dynamically, a result of composing queries and computing views.
Predicates lack identity or state. This greatly affects how we manage the
opposite direction: modeling user input.
possibly, ultimately all levels should be expressed, but what should be
fundamental, what should be expressed in each map, ... is potentially a
subject of debate.
I wouldn't want to build in any 'fundamental' features, except maybe strings
and numbers. But we should expect a lot of de-facto standards - including
forms, rooms, avatars, clothing, doors, buildings, landscapes, materials,
some SVG equivalent, common image formats, video, et cetera - as a natural
consequence of the development model. It would pay to make sure we have a
lot of *good* standards from the very start, along with a flexible model
(e.g. supporting declarative mixins might be nice).
I am not familiar with the Teatime protocol. apparently Wikipedia doesn't
really know about it either...
Teatime was developed for Croquet. You can look it up on the VPRI site. But
the short summary is:
* Each computer has a redundant copy of the world.
* New (or recovering) participant gets snapshot + set of recent messages.
* User input is sent to every computer by distributed transaction.
* Messages generated within the world run normally.
* Logical discrete clock with millisecond precision; you can schedule
incremental events for future.
* Smooth interpolation of cyclic animations, without discrete events, is
achieved indirectly: the renderer provides a render-time.

This works well for medium-sized worlds and medium numbers of participants.
It scales further by connecting a lot of smaller worlds together (via
'portals'), which will have separate transaction queues.
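The replication scheme summarized above can be sketched loosely (this is an illustration of the idea, not Croquet's actual code): every replica holds a copy of the world, all input is broadcast as timestamped messages, and each replica applies messages in the same logical-time order, so the copies stay identical regardless of arrival order.

```python
# Hypothetical sketch of deterministic replicated simulation.

class Replica:
    def __init__(self):
        self.world = {}   # redundant copy of world state
        self.queue = []   # pending messages: (logical_time, seq, key, value)

    def receive(self, msg):
        self.queue.append(msg)

    def advance_to(self, logical_time):
        """Apply all messages up to logical_time in deterministic order."""
        due = sorted(m for m in self.queue if m[0] <= logical_time)
        self.queue = [m for m in self.queue if m[0] > logical_time]
        for _, _, key, value in due:
            self.world[key] = value

a, b = Replica(), Replica()
msgs = [(2, 0, "door", "open"), (1, 0, "lamp", "on")]
for m in msgs:
    a.receive(m)               # input is sent to every replica...
for m in reversed(msgs):
    b.receive(m)               # ...possibly arriving in a different order
a.advance_to(5)
b.advance_to(5)
```

Sorting by (logical time, sequence number) is what buys determinism; the snapshot-plus-recent-messages recovery in the summary is just this same replay starting from a saved `world`.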

It is feasible to make it scale further yet using specialized protocols for
handling 'crowds', e.g. if we were to model 10k participants viewing a
stage, we could model most of the crowd as relatively static NPCs, and use
some content-distribution techniques. But at this point we're already
fighting the technology, and there are still security concerns, disruption
tolerance concerns, and so on.

Regards,

Dave
BGB
2011-08-10 08:49:39 UTC
Permalink
Post by BGB
ideally, we should probably be working with higher-level
"entities" instead of lower-level geometry.
I agree with rendering high-level concepts rather than low-level geometries.
But I favor a more logical model - i.e. rendering a set of logical
"predicates".
Either way, we have a set of records to render. But predicates can be
computed dynamically, a result of composing queries and computing
views. Predicates lack identity or state. This greatly affects how we
manage the opposite direction: modeling user input.
note that at a conceptual level (in the map format), entities are still
declarative. whether or not they have "identity" is also uncertain.

at runtime, entities have state and identity, but need not necessarily
map 1:1 with those present in the map definition. in my engine, both
types of entity actually have different in-memory types and representations.
Post by BGB
possibly, ultimately all levels should be expressed, but what
should be fundamental, what should be expressed in each map, ...
is potentially a subject of debate.
I wouldn't want to build in any 'fundamental' features, except maybe
strings and numbers. But we should expect a lot of de-facto standards
- including forms, rooms, avatars, clothing, doors, buildings,
landscapes, materials, some SVG equivalent, common image formats,
video, et cetera - as a natural consequence of the development model.
It would pay to make sure we have a lot of /good/ standards from the
very start, along with a flexible model (e.g. supporting declarative
mixins might be nice).
fair enough, though how I imagined it was potentially a little lower-level.

possibly much of the "baseline" would be defined in terms of various
core entity types, matters of basic scene rendering and representation, ...
Post by BGB
I am not familiar with the Teatime protocol. apparently Wikipedia
doesn't really know about it either...
Teatime was developed for Croquet. You can look it up on the VPRI
* Each computer has a redundant copy of the world.
* New (or recovering) participant gets snapshot + set of recent messages.
* User input is sent to every computer by distributed transaction.
* Messages generated within the world run normally.
* Logical discrete clock with millisecond precision; you can schedule
incremental events for future.
* Smooth interpolation of more cyclic animations without discrete
events is achieved indirectly: renderer provides render-time.
sounds vaguely similar to something I had done long ago.
Post by BGB
This works well for medium-sized worlds and medium numbers of
participants. It scales further by connecting a lot of smaller worlds
together (via 'portals'), which will have separate transaction queues.
It is feasible to make it scale further yet using specialized
protocols for handling 'crowds', e.g. if we were to model 10k
participants viewing a stage, we could model most of the crowd as
relatively static NPCs, and use some content-distribution techniques.
But at this point we're already fighting the technology, and there are
still security concerns, disruption tolerance concerns, and so on.
fair enough.

I would likely assume using a client/server model and file-based worlds.
granted, the "level of abstraction" could become an issue.
Casey Ransberger
2011-08-10 03:43:29 UTC
Permalink
This is actually exactly what I mean when I'm talking about turtles. I want
to be able to express a cartoon fairytale castle that uses forced
perspective to look bigger than it is, in as little code as possible. Terrain
seems best arrived at by way of parameters to fractals, but I haven't
figured out a way to do this with man-made structures quite yet (I'm sure
there's a way to do it, and I don't count the Seattle Art Museum, which just
looks like an amorphous blob).
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
--
Casey Ransberger
BGB
2011-08-09 19:45:22 UTC
Permalink
Post by Casey Ransberger
Inline and abridged... and rather long anyhow. I *really* like some of
the ideas that are getting tossed around.
[crap... it has taken me about 4 hours trying to write a response...
grr...].
Post by Casey Ransberger
I almost missed this thread. I'm also hunting that grail. VR for
consumers that isn't lame. CC'd FONC because I think this is
actually relevant to that conversation.
My feeling is, and I may be wrong, that the problems with Second
[sorry in advance, I mostly ended up running off in an unrelated
direction, but maybe it could still be interesting].
You're fine, I do it all the time:)
ok.
Post by Casey Ransberger
IMO, probably better (than centralized servers) is to have
independent world-servers which run alongside a traditional
web-server (such as Apache or similar).
This appears more or less to be the way OpenQwaq works. I'm pretty
sure that I haven't fully comprehended everything the server does and
how that relates to the more familiar (to me) client, though. I note
that the models and such seem to live on the server, and then get sent
(synced?) to the client.
fair enough.

probably a VR server would be mostly like a typical game server, which
would mean:
has connections to all the people currently in the world;
manages basic game physics (gravity, collision detection, ...);
...
Post by Casey Ransberger
one can jump to a server using its URL, pull down its local
content via HTTP, and connect to a server which manages the shared
VR world, ...
Ah, you're talking about running in a web browser? Yeah, that will
probably happen, but the web browser strikes me as a rather poor
choice of life support system for a 3D multimedia collaboration and
learning environment at least as of today... OTOH I guess it solves
the problem of not being able to deploy (e.g.) GPL'd code on platforms
like iOS. I should say that I'm a huge fan of things like Clamato and
Lively Kernel, but I'm not sure the WebGL thing is ready for prime
time, and I'm not sure how something like e.g. Croquet will translate
at this point in time. I also don't have a Croquet implemented in
Javascript lying around anywhere, and it's not exactly a small amount
of work to implement the basis. I don't even understand how all of the
parts work or interact yet...
actually, nope...

URLs and HTTP are not the sole property of web-browsers.

for example, RealPlayer itself makes use of URLs, ...
and a URL is much preferable to, say, having to type in the server's IP
address...


in this case, the VR client would likely see HTTP roughly as a
filesystem, which it would use to pull down any server-specific files.
the one issue though is that HTTP has traditionally done a poor job of
letting the client know which files have changed, hence why
browsers/proxies/... are prone to endlessly re-downloading everything.

a potential extension would be to have some way of detecting the
change-state of a list of files.

say:
POST /game/ls.php HTTP/1.1
header-gunk...

list of files and/or patterns.

HTTP/1.1 200 OK
header-gunk...

myfile.txt 2011-06-24T14:32:19-0700 a0d9adceccbecd7300db02e5fee1b800 ...
textures/foo/bar.png ...
...

or essentially, the filename, its modified date, its current hash, ...
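Generating such a listing server-side is straightforward; a sketch (the `ls.php` endpoint and the exact line format mirror the example above and are not any standard HTTP mechanism; MD5 is used purely for illustration):

```python
import hashlib
import os
import time

# Hypothetical generator for the change-state listing described above:
# one line per file with its modification date and content hash.

def manifest_line(path):
    """One line: filename, ISO-ish modification date, content hash."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    mtime = time.strftime("%Y-%m-%dT%H:%M:%S%z",
                          time.localtime(os.path.getmtime(path)))
    return "%s %s %s" % (path, mtime, digest)

def manifest(paths):
    return "\n".join(manifest_line(p) for p in paths)
```

The client compares each line against its cache and re-downloads only the files whose hash changed (roughly what HTTP ETags do per request, but batched).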


other tasks would be having ways to identify and connect to the
world-server, ...
the traditional method is to have a "well-known port", but a partial
downside is that this requires all services to be provided by a single
server process.

often (at least in the past, when I had done something vaguely similar
based on a modified QuakeWorld server) the approach was to have a separate
server process for each active map. then, when a client wants to enter an
area, it can pull down a list from the server, say:
map/somemap.map 153.229.13.109:2350
map/anothermap.map 153.229.13.111:43015
...

which the client then connects to based on which map it wants.
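Client-side, consuming that per-map listing is a few lines; a sketch (the one-"mapname host:port"-per-line format is taken from the example above, not from any real protocol):

```python
# Hypothetical parser for the per-map server list shown above.

def parse_server_list(text):
    """Map each map name to a (host, port) pair the client can connect to."""
    servers = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        map_name, addr = line.split()
        host, port = addr.rsplit(":", 1)
        servers[map_name] = (host, int(port))
    return servers

listing = """
map/somemap.map 153.229.13.109:2350
map/anothermap.map 153.229.13.111:43015
"""
servers = parse_server_list(listing)
```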
also possible is to use an HTTP request to fetch the server for a map.

GET /game/server.php?map=map/mymap.map
...
HTTP/1.1 200 OK

153.229.13.110:22653

...

"better" could be, however, to use an XMPP-like protocol augmented with
HTTP or HTTP-like capabilities, which would at least save on having a
lot of HTTP-based request/response stuff.

a slight issue though would be the risk of an overly complex protocol
stack, and the possible need to create a more specialized and
"streamlined" protocol, which could handle the same tasks as XMPP, HTTP,
and a world-update protocol (reporting on where everything is in-world,
...).

the risk though is that this could break compatibility with some
existing apps; for example, "wget" could no longer be used to fetch
VR world resources, ...
Post by Casey Ransberger
a partial issue though becomes how much client-specific content to
allow, for example, if clients use their own avatars (which are
bounced along using the webserver to distribute them to anyone who
sees them), and a persons' avatar is derived from copyrighted
material, there is always a risk that some jerkface lawyers may
try to sue the person running the server, or the author of the VR
software, with copyright infringement (unless of course one uses
the usual sets of legal disclaimers and "terms of use" agreements
and similar).
Heh, yes. Fortunately there are places one can go to purchase assets
which can then be used under commercially compatible licenses... to be
honest, though, the avatar I've been testing with is *cough* Tron.
Found it on the web and couldn't really resist. Got to take him out of
there before I can deploy anything, I think, but I Am Not A Lawyer, so
I can't say that I actually know, and like most folks, I'm going to
play it safe... what I do know is that this is slightly embarrassing :O
yeah...

the main worry is mostly lawyers looking for a little money...
after all, their paycheck comes from suing firms with deep pockets
(or, if called in on defense, they still get paid), ...
Post by Casey Ransberger
Working on an original protagonist/avatar for my "game" but she's not
quite done yet. It's all dialed in but the clothes aren't right yet.
Having to learn to use this pile of expensive 3D animation software as
I go... I really wish I could just draw everything using a pencil and
then use a lightbox to transfer the keyframes to cell and paint, but I
don't know how to make hand drawn animation work in 3D. This is
actually why I was curious about the availability of the sources to
SketchPad, because that constraints in 3D idea seems to underly the
automated inbetweening that goes on nowadays and you could do stuff in
3D using a light pen with SketchPad, which seems better than what I
have now in a lot of ways.
yeah...

or a person could draw stuff, and have a program figure out how to map
it to a 3D model or similar, ...

the great problem is that there are both technical challenges and an
uncertain path towards improving the usability of existing tools.

another uncertainty is how to move beyond some of the current
limitations of mesh-modeling, as ideally some sort of solid-modeling,
like say, CSG-based, would allow interesting properties (however, sadly,
traditional CSG offers some inconveniences vs mesh-modeling, such as
needing to make everything out of primitives or other convex solids, ...).
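The CSG idea mentioned above can be sketched in a toy form (predicate-based solids are one textbook way to model CSG; nothing here reflects any particular modeler): solids are point-membership tests, composed with union/subtract.

```python
# Toy sketch of CSG-style solid modeling: a solid is a predicate that
# answers "is this point inside?", and booleans compose predicates.

def box(mins, maxs):
    return lambda p: all(mins[i] <= p[i] <= maxs[i] for i in range(3))

def union(a, b):
    return lambda p: a(p) or b(p)

def subtract(a, b):
    return lambda p: a(p) and not b(p)

# A wall with a doorway cut out of it:
wall = box((0, 0, 0), (128, 8, 64))
door = box((48, 0, 0), (80, 8, 48))
shape = subtract(wall, door)
```

The inconvenience the text mentions shows up immediately: everything must be built up from primitives, whereas a mesh modeler lets you push arbitrary vertices around.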
Post by Casey Ransberger
allowing user-created worlds on a shared server (sort of like a
Wiki) poses similar problems.
the temptation for people to be lazy and use copyrighted
images/music/... in their worlds is fairly large, and nearly any
new technology is a major bait for opportunistic lawsuit-crazy
lawyers...
So it *seems* like the way most businesses deal with this is by taking
UGC down without quarter whenever someone complains. I'll probably end
up having to do something like this. It's still painful because one
then needs to employ people to actually handle that every day. I don't
know, maybe there's some way to use community policing to accomplish
this.
In my view, though, if it happens, it isn't the worst problem in the
world to have. It means someone noticed that your product/service or
what have you exists! And if it was fatal, I don't think YouTube would
still be on the internet. In fact all of that "bad press" probably
helped YouTube get traction.
yeah, pretty much...
Post by Casey Ransberger
yep, and there is a question of what exact purposes these 3D
worlds serve vs more traditional systems, such as good old web-pages.
I think being able to point at things and see by the eyes and the
angle of the head what people are looking at (shared attention) are
probably pretty powerful in general. We have a very disjoint
communication experience... I have to keep track of phone calls,
video, the (grumble) stream of mostly useless information which comes
my way from the social networking site that my friends have
increasingly replaced other, less noisy communication mechanisms with.
yep.
and I am a bit too lazy to bother much with social-networking sites.
I have an account, but don't often log in unless someone sends me a
message or similar.

I much prefer email and usenet, as these can run in a dedicated app and
are considerably less of a hassle:
no logging in, no jumping from place to place, no waiting for page reloads, ...

Thunderbird also has the nifty feature of highlighting folders with new
stuff (since whenever I last looked, ...). IMO, standalone apps have
generally delivered an overall better "quality of experience" than
typical "web-apps" have.
Post by Casey Ransberger
And what kind of error message is "Message is too long." when I didn't
even write enough to constitute a short story? It's hard to actually
make a whole point with the stuff people are using now if you like to
supply supporting arguments.
pretty much...

the usual limits are hard-coded and arbitrary, like say, disallowing a
message longer than 10000 characters or similar. never mind that one
can't provide a link to one's own server without it automatically being
regarded as a malicious link and/or scrubbed, requiring one to break it
up into multiple pieces and have the person re-assemble it on the other
end, ...

so, social networking sites have reinvented email, poorly...
Post by Casey Ransberger
The best way to have a conversation with someone is in person, but
with my friend in Florida, this gets expensive quickly, etc... and I
won't be able to visit my friend in Argentina very much at all, much
less introduce him to friends I've made in Seattle in person real
soon, I'd have to save to do that.
potentially, if generalized VR can manage to escape the land of
"generally unusable crap".

games like WoW manage to be at the same time something like a game and
something like a social-networking site, so general-purpose VR may end
up looking like an open-ended version of WoW.
Post by Casey Ransberger
I think when the 3D displays get cheap, natural user interfaces become
common, and computer animation starts to exit the "uncanny valley"
this stuff will start to look like a pretty good idea. The consumers
I've talked to pretty much tugged my coat and said the adult
equivalent of "Momma, want," but I haven't convinced any business folks
that trying to sell it is a good idea yet, and I have a feeling that's
going to be pretty hard.
yeah...

as-is, it has to integrate well with a mouse+keyboard style UI, but then
faces the problem that most traditional games are designed to be
"screen-hogging" and essentially hinder switching freely between using
them and using other apps, which may be a necessary feature of a
"useful" general-purpose VR.

for example, one might need their VR app to default not to automatically
going full-screen and-or engaging in mouse-grab (sort of the opposite of
typical games).

however, OTOH, drag-to-look-around is generally considered horribly
inconvenient vs the typical mouse-grab free-look behavior of games.

some apps like QEMU and VMware use mouse-grab but have a keyboard
shortcut to release mouse-grab, which works "ok".

ultimately, something "more clever" may be needed.
Post by Casey Ransberger
games are a major application area for 3D, but the more open-ended
world of non-game systems is a much bigger problem, and the
relative merits of 3D there are much less obvious.
Yeah and a new medium is like... it's like pitching that an investor,
who really wants to invest in a nice painting, should instead invest
in a new kind of canvas. Ends up being a hard sell. I looked at the
list of universals and settled on "play" as the best bet, so I focused
on ways I might build a game of some sort in there. If I can get the
tech out there, people will pretty rapidly figure out that it isn't
really a game, but merely contains one. And then people will likely
figure out what it's for on their own, bit by bit. This is my
thinking, anyway, and I may well be wrong.
yep, and meanwhile I have created a bunch of "tech" but not a whole lot
terribly marketable as a game.

so, hell, I have:
3D engine;
DCC tools;
ECMAScript-variant VM;
...

now all I really need is client/server content distribution and
networking and similar, and I probably would have something sort of like
a 3D-engine/browser hybrid...
Post by Casey Ransberger
a partial issue at the time though is potentially the reasonably
high costs of producing decent-quality 3D content (models, maps,
...) in contrast to most other content.
When I realized that I was going to need the paraphernalia of 3D
gaming, there went my savings... and a lot of my time, since I was the
only person I knew who'd done any animation.
yep. I wrote my own tools initially because:
I couldn't find anything "good" in FOSS-land;
I don't have much of a budget;
I personally dislike piracy;
...


sadly, my tools are still not "good" in an objective sense, but they do
mostly what I need of them.
I am also left with the issue of the lack of an ideal 3D model format.

currently, I am mostly using the AC3D format for mesh models (along with
several other special-purpose formats), but it is not "ideal" (I can't
even really extend the format so far as to have it remember texture
positioning without potentially breaking it).

ideally:
text-based;
potentially XML-based, or based on a more streamlined yet extensible syntax;
supports directly representing "game-like stuff";
potentially, usable both as a 3D model format and scene-representation
format;
less stupidly designed (unlike VRML and X3D);
not a big pile of DCC tool and Schema-based crud (unlike Collada);
the format should deal with its content being spread over multiple
independent files;
...

a possible option would be a mix of a PovRay and Valve-400 like format, say:

model {
    mesh {
        name="a cube"
        vertices [
            (16 0 0)
            ...
        ]
        faces [
            (0 -1.0 -1.0)
            ...
        ]
        ...
    }
    mesh {
        ...
    }
    ...
}

(note, technically this would be treated similarly to XML).
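a braces-and-brackets syntax like the above is simple enough that a toy reader fits in a few dozen lines. below is a hypothetical sketch of how such a format might be parsed into nested dictionaries (all names here, like `parse_block`, are mine, and this is not part of any existing tool):

```python
import re

# tokens: quoted strings, identifiers, numbers, and structural punctuation
TOKEN = re.compile(r'"[^"]*"|[A-Za-z_]\w*|-?\d+(?:\.\d+)?|[{}\[\]()=]')

def tokenize(text):
    return TOKEN.findall(text)

def parse_block(toks, i):
    """Parse items until the matching '}'; returns (node_dict, index_of_'}')."""
    node = {}
    while toks[i] != '}':
        name = toks[i]; i += 1
        if toks[i] == '=':                       # key="value" attribute
            node[name] = toks[i + 1].strip('"')
            i += 2
        elif toks[i] == '{':                     # nested child block
            child, i = parse_block(toks, i + 1)
            node.setdefault(name, []).append(child)
            i += 1
        elif toks[i] == '[':                     # list of tuples/numbers
            items, i = parse_list(toks, i + 1)
            node[name] = items
            i += 1
    return node, i

def parse_list(toks, i):
    """Parse items until the matching ']'; returns (items, index_of_']')."""
    items = []
    while toks[i] != ']':
        if toks[i] == '(':                       # (x y z) style tuple
            i += 1
            tup = []
            while toks[i] != ')':
                tup.append(float(toks[i])); i += 1
            items.append(tuple(tup)); i += 1
        else:
            items.append(float(toks[i])); i += 1
    return items, i

def parse(text):
    toks = tokenize(text)
    assert toks[1] == '{'
    node, _ = parse_block(toks, 2)
    return {toks[0]: [node]}
```

as with XML, the result is a generic tree; interpreting "mesh", "vertices", etc. is left to the consumer.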

long ago in the past, I had also done a few things with an s-expression
based 3D model format.

sadly, at best, this would introduce "yet another tool-specific format".
Post by Casey Ransberger
the industry-standard tools are typically expensive, have a steep
learning curve, and still leave content production a rather long
and tedious process (it is, in contrast, much faster and easier to
produce spiffy-looking 2D graphics artwork, or for that matter to
edit documents in a WYSIWYG editor).
I'd even go so far as to say that a lot of this stuff is still rather
counterintuitive, but I know mileage probably varies. And yes, my
kingdom for a way to do 3D animation that resembles what I did with my
pencil, my fountain pens, the cells, the paint, etc. It ends up being
more like a combination of Python, puppeteering and sculpture
nowadays, and I have confidence that I can rock the Python, even
though I've never used it, but the other two are things I don't have
previous experience with.
The automatic inbetweening (constraint-based under the hood, I can only
assume, but these tools usually don't come with source code except for
Blender, and the UI on that one seems strongly resistant to the ways
that I want to interact with it out of the gate) often does weird,
grotesque, physically impossible contortions, causing me to have to go
back and do my own inbetweens manually on a regular basis. I've been
using Blender mostly to convert file formats, and commercial tools to
do the animation work, because I simply didn't have time to learn to
use Blender effectively. The commercial stuff seems a little less...
alien, but it's also not terribly easy to learn to use. For me.
Also, a hand drawn character looks less... creepy than the current
state of the art puppet, even if the puppet is more realistic now.
Uncanny valley. In a long shot I can make the spitting image of a real
life human being surrounded by beautiful, lush, procedural/fractal
terrain, but in the close up, it just makes me want to cry a little
and call my mom.
And the other issue I have is that spending hours to see what the
final render will look like for a single frame isn't economical, so I
have to work with images that don't look anything like the final
product most of the time. This is a pretty big problem for me, and the
only solutions I know about are a) buy or rent a compute cluster, and
b) wait a long time. I can't currently afford to do either, so I have
to work with preview renders in the wrong resolution, the shading
minimized, and basically clothes that look completely tattered and
don't even move right until I'm ready to cross my fingers and pray
that the final render won't come out completely wrong for some reason
I couldn't perceive in the early version: this is a lot like that
awful "get coffee and maybe lunch while my code fails to compile" thing.
yeah...

I did my own tools (personally as I have never had a whole lot of luck
with Blender...).

sadly, my models aren't nearly so good...
I am usually happy enough with "doesn't look like total crap and/or kill
the framerate".
also, I currently lack a "good" automatic LOD algo, as my existing algos
tend to do bad things to the geometry.
Post by Casey Ransberger
also, there is also the general problem of a lack of non-suck free
DCC tools.
yes, I have my own 3D DCC tools, but sadly, they are not exactly
non-suck either...
Really? That's just so cool. Right now I feel like the caveman in 2001
who figures out he can use the one thing to smash the other thing and
gets really excited about ways this might help him eat after he has an
encounter with the monolith. 3D is a tough nut to crack.
harder than implementing the tools is making them usable, or getting
around to doing all of the piles of UI polish which is taken for granted
in typical end-user GUI apps, or having an assorted selection of
load/save formats, ...
Post by Casey Ransberger
another problem at the present time is the general lack of
freely-available 3D artwork, meaning much content production has
to start from the ground-up, from basic cubes and cylinders
(again, this may have something to do with the present sad state
of DCC tools).
+1 and we know what the problem is too, it's still too expensive for
most people to learn, do and give away.
yep.

if 3D could become more readily accessible, likely so too would reusable
artwork.

then again, no longer could sites be sitting around making money trying
to sell "production quality" 3D models of things like office-chairs and
coffee cups...


better yet if tools could get the idea out of their head of trying to
create single massive "scene" files which directly contain *everything*
in the scene, and instead focus on a more "bottom-up" methodology, say,
where one links to other models, and links to textures and shaders, ...

so, rather than "importing" a model into the scene, or "exporting" an
object to a file, one is like:
"give me an instance of the linked-to object here".

so, one links to models and shaders defined in external files, and
possibly has a "palette" of recently or frequently-used objects,
shaders, ...

this seems to reflect a bit of a difference between traditional DCC
tools and games, with games tending more towards the disjoint-files
strategy.
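the "link, don't import" idea can be sketched concretely: the scene stores references to external model files plus per-instance data, and each linked file is loaded once and shared by all its instances. this is a minimal illustrative sketch (the `Scene` class and its names are mine, not any tool's API):

```python
class Scene:
    """A scene built bottom-up from links to external model files."""

    def __init__(self, loader):
        self.loader = loader        # callable: url -> loaded model data
        self.cache = {}             # url -> model, loaded once and shared
        self.instances = []         # placed references, not embedded copies

    def instantiate(self, url, position):
        """'Give me an instance of the linked-to object here.'"""
        if url not in self.cache:
            self.cache[url] = self.loader(url)
        inst = {"model": url, "at": position}
        self.instances.append(inst)
        return inst
```

editing the external file then updates every instance on the next load, rather than leaving stale embedded copies in a monolithic scene file.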
Post by Casey Ransberger
Minecraft has been running with an honor system for awhile now,
and people just don't seem to mess with each other as much there.
yes, but a lot may also be that there is no centralized Minecraft
world, but instead most servers are run on an individual basis and
only admit a limited number of players.
That's an interesting point. I'm going to run in a different direction
on this one, though. There's a service called freeshell which is about
free UNIX shells. They had some trouble with abuse once, and so they
voted to change the rules slightly, so that in order to get email you
had to make a $1 donation (this got rid of almost all of the spam) and
to get a more fully featured shell account, you had to pay... I think
my lifetime ARPA membership cost me like $30 if I remember? This got
rid of more "sophisticated" forms of abuse. To set up certain kinds of
services requires greater contributions. This seems to have worked for
freeshell, and may also work for immersive technologies as well. Note
that e.g. Second Life offers free accounts...
yes, ok.

there is some merit to this, but there is the downside that it keeps out
many potential users as well.
Post by Casey Ransberger
thus, the target for destructive behaviors or vandalism is spread
very thin (people are far less prone to try to vandalize peoples'
personally-run servers).
This is a good point, but it may also be that people are playing
Minecraft with more people they know in real life, which is also a
little bit interesting, no?
yep, albeit some of us don't know many people IRL...
Post by Casey Ransberger
but, in some ways, I think Minecraft represents something
"fundamental", but I don't really know what it is. in many ways,
it has created something thus far reasonably unique in the world
of gaming.
Well, so it does two things really well, it recognizes that people of
all ages can stack up blocks shaped like things in the real world and
then knock them over for fun. It also includes cellular automata that
users can use to figure out how to make complex dynamic behaviors
without having to necessarily be taught (though programming with
Redstone is not the first thing in the game people figure out... you
have to figure out how to survive and dig down to the bottom of the
world in order to obtain it, and programming is relatively hard.)
yep.

a problem with redstone though is its very low density and its
relatively poor performance.

say, if redstone had integrated logic-gates, and could run at, say,
100Hz, and was easier to route, ... potentially more interesting things
could be done. this is unlikely in the near future though.

idle thoughts:
premade logic gates;
"redstone sticks", which could route signals up/down, and could be
placed more like solid blocks;
...
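the "premade logic gates" idea can be sketched as a tiny network where, redstone-like, every gate reads the wire values from the previous tick and drives its output one tick later. this is a toy sketch under my own naming, not any game's actual mechanics:

```python
# premade gates as pure functions on booleans
NOT = lambda a: not a
AND = lambda a, b: a and b
XOR = lambda a, b: a != b

def tick(state, gates):
    """Advance the network one clock step.

    state: {wire_name: bool}
    gates: {output_wire: (gate_fn, (input_wire, ...))}
    Every gate reads last tick's values, giving one-tick propagation delay.
    """
    new = dict(state)
    for out, (fn, ins) in gates.items():
        new[out] = fn(*(state[w] for w in ins))
    return new

# half-adder built from premade gates: sum = a XOR b, carry = a AND b
adder = {"sum": (XOR, ("a", "b")), "carry": (AND, ("a", "b"))}
state = {"a": True, "b": True, "sum": False, "carry": False}
state = tick(state, adder)
```

a NOT gate wired to its own input makes a ring oscillator, toggling every tick, which hints at why a fixed, faster clock (the hypothetical 100Hz) matters for anything sequential.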
Post by Casey Ransberger
I also find voxels interesting, but there is an uncertainty where
the future of 3D lies (moving on from the present system of
polygonal geometry). voxels are one option, but not the only option.
I like fractals. But it ends up being a mix in all likelihood. Voxels
are great for doing clouds at a distance, etc. But up close clouds
made out of little cubes aren't very convincing. Minecraft overcomes
this by making low resolution textures and huge voxels a kind of
fashion statement.
yep.

the problem with fractals is that they don't really allow "direct
expression" as with the other technologies.
Post by Casey Ransberger
in my case, I build partly on a different system: CSG.
Had to google that and it showed up below the fold. You mean this, is
that correct?
http://en.wikipedia.org/wiki/Constructive_solid_geometry
yep, put a link to the above at the end.

sadly, there is a lot more that can be done with CSG than just what the
article mentions (especially if combined with solid and soft-body
physics modeling).
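as a concrete illustration of the basic CSG operations (union, intersection, difference), solids can be sketched as signed distance functions, where f(p) < 0 means the point is inside. these helpers are illustrative only, not any engine's API, and the subtraction gives a conservative distance bound rather than an exact distance:

```python
import math

def sphere(cx, cy, cz, r):
    # distance from p to the center, minus the radius
    return lambda x, y, z: math.dist((x, y, z), (cx, cy, cz)) - r

def box(hx, hy, hz):
    # axis-aligned box centered at the origin, with given half-extents
    return lambda x, y, z: max(abs(x) - hx, abs(y) - hy, abs(z) - hz)

def union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def intersect(a, b):
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

def subtract(a, b):
    # a minus b: inside a AND outside b (a "negative primitive")
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

def inside(f, p):
    return f(*p) < 0

# a cube with a spherical cavity cut from its center
shape = subtract(box(8, 8, 8), sphere(0, 0, 0, 4))
```

because the solids stay "solid" (defined at every point, not just the surface), cutting and modifying them is just composing more functions, which is exactly the property mesh models lack.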
Post by Casey Ransberger
although CSG is not exactly new either, people are far from using
CSG to its full ability (like voxels, objects built from CSG are
"solid", and so it is possible to cut/modify/... CSG-based objects).
also, the initial up-front cost of CSG (memory, rendering
overhead, ...) is much lower than with voxels (and, CSG can
usually be fairly easily converted into polygonal geometry for
rendering/...). so, when a piece of CSG geometry is altered, its
polygonal geometry can be fairly easily rebuilt (unlike with a
mesh model, where there is nothing "in" the model beyond its
external geometry).
a downside of CSG vs voxels is that voxels have all of their
complexity up-front, so however the scene is changed the voxels
remain approximately the same cost. however, supporting alteration
of CSG-based geometry steadily increases the amount of internal
geometry (creating new primitives and cutting holes with negative
primitives, ...). so, a Minecraft-like game world couldn't be so
easily created with CSG.
I have before had ideas for trying to combine CSG and voxels, but
this is itself a complex problem area.
even more compelling would be if both could be combined with FEM
and similar, allowing for structures which could be built
initially from CSG primitives, broken apart or modified at a fine
level of detail (as with voxels), and subject to mechanical forces
(bending, flexing, fracturing under strain, shearing, ...).
It's so much fun being an eternal beginner! Okay so now you're talking
about this, right?
http://en.wikipedia.org/wiki/Finite_element_method
yep.
Post by Casey Ransberger
It sounds fantastic?
yep, except for the complexities of getting acceptable performance on
consumer-grade hardware...

CSG is not so bad, but FEM is a much bigger problem.
Post by Casey Ransberger
how to implement it?...
You got me.
yeah...

basic aspects can be imagined, but trying to do the "putting it all
together" thing in my head, I am left to wonder how exactly all of these
parts could be put together, which is often a worrying matter.
Post by Casey Ransberger
how to make it able to be used at *any* real scale on current HW
(voxels+FEM is not exactly a lightweight combination).
What if the "simulation overhead" could be distributed to every
machine currently participating? One might even apply an idea like
ocular occlusion in order to avoid simulating things that no one is
present to perceive at the cost of some realism (if a tree falls in
the forest, and there's no one there to compute it, it can't affect
the butterfly that flaps its wings and makes it rain in Singapore, to
mix some metaphors badly.)
possibly, but the big problems with this are:
doesn't work in single-player or if only a few people are involved;
network bandwidth and latency...

something like the above would necessarily involve pushing a lot of data
around very quickly and ideally without all that noticeable of delays.
this is a problem.

even client/server is a bit of a problem.

if we were all on a single big-ass LAN with 1000BaseT or similar, maybe
it would work.
not so good though if a large number of clients are logging in with
1.5Mbps ADSL, where it is hard enough getting YouTube videos buffered in
a timely manner.


a lot of games do a lot of things like this with "client-side physics",
for example, things like the behavior of ragdolls, particle effects, ...
are typically done purely on the client.

however, there is a potential risk associated with the possibility of
players ending up essentially in different versions of a world, say, if
on each client, the physical models of the world begin to notably diverge.
Post by Casey Ransberger
then again, there may be a partial way around it: the system is
built top-down from CSG.
at the high-level, it is just CSG primitives, and this is how the
world is initially built;
CSG geometry may be animated/... through more traditional means
(treated as polygonal geometry and subjected to weighted vertex
transformations, ...);
as-needed, the geometry may be "decomposed" into analogous voxel
geometry (if done well, the transformation can be handled a single
primitive at a time and in a reasonably transparent manner).
and, if most of the world remains as CSG, then the overall cost
can be kept lower, and in regions where there is severe
alteration, it transforms into voxels.
So my fractal planet takes some voodoo that I don't yet understand in
order to be converted to a particular level of detail (the mesh and
map) that I can use with conventional 3D applications, and I've also
been thinking that multiple representations for different kinds of
graphics and physics related computations might be an interesting line
of inquiry, but sadly I'm presently missing math that I need to
envision what that looks like. It's nice to see someone else who
understands this stuff better than I do thinking along these lines,
especially since I currently have no plan for what I'm going to do
when I find out that a real time physics engine is what I actually
need, and I understand that these are not small or easy to build
either based on the bit of web searching that I was able to do.
yep...

sadly, the current "state of the art" is still more like:
"we can build a tall building out of sticks and then knock it down".

but, moving much past this point (and doing it in real-time) is,
computationally, a fairly complex problem.
Post by Casey Ransberger
an example of this would be, say, one creates a big sphere of
"sand" in the sky, and it proceeds to fall due to gravity (at this
point, the engine sees a plain sphere). upon contact with the
ground, high-forces (exceeding certain defined constants) cause
the sphere to break apart into voxels, which are then subjected to
internal physics (moving forwards at their prior speeds), and then
stopping once there is nowhere for them to go. potentially, the
engine leaves this as-is, or may begin trying to "recompress"
these regions of voxels back into CSG geometry where possible
(say, a big cube of voxels all of the same size is replaced with a
"cube" primitive mapped to the given material type).
Yep, that sounds really cool, and my gut says reducing to primitives
wherever possible might help make performance of higher resolution
volumetric simulations more feasible? Although I suppose if all you
have is voxels, then you only have a single primitive, and number of
primitives in play is supposed to affect render time? *Sigh* need more
math.
the problems with voxels are:
firstly, a huge 3D array:
I want 16x16x16 voxels, ok, 4096 elements (16kB at 32 bits/voxel);
now I want 256x256x256, ok 16M elements (64MB at 32 bits/voxel).
say, 1024x1024x1024, 1G elements (4GB at 32 bits/voxel).

these are not good odds...

secondly:
huge processing requirements.

every element has to be processed, somehow.
again, an x^3 curve isn't ideal.

even if voxels, as a model, are fairly dead-simple, having huge numbers
of them is a cost.
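the x^3 scaling above can be checked directly; a minimal helper (my own, for arithmetic only):

```python
def voxel_cost(n, bytes_per_voxel=4):
    """Elements and bytes for a dense n x n x n grid at 32 bits/voxel."""
    elements = n ** 3
    return elements, elements * bytes_per_voxel

# 16^3   -> 4,096 elements, 16 kB
# 256^3  -> ~16M elements, 64 MB
# 1024^3 -> ~1G elements, 4 GB
```

doubling the resolution in each axis multiplies both memory and per-element processing by 8, which is why dense grids hit a wall so quickly.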


Minecraft manages to scale as well as it does because, technically, the
maps are very low-density.
even then, it is still a strain to make it perform well on conventional
hardware.


CSG objects have an entirely different set of problems.


a partial compromise would be to use voxels, but use a hierarchical
representation, and "compress" simple patterns into solid spaces (making
them cheaper to render/skip/...).

say: the scene is recursively divided into groups of 16x16x16 elements.

this would mean, say, a 1024x1024x1024 space could be encoded in
considerably less space (say, a single block, or 16x16x16 voxels). at
each level, altering the space could result in further expansion, down
to the minimum level, and the engine could aggressively "recompress" spaces.

however, scene complexity would strongly impact memory requirements and
performance.

another problem is that there is still a "minimum voxel size", itself
posing some issues (essentially, one has integer coordinates all over
again).
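the 16x16x16 chunking idea can be sketched with a flat dict of blocks (a simplification of the full recursive hierarchy, with all names my own): only blocks containing a non-default voxel are allocated, and blocks that empty out are "recompressed" away.

```python
class SparseVoxels:
    """Sparse chunked voxel store: uniform (default) space costs nothing."""
    BLOCK = 16

    def __init__(self, default=0):
        self.default = default
        self.blocks = {}    # (bx, by, bz) -> {(lx, ly, lz): value}

    def _split(self, x, y, z):
        b = self.BLOCK
        return (x // b, y // b, z // b), (x % b, y % b, z % b)

    def get(self, x, y, z):
        key, local = self._split(x, y, z)
        return self.blocks.get(key, {}).get(local, self.default)

    def set(self, x, y, z, value):
        key, local = self._split(x, y, z)
        if value == self.default:
            blk = self.blocks.get(key)
            if blk:
                blk.pop(local, None)
                if not blk:            # recompress: drop empty blocks
                    del self.blocks[key]
        else:
            self.blocks.setdefault(key, {})[local] = value
```

as the text notes, cost now tracks scene complexity (occupied blocks) rather than total volume, but the minimum voxel size remains, and heavily-edited regions expand toward the dense-grid worst case.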


another problem is that mixing voxels with non-trivial physics
simulation is itself a complex issue (it involves ability to transfer
groups of voxels to free-moving objects, perform collision detection,
and potentially re-merge stationary objects, ...).
Post by Casey Ransberger
and, from the point of view of the user, they just saw a big ball
of sand fall from the sky and splat on the ground, leaving a
sand-pile... (itself subject to being readily deformed, say the
user shoots at it and puffs of sand fly off and pile up elsewhere,
and ones' weapons deform the sand pile and leave their bullets
embedded in the sand, ...).
That's what you want yeah. Stuff that behaves like real stuff:)
pretty much, this would be a "holy grail" of gaming physics at the
present time, a world which looks and behaves something like the
external reality.

if one can mimic both the physical realism and scale, and complexity, of
reality, then one will have done something impressive.

however, this may well require a sort of top-down "lazy-evaluated"
world, where things don't yet exist until interacted with.


it is like, if you wandered into an office, and all of the
desks/chairs/... just magically appear just before you see them, and try
to rummage around in a desk just to have all its contents appear as one
tries to open the drawer, and have an item in the desk transform from a
static piece of geometry to a real object once they grab it, ...

but, again, not an easy problem...
Post by Casey Ransberger
an analogy would be the current partial integration between bitmap
and vector graphics, but what if this were 3D?...
granted, all this is mostly speculation, and not really something
I can pursue at present, and it may still be some years before
this sort of thing is really practical on consumer-grade
hardware... (by "practical" I mean: can be applied in real-time to
large-scale game-worlds, in contrast to the reasonably small
scales typically seen in most voxel+FEM+physics examples thus far).
Imagine something on the scale of an MMORPG and think about ways you
can distribute the math across as many machines as possible. Is what
I've been thinking for doing simulation stuff, anyway. I'm not sure
this is a real advantage but I bet you could at least get yourself a
lot of computing power.
yes. the big killer of course being bandwidth and latency.
a computer can compute far more locally than it can shove over the
typical "thin straw" that is its internet connection.

it is like, one can have 1000 fire-hoses, but one is not going to be
able to have a 10x or 100x powered fire-hose by routing all of the flow
through light-gauge rubber tubing.
Post by Casey Ransberger
also, the comparably lower costs of plain CSG and rigid-body
physics make them currently a much more attractive target, as
these work much better on current HW.
And with most real time physics engines, objects have a habit of
randomly exploding due to the lack of accuracy in the simulation,
which invariably means having to manually install dampening, which can
also cause objects to implode, right? I read a paper about that
somewhere when I first looked into doing physics. There was just no
way I was paying for a commercial engine, etc... and then OpenQwaq
ended up being released, which looked more like what I wanted out of
the box than the "game engines."
yeah, these are problems.

both Havok in the past, and also my own physics engine, have this
problem (as well as my own engines' problem which is that the physics is
"rubbery" and large forces can cause objects to push into or through
each other, and objects will often slide around slightly and "float" in
the ground until the engine flags them as inactive and locks them into
place, ...).

I have seen some simulations though where it appears this problem has
become far less of an issue in recent years (with Bullet and ODE and
similar).

also fairly common (in games using Havok, ...) was to only use fancy
physics for things like debris and random objects, using more
"conventional" game physics (sliding AABBs and similar) for most other
things.

many games which tried to use fancy-physics for everything were well
known for nasty physics bugs interfering with gameplay.
Post by Casey Ransberger
OTOH, Minecraft works as well as it does because its voxels are
very low density and there are almost no large-scale physics (it
is not possible, for example, to build a large structure, and have
it break loose and fall over, ...).
Right, and this is both an artistic constraint that produces
interesting art, and probably a real limitation to the kinds of art
that one can create in Minecraft.
yep.
Post by Casey Ransberger
imagine if, for example, one could build some large electronic
device in a Minecraft like system, print it out, and have a real
and working piece of hardware...
Uh... heh. I wanted to do something like this with my hardware
project, but I looked at what it takes to build an ALU with Redstone
and thought "yeah this thing really needs turtles." Which is what got
me thinking about Logo.
it could very well be possible given the existence of conductive
and semi-conductive polymers, ...
I think yeah just the FPGAs will be good enough for now.
well, FPGAs will be higher-performance. it may depend some on the sort
of device one is trying to make though.
Post by Casey Ransberger
or such...
I'll spend some time reading these. Thank you.
http://en.wikipedia.org/wiki/Voxel
http://en.wikipedia.org/wiki/Constructive_solid_geometry
http://en.wikipedia.org/wiki/Finite_element_method
http://en.wikipedia.org/wiki/Computational_fluid_dynamics
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
--
Casey Ransberger
David Barbour
2011-08-09 23:17:24 UTC
Permalink
On Tue, Aug 9, 2011 at 4:50 AM, Casey Ransberger
Post by Casey Ransberger
I think being able to point at things and see by the eyes and the angle of
the head what people are looking at (shared attention) are probably pretty
powerful in general.
I think you need to balance that against ability to actually see what the
person is looking at. To see both an avatar and the target would imply an
oblique angle on both, which could be a phenomenal waste of screen
real-estate.

In an avatar-less model, you might share attention by other means: tags,
flags, subscriptions, RSS, other annotations. In bulletin boards, and
mailing lists, we get shared attention by simply pushing to the top that
which people have recently commented on.

I think the main benefit of avatars would be support for facial language -
recognizing irritation, sarcasm, surprise, et cetera. These benefits are not
realized with today's technology, except in games such as Heavy Rain that
make significant use of them.
Post by Casey Ransberger
The best way to have a conversation with someone is in person,
I think it depends on the nature of the conversation. There are significant
advantages to written conversations, such as: the ability to spend more time
thinking about our responses, the ability to operate at different times, and
having a written record you can search and reference.

A slightly more formal language can be very valuable; I think an interesting
experiment would be an argument forum where arguments are mapped out with
premises and reduction rules, and the computer system helps us identify
plausibility, internal consistency, and locate (and link) relevant arguments
and counter-arguments for the premises. It could change how we build
arguments.

Speaking in person has advantages of body language (which is surprisingly
expressive to most people) that helps the speaker recognize where
clarification is necessary. But a state-of-the-art 3D avatar won't help much
there. We should train a camera on the user and push facial twitches and
gestures across the network.
Post by Casey Ransberger
Also, a hand drawn character looks less... creepy than the current state of
the art puppet, even if the puppet is more realistic now. Uncanny valley.
I wonder if cel shading can help a lot with bridging the uncanny valley. It
gets you the look and feel of hand-drawn art while allowing state-of-the-art
techniques in developing the 3D models.
Post by Casey Ransberger
I like fractals. But it ends up being a mix in all likelihood. Voxels are
great for doing clouds at a distance, etc. But up close clouds made out of
little cubes aren't very convincing. Minecraft overcomes this by making low
resolution textures and huge voxels a kind of fashion statement.
I suspect you could do some sort of 'anti-aliasing' for voxels, perhaps
using GPU shaders. Use of GPU shaders, for example, can turn an ugly bowl of
triangles into a rather pretty tree (
http://the-witness.net/news/2011/06/witness-trees/).

Beyond that, level-of-detail projections are also quite feasible.
Post by Casey Ransberger
how to make it able to be used at *any* real scale on current HW
(voxels+FEM is not exactly a lightweight combination).
What if the "simulation overhead" could be distributed to every machine
currently participating?
For gaming, that can become a security/cheating risk that is rather
difficult to reason about.

I also think distribution-for-performance should not be a first choice.
There are probably a ton of useful things we can do with level-of-detail,
shaders, high-level culling, etc.
Post by Casey Ransberger
ocular occlusion in order to avoid simulating things that no one is present
to perceive
World physics should be fully deterministic and cheap to compute in the
absence of external influence. This includes NPC schedules and such. When a
metaphor flaps its wings, we should know exactly what the 'game-state'
consequences will be - if there are any.

If we can reduce the world model to a fixpoint continuous integral - using
algebraic signals of time - we can actually compute world state far more
cheaply than would be possible with piecewise discrete-state simulations.
But, either way, computing game state is often a lot cheaper than computing
the animations.

Consider, for example: when two NPCs converse (a sort of 'collision' event
between NPCs) we might model them as exchanging information, objects,
and germs. That would be the 'game-state' consequence of the collision. But
the actual animation might involve handshakes, vocal exclamations, speech
generation. All of that could be elided in the absence of an observer.
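the split can be sketched directly: the game-state consequence of an NPC "collision" is always applied, while the animation events exist only when an observer is present. this is a toy illustration (the names are mine, not from any engine):

```python
def npc_meeting(a, b, observed):
    """Two NPCs meet: merge their knowledge (the game-state consequence),
    and produce animation events only if someone is watching."""
    merged = a["known"] | b["known"]          # exchange information
    a["known"] = merged
    b["known"] = set(merged)
    # animation layer: elided entirely in the absence of an observer
    return ["approach", "handshake", "speech", "part"] if observed else []
```

because the state update is deterministic either way, eliding the animation never changes the world the player eventually sees, which is the whole point of making it a level-of-detail concern.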

The NPCs themselves could similarly be elided if they are part of the
'background' (i.e. to make a city look busy) rather than relevant to game
state.

The distinctions between game state modeling and animation can thus be a
level-of-detail concern.

We can carefully add a dose of indeterminism to the game by modeling an
external controller - a game master, or more than one - and giving some of
those to humans or expert systems. In a multi-player interactive fiction,
game-significant NPCs not under the control of a player are under the
control of game masters.

Regards,

Dave
Casey Ransberger
2011-08-10 01:08:17 UTC
Permalink
On Tue, Aug 9, 2011 at 4:17 PM, David Barbour <***@gmail.com> wrote:

The best way to have a conversation with someone is in person,
Post by David Barbour
I think it depends on the nature of the conversation. There are significant
advantages to written conversations, such as: the ability to spend more time
thinking about our responses, the ability to operate at different times, and
having a written record you can search and reference.
Well put. This is an excellent point, and I stand *quite* corrected. I wrote
this while slightly irked that a message I sent via a popular textual
communication medium was "too long."

I still prefer a mailing list for most of the stuff I like to talk about:
case in point.
--
Casey Ransberger