Discussion:
Jolt, statics?
Michael Haupt
2008-04-01 13:59:14 UTC
Permalink
Dear all,

is it somehow possible to have, in Jolt, the equivalent of the
following C code?

void f() {
    static int x = 0;
    ...
    x++;
    ...
}

Best,

Michael
--
Dr.-Ing. Michael Haupt ***@hpi.uni-potsdam.de
Software Architecture Group Phone: ++49 (0) 331-5509-542
Hasso Plattner Institute for Fax: ++49 (0) 331-5509-229
Software Systems Engineering http://www.swa.hpi.uni-potsdam.de/
Prof.-Dr.-Helmert-Str. 2-3, D-14482 Potsdam, Germany

Hasso-Plattner-Institut für Softwaresystemtechnik GmbH, Potsdam
Amtsgericht Potsdam, HRB 12184
Geschäftsführung: Prof. Dr. Christoph Meinel
Michael Haupt
2008-04-01 14:13:48 UTC
Permalink
Hi again,

sorry for sending two mails... there is another, related question. Is
it possible to have, again in Jolt, some notion of "here", i.e., could
I somehow do something like this:

(set here@ 100)

where here@ compiles down to the precise place in memory where the
corresponding instruction is placed? (Sorry for asking so confusingly; I
don't know how to put it better...)

The above code would write to memory in a place very close to the code
corresponding to the set "statement".

Best,

Michael
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Gavin Romig-Koch
2008-04-01 15:30:04 UTC
Permalink
Post by Michael Haupt
sorry for sending two mails... there is another, related, question. Is
it possible to have, again in Jolt, some notion of "here", i.e., could
No problem. In this case the answer is the same: it's not built in, but
you can build it. As you probably know, Jolt code gets translated into
machine code for the machine it's running on. You're going to need to
make your "here" thing machine-independent if you want it to be
generally useful.


-gavin...
Michael Haupt
2008-04-02 09:52:12 UTC
Permalink
Hi Gavin,
Post by Gavin Romig-Koch
No problem. In this case the answer is the same: it's not built in,
but you can build it.
thanks. You seem to have a thorough impression of how such an
implementation might look ("no problem") - could you please give me
some more hints? :-)

Best,

Michael
Gavin Romig-Koch
2008-04-02 17:14:37 UTC
Permalink
Post by Michael Haupt
Hi Gavin,
Post by Gavin Romig-Koch
No problem. In this case the answer is the same: it's not built in,
but you can build it.
thanks. You seem to have a thorough impression of how such an
implementation might look ("no problem") - could you please give me
some more hints? :-)
Sorry, no. I meant my "no problem" as a response to your "sorry
for sending two mails".

How to implement this in Jolt would depend on what you expected to be
able to do with "here".

You gave the example:

(set here@ 100)

which would "write to memory in a place very close to the code
corresponding to the set "statement".", but I don't understand what you
mean by this. Is it your intent that this would give you the ability
to change the machine code generated by Jolt? Or is this meant to be a
way to reserve and refer to memory within the machine code for use as
variables? Or something else entirely?

Perhaps a larger example, or a reference to a similar feature in
another language, or just an explanation of the larger problem you are
trying to solve?


-gavin...
Michael Haupt
2008-04-03 09:54:06 UTC
Permalink
Hi Gavin,
Is it your intent that this would give you the ability to change the
machine code generated by Jolt? Or is this meant to be a way to
reserve and refer to memory within the machine code for use as
variables?
the ultimate purpose would be the latter; changing the machine code
would then also be possible, but that's not what I want to do. It's
rather about approaching something like inline caches (other than
those already present in the id model).
Best,

Michael
Gavin Romig-Koch
2008-04-03 20:16:46 UTC
Permalink
Post by Michael Haupt
Hi Gavin,
Is it your intent that this would give you the ability to change the
machine code generated by Jolt? Or is this meant to be a way to
reserve and refer to memory within the machine code for use as
variables?
the ultimate purpose would be the latter; changing the machine code
would then also be possible, but that's not what I want to do. It's
rather about approaching something like inline caches (other than
those already present in the id model).
Well, on modern hardware and modern OSs, I really don't think nestling
the inline caches amongst code makes sense - it plays havoc with the
CPU's memory cache lines, and disallows any use of the code in a
multi-threaded or multi-processed way. Much better, I think, to
allocate the caches out of thread-local or process-local memory. But ...

This is a sketch of how I would go about this, as a first attempt:

I'd use "actives" to provide the interface from the code that uses these
inline caches to the code that implements them. Actives would allow the
users of these inline caches to treat them like variables, independent of
how they are implemented under the covers. An active is very much like a
syntax macro, except that whereas a syntax macro gets "expanded" when
the syntax macro gets called like a function, an active gets "expanded"
when it is either referenced or assigned to like a variable. Look in
Compiler.st to see how actives are implemented; I'll also give you an
example of their use below.

If you just wanted to implement the caches in process-lifetime memory
(like "C" local statics), then when a new inline cache is defined, you
will need to get access to the current compiler with a syntax macro, get
access to the GlobalEnvironment through that compiler, add a new
variable to the GlobalEnvironment that won't be confused with
user-defined variables, and then create the actives that will reference
this global variable. See Compiler.st for the implementation of Compiler
and GlobalEnvironment.

If you want to nestle the inline caches into the machine code itself,
it's a bit more complicated. When a new inline cache is defined, you
need to use the code in CodeGenerator-XXX.st to generate the needed
space for the inline cache and the branch around it, and then create the
active to reference the space created. The active would use "Label"s to
keep track of where in the actual code the inline cache ends up.
Currently labels are only used for branch and jump targets, so you might
need to invent a new kind of label for tracking a storage location, but
maybe you can just add some new methods to the current implementation.
Again, see Compiler.st for the implementation of Label. Slightly more
interesting and more general would be to invent a new intermediate
Instruction (see Instruction.st) that meant "reserve space in the code
for data", and then expand that Instruction in each of the different
CodeGenerators. Slightly more interesting still would be to change the
CodeGenerator to be able to put this reserved space before the start of
the function (or after the end) to avoid having to jump around the space.

Yet even more interesting, if you're willing to alter the code on the
fly, would be to actually alter the code, rather than have the
code examine a variable. This would be largely the same as the above,
except that instead of altering a variable when one wanted to change the
inline cache, one would actually alter the call or jump instruction that
used the inline cache.


-gavin...
Alessandro Warth
2008-04-03 20:34:46 UTC
Permalink
Hey guys,
Once closures become part of Jolt, we'll be able to write Michael's
function,

void f() {
    static int x = 0;
    ...
    x++;
    ...
}


as

(define f
  (let ((x 0))
    (lambda ()
      (... (set x (+ x 1)) ...))))


which I think is pretty nice - i.e., easy for the programmer to write, and
confines x in f, just as you'd expect. I wonder if this already works with
Michael FIG's patch...

Cheers,
Alex
Michael FIG
2008-04-05 07:04:38 UTC
Permalink
Hi,
Post by Alessandro Warth
(define f
(let ((x 0))
(lambda ()
(... (set x (+ x 1)) ...))))
which I think is pretty nice - i.e., easy for the programmer to write, and
confines x in f, just as you'd expect. I wonder if this already works with
Michael FIG's patch...
No, my dynamic-closure.patch is only for creating BlockClosure
objects, and they become invalid once the closed-over variables go out
of scope (since the variables live on the stack).

I do have an idea for implementing proper closures, though. I'd
introduce two new syntax elements:

"stack-lambda" would be like lambda except it can capture lexical
variables even if they aren't GlobalVariables. The name is to remind
you that once any of the captured variables are popped off the stack,
all hell will break loose if you try to use the closure again. It
would be really nice to have an "alloca" (hint, hint) to implement
this feature without any memory leaks or dependence on a garbage
collector.

"heap-lambda" would be more clever (bletcherous?). When it is used,
it would discover which lexically captured variables are not allocated
on the heap, then the enclosing "lambda"s and "let"s would detect
these variables and recompile themselves by moving those variables to
the heap (whether as an anonymous global, as "x" in Alessandro's
example above, or an indirection to a GC_alloc'ed storage location).
I'm not sure whether to just fail compilation if the second pass
results in different captured variables, or to keep iterating until
the compilation succeeds.

This is clever because it would give a way for the user to specify
closures with no garbage collection overhead (stack-lambda or
heap-lambda with only anonymous globals). It's a bletcherous hack
because it uses multiple compiler passes.

I'll work on trying these out over the next few days, as time permits.
I surely don't know if this idea qualifies as a "cool closure
implementation" that Ian was hoping for. The idea of multiple
compiler passes makes me somewhat nauseous, but I don't see any other
way to implement heap-lambda short of abandoning stack frames entirely
and allocating everything on the garbage-collected heap (a la Scheme).

Michael (Haupt): would Alessandro's construct satisfy your needs? If
so, then under my proposal it would look like:

(define f
  (let ((x 0)) ;; first stack-allocated, then forced into heap storage
    (heap-lambda ()
      (... (set x (+ x 1)) ...))))

Let me know if this would work for you,
--
Michael FIG <***@fig.org> //\
http://michael.fig.org/ \//
Michael Haupt
2008-04-05 16:52:37 UTC
Permalink
Hi Michael,
Post by Michael FIG
Michael (Haupt): would Alessandro's construct satisfy your needs? If
so, then under my proposal it would look like: [...]
I think it would; but in any case, I will also investigate Gavin's
"actives" proposal.

Best,

Michael
Kjell Godo
2008-04-07 13:51:09 UTC
Permalink
An active variable is a variable that expands when it is referenced or
assigned to. Now where would that be useful? Any examples?

What happens to the value of the variable when this expansion takes place?

Is code what goes into the variable on a reference?

Is this like in Scheme where variables can contain functions?

Can you give an example of how active variables are defined and used?
Or a link.

I looked up active variable but didn't get anything.
Kjell Godo
2008-04-07 13:52:29 UTC
Permalink
by reference I meant assignment.

is code what goes into the variable on an assignment?
Michael Haupt
2008-04-07 14:00:13 UTC
Permalink
Hi Kjell,

are you actually asking me? I'm pretty new to actives, and know no
other place to look for them than the one Gavin suggested.

Best,

Michael
Gavin Romig-Koch
2008-04-08 23:59:27 UTC
Permalink
Sorry, I said I'd include an example of actives in my previous message,
then forgot to include it.
Post by Kjell Godo
An active variable is a variable that expands when it is referenced or
assigned to. Now where would that be useful? Any examples?
I didn't explain it well, see below for another attempt, and some examples.
Post by Kjell Godo
What happens to the value of the variable when this expansion takes place?
Is code what goes into the variable on a reference?
I think I answer these two question below.
Post by Kjell Godo
Is this like in Scheme where variables can contain functions?
No, though Coke/Jolt variables are exactly like Scheme variables in this
respect.
Post by Kjell Godo
Can you give an example of how active variables are defined and used?
Or a link.
I looked up active variable but didn't get anything.
Actives are not documented in the main Coke paper (i.e.,
http://piumarta.com/software/cola/coke.html). I found them while
exploring the implementation of Jolt, in Compiler.st. In trying to
understand how actives work, it's useful to know how Coke/Jolt syntaxes
(for lack of a better name) work. Syntaxes _are_ documented in that
paper, and there are working examples in the file syntax.k in the Jolt
implementation. It is also helpful to understand how backquote works.
Backquote is also documented in that paper.

Syntaxes are function-like in the sense that they only get expanded when
they appear at the front of a list. So if we define a bit of syntax like:

(syntax begin
  (lambda (node compiler)
    `(let () ,@[node copyFrom: '1])))

We can only use "begin" in the places where a function call can go (at
the beginning of a form):

(begin
  (printf "%d" 10)
  (printf "and %d\n" 20)
)

Putting "begin" someplace else will result in an "undefined: trans:begin"
error.

Actives, on the other hand, are variable-like in that they can be placed
anywhere variables can be placed.

Actives are defined with an "active" special form. For example:

(active A
  (lambda (name compiler) 'DUMMY)
  (lambda (name compiler) '(addrof DUMMY))
)

(define DUMMY 0)

(set A 100)
(printf "%d\n" A)
; prints: 100

(set A 4)
(printf "%d\n" A)
; prints: 4

This (not very useful) active special form creates an active "A" which
simply defines a wrapper around "DUMMY". That is, assignments to A are
assignments to DUMMY, and reads from A are reads from DUMMY.

The active special form takes one or two lambdas (functions) as arguments.

The _second_ lambda is called (at compile time) when the active is the
target of a set special form. It is called with the name of the active
and the current compiler, and it must return an expression whose result
after evaluation is the address of the memory location that the set
special form will change. If the active special form doesn't have a
second lambda, the active can't be used as the target of a set (it's
read-only).

The _first_ lambda is called (at compile time) when the active is used in
any other place (is read/fetched from). It is also called with the name
of the active and the current compiler, but it must return an expression
whose value is used as the value of the active.

This next example is the same as the last with two printf's stuck into
each lambda to see when each lambda is executed, and when the code
returned by each lambda is executed:

(active B
  (lambda (name compiler)
    (begin
      (printf "compile time read of %s\n" [[name printString] _stringValue])
      `(begin
         (printf "exec time read of %s\n" [[(quote ,name) printString] _stringValue])
         DUMMY)))
  (lambda (name compiler)
    (begin
      (printf "compile time set of %s\n" [[name printString] _stringValue])
      `(begin
         (printf "exec time set of %s\n" [[(quote ,name) printString] _stringValue])
         (addrof DUMMY))))
)


(set B 100)
; prints: compile time set of #B
; and prints: exec time set of #B
(printf "%d\n" B)
; prints: compile time read of #B
; and prints: exec time read of #B
; and prints: 100

(set B 4)
; etc
(printf "%d\n" B)
; etc




It's important to understand that these printfs can be arbitrary code:
code which examines the current compiler, perhaps makes changes to it,
allocates memory, whatever. The following example creates "variables"
in malloc'ed memory, keeps track of this memory in a dictionary, and
uses an active to make the "variables" look like regular variables to
the rest of the code:

(define IdentityDictionary (import "IdentityDictionary"))
(define DICT [IdentityDictionary new])

[DICT at: 'A put: (malloc 4)]
(active A
  (lambda (name compiler) '(long@ [DICT at: 'A]))
  (lambda (name compiler) '[DICT at: 'A])
)
(set A 100)

[DICT at: 'B put: (malloc 4)]
(active B
  (lambda (name compiler) '(long@ [DICT at: 'B]))
  (lambda (name compiler) '[DICT at: 'B])
)
(set B 200)

(printf "A in DICT=%d\n" A)
(printf "B in DICT=%d\n" B)

(set A 40)
(set B 50)
(printf "A in DICT=%d\n" A)
(printf "B in DICT=%d\n" B)


This final example wraps the previous example up in a new syntax, so
that creating variables in malloc'ed memory is as simple as creating
normal variables:



(define DICT [IdentityDictionary new])
(syntax define-in-DICT
  (lambda (node compiler)
    (let ((active-name [node second])
          (initial-value-expr [node third]))
      `(all
         [DICT at: (quote ,active-name) put: (malloc 4)]
         (active ,active-name
           (lambda (name compiler) '(long@ [DICT at: (quote ,active-name)]))
           (lambda (name compiler) '[DICT at: (quote ,active-name)])
         )
         (set ,active-name ,initial-value-expr)
       )
    )))

;; all -- begin without creating a new scope
;; (not the best implementation)
(define _all
  (lambda (node idx)
    (if [idx >= [node size]]
        `()
        `((or ,[node at: idx] 1)
          ,@(_all node [idx + '1])))))

(syntax all
  (lambda (node compiler)
    `(and ,@(_all node '1))))



(define-in-DICT A 100)
(define-in-DICT B 200)

(printf "%d\n" A)
(printf "%d\n" B)

(set A 40)
(set B 50)
(printf "%d\n" A)
(printf "%d\n" B)

Hope this is helpful.



-gavin...
Michael Haupt
2008-04-09 07:00:11 UTC
Permalink
Hi Gavin,
Post by Gavin Romig-Koch
[...]
Hope this is helpfull.
I think "enlightening" sounds more appropriate. Thank you so much! :-)

Best,

Michael
Michael Haupt
2008-04-05 16:51:02 UTC
Permalink
Hi Gavin,
Post by Gavin Romig-Koch
Post by Michael Haupt
the ultimate purpose would be the latter; changing the machine code
would then also be possible, but that's not what I want to do. It's
rather about approaching something like inline caches (other than
those already present in the id model).
Well, on modern hardware and modern OSs, I really don't think
nestling the inline caches amongst code makes sense - it plays havoc
with the cpu's memory cache lines, and disallows any use of the code
in a multi-threaded or multi-processed way. Much better, I think,
to allocate the caches out of thread local or process local memory.
that sounds sensible; but let me poke at the "havoc" thing a bit... I
have heard this stated informally several times. Is there some source
of related measurement information? Given that inline caching was
introduced to improve performance (and is still in use), it would be
interesting to see some actual benchmark results that nail this down.

Related question: does threaded interpretation still make sense these
days, what with all those sophisticated branch prediction units
around? Again: are there reliable sources?

Anyway, I'm trailing off. Sorry, but this is an interesting topic. :-)
Post by Gavin Romig-Koch
[...]
Your suggestions sound worthwhile, thanks a lot; I will have a look at
the places in the source code you mentioned. It seems you have
forgotten that "actives" example you announced, though. ;-)

Best,

Michael
Gavin Romig-Koch
2008-04-09 00:32:11 UTC
Permalink
Post by Michael Haupt
that sounds sensible; but let me poke at the "havoc" thing a bit... I
have heard this stated informally several times. Is there some source
of related measurement information? Given that inline caching was
introduced to improve performance (and is still in use), it would be
interesting to see some actual benchmark results that nail this down.
My knowledge of CPU (hardware) memory caches comes from Ulrich Drepper's
paper on the topic:

http://people.redhat.com/drepper/cpumemory.pdf

There are probably other papers out there more specific to implementing
very late bound languages, but this isn't an area I've looked at much.
Post by Michael Haupt
Related question: does threaded interpretation still make sense these
days, what with all those sophisticated branch prediction units
around? Again: are there reliable sources?
Oh, by "multi-threaded" I meant multiple threads of execution running
the same machine code (as in POSIX threads), not threaded interpretation
(as in one of the ways to implement Forth-like languages). If your
inline caches are changing the actual machine code, and multiple threads
are executing the same machine code, you can end up with race conditions
if you're not very careful.

But you might find the answer to your question in the work Anton Ertl
(and others) have done:

http://www.complang.tuwien.ac.at/projects/forth.html
Post by Michael Haupt
Your suggestions sound worthwhile, thanks a lot; I will have a look at
the places in the source code you mentioned. It seems you have
forgotten that "actives" example you announced, though. ;-)
Yes, I forgot, sorry. I've sent another note about actives.


-gavin...
Michael Haupt
2008-04-09 07:17:34 UTC
Permalink
Hi Gavin,
Post by Gavin Romig-Koch
http://people.redhat.com/drepper/cpumemory.pdf
that looks like a very interesting reference, thanks a lot.
Post by Gavin Romig-Koch
There are probably other papers out there more specific to
implementing very late bound languages, but this isn't an area I've
looked at much.
I presume that if inline caching were really that much of a bugger, it
would not be used to the degree it is in (probably amongst many
others) the VisualWorks Smalltalk VM, which was implemented and is
maintained by very smart people.

The id object model accompanying Ian and Alex's paper on the object
model has both global and inline caching, and having inline caching
alone improves performance more than having global caching alone - at
least that's what the very informal measurements I just ran yield.
Post by Gavin Romig-Koch
Post by Michael Haupt
Related question: does threaded interpretation still make sense
these days, what with all those sophisticated branch prediction
units around? Again: are there reliable sources?
Oh, by "multi-threaded" I meant multiple threads of execution
running the same machine code (as in POSIX threads), not threaded
interpretation (as in one of the ways to implement Forth like
languages). [...]
Ah, I was not at all asking about multithreading. I saw the question of
threaded interpretation in conjunction with the question of inline
caching. Both are mechanisms that were originally devised to help CPUs
in doing their jobs better without facing too many branch
mispredictions or cache misses, respectively.
Post by Gavin Romig-Koch
But you might find the answer to your question in the work Anton Ertl
(and others) have done:
http://www.complang.tuwien.ac.at/projects/forth.html
I know their work; unfortunately, there do not seem to be very recent
results produced on, say, Pentium IV CPUs. (For one paper that
appeared in 2003, they made measurements regarding branch prediction
and threaded interpretation, but those were run on a simulated MIPS.)
Best,

Michael
Bert Freudenberg
2008-04-09 14:36:05 UTC
Permalink
Post by Michael Haupt
Hi Gavin,
Post by Gavin Romig-Koch
http://people.redhat.com/drepper/cpumemory.pdf
that looks like a very interesting reference, thanks a lot.
Indeed.
Post by Michael Haupt
Post by Gavin Romig-Koch
There are probably other papers out there more specific to
implementing very late bound languages, but this isn't an area I've
looked at much.
I presume that if inline caching was really that much of a bugger,
it would not be used to the degree it is in (probably amongst many
others) the VisualWorks Smalltalk VM, which was implemented and is
maintained by very smart people.
This does not imply that if the VW OE were designed from scratch, taking
current CPU trade-offs into account, it wouldn't result in something
rather different. Also, it runs on a lot of CPUs so I am not sure to
what extent platform-specific optimizations are employed.

- Bert -
Michael Haupt
2008-04-10 07:49:42 UTC
Permalink
Hi Bert,
Post by Bert Freudenberg
Post by Michael Haupt
Post by Gavin Romig-Koch
http://people.redhat.com/drepper/cpumemory.pdf
that looks like a very interesting reference, thanks a lot.
Indeed.
plus a little follow-up from John Rose's blog: http://blogs.sun.com/jrose/entry/serious_and_funny_about_side
Post by Bert Freudenberg
Post by Michael Haupt
I presume that if inline caching was really that much of a bugger,
it would not be used to the degree it is in (probably amongst many
others) the VisualWorks Smalltalk VM, which was implemented and is
maintained by very smart people.
This does not imply that if the VW OE were designed from scratch,
taking current CPU trade-offs into account, it wouldn't result in
something rather different. Also, it runs on a lot of CPUs so I am
not sure to what extent platform-specific optimizations are employed.
It would be interesting to know.

Perhaps one can only find out by taking a (simple?) VM, implementing
all (or some of) the different caching and threading schemes, and doing
some thorough HPM measurements. If only HPM measurements were
conveniently possible without those darn kernel patches... :-/

Best,

Michael
Gavin Romig-Koch
2008-04-01 15:15:09 UTC
Permalink
Post by Michael Haupt
is it somehow possible to have, in Jolt, the equivalent of the
following C code?
void f() {
static int x = 0;
...
x++;
...
}
There isn't anything built in to do this, but it's all extensible, so you
could build something to do it yourself. Look at "actives"; I suspect that
is the easiest way to build this sort of thing.

In case it isn't obvious to you, all a C compiler will do with the above
is the equivalent of changing it to:

static int XXX_f_x = 0;
void f() {
...
XXX_f_x++;
...
}

where "XXX" is some sequence of characters that ensures the name won't
conflict with any user defined names.


-gavin...
Ian Piumarta
2008-04-09 18:27:14 UTC
Permalink
Post by Michael Haupt
Dear all,
is it somehow possible to have, in Jolt, the equivalent of the
following C code?
void f() {
static int x = 0;
...
x++;
...
}
(define Integer (import "Integer"))
(define calloc (dlsym "calloc"))

(syntax static
(lambda (node compiler)
[Integer value_: (calloc 1 [[node second] _integerValue])]))

(define f
(lambda ()
(let ((x (static 4)))
(incr (long@ x)))))

(printf "%d\n" (f))
(printf "%d\n" (f))
(printf "%d\n" (f))
(printf "%d\n" (f))
(printf "%d\n" (f))