From: Vesa Karvonen (vesa_karvonen_at_[hidden])
Date: 2003-04-21 12:44:44
>>It is definitely possible to get reasonable performance from
>>the interpreter. This is especially true when using built-in operators
>>of the language.
>Do you mean +, *, etc? How can you use those? I know you know they
>don't produce tokens!
By operators, I mean special syntactic constructs recognized by the
interpreter. Normal function application currently uses the syntax 'ap(f,p)'
('ap' is short for 'apply' #0 #1 #2). Operators, like 'times', which is used
for multiplication, use the syntax 'times(l,r)'. If 'times' were a regular
function, it would currently be called like 'ap(ap(times,l),r)'.
#0) Technically, you could say that 'ap' is the operator for function
application.
#1) I like to keep the syntax of the most fundamental constructs of the
language short. The reason is pragmatic. Once you have learned the basics of
the language, these fundamental constructs become entirely obvious to you -
every program you write uses them many many times. If the syntax is verbose,
it may help someone who looks at the language for the first time. If the
syntax is short, it helps to keep more complex programs readable. Hmm... it
would also be possible to support both long and concise syntax at the same
time.
#2) The syntax 'ap(f,p)' is currently chosen over '(f,p)', because 'ap(f,p)'
seems to be easier and faster to destructure - but I'll have to test this.
>>For example, arithmetic is basically just as fast in the interpreter
>>as in the Boost.PP library (using BOOST_PP_[ADD|SUB|MUL]).
>Unsurprising, I guess.
Basically, yes. For built-in operators, the interpreter has a certain fixed
overhead, which is the time it takes to analyze the expressions. After the
analysis is done, the operators
execute at full speed. As long as the interpreter offers a well chosen set
of built-in operators, which match the computational needs of the users,
performance can be kept at reasonable levels. If the operator 'times' were
implemented as a recursive function, using primitive operations such as
'succ(x)', it would be much much slower.
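To illustrate the cost model, here is an analogy in ordinary C (this is not
the interpreter's actual implementation, and the function names are only
illustrative): multiplication defined recursively from a successor operation
does work proportional to the product of its arguments, while a built-in
multiplication is a single step.

```c
/* Analogy in plain C, not the interpreter itself: multiplication built
   only from succ and recursion does O(l*r) steps of work, whereas a
   built-in '*' is one operation. */
static unsigned succ(unsigned x) { return x + 1; }

/* plus(l, r): apply succ to l, r times. */
static unsigned plus(unsigned l, unsigned r) {
    return r == 0 ? l : plus(succ(l), r - 1);
}

/* times(l, r): add l to the accumulator, r times. */
static unsigned times(unsigned l, unsigned r) {
    return r == 0 ? 0 : plus(times(l, r - 1), l);
}
```

A call to times(l, r) performs roughly l*r increments, which is why a
built-in 'times' operator pays off so dramatically over a succ-based one.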
Relatively speaking, the slowest operators are the simplest ones. In Boost.PP,
BOOST_PP_INC(x) is very fast. In the interpreter, the corresponding
'succ(x)' operator is significantly slower, because the overhead of the
interpreter dominates. This means that it is important to have built-in
operators for basic computational processes such as mapping, folding, etc...
>OK. Not that I do all that much arithmetic...
It could potentially be more important. I could imagine building much more
complex program generators. For example, remember the lexer and parser
generators we discussed earlier. With fast built-in operators for maps and
sets (like maps and sets in STL), it just might be possible to write a
practical lexical analyzer generator. Arithmetic could be more useful in
other kinds of program generators.
>What do you mean by "embedded interpreter"?
The term is probably not very good. I was thinking about embedded languages.
An embedded, or scripting, language is something that you can use to extend
(or program) a system basically without modifying the system itself (like
elisp in Emacs). I'll get back to this later.
>>, which makes the manipulation of algebraic or inductive data types,
>>such as syntax trees, relatively easy.
>This I've gotta see.
The basic technique is very simple. First of all, syntax trees can be
directly encoded in the same form that they are written externally. For
example, the preprocessing token sequence 'if(or(C,D),plus(X,Y),minus(X,Y))'
is the representation for the following syntax tree:
           if
         /  |  \
       or  plus  minus
      / |  /  \  |  \
     C  D  X  Y  X  Y
In order to destructure such syntax trees, one can define two macros for
each operator (if, or, plus, minus, C, D, X, Y,...). The first of these
macros simply extracts the operator tag or type:
#define STX_TAG_if(c,t,e) if // if is ternary
#define STX_TAG_or(l,r) or // or is binary
#define STX_TAG_plus(l,r) plus
#define STX_TAG_minus(l,r) minus
#define STX_TAG_C sym // Symbols are nullary
#define STX_TAG_false val // Values are nullary
// ... and so on for other operators ...
The second macro extracts the children (or "opens" the node):
#define STX_OPEN_if(c,t,e) c,t,e
#define STX_OPEN_or(l,r) l,r
#define STX_OPEN_plus(l,r) l,r
#define STX_OPEN_minus(l,r) l,r
#define STX_OPEN_C C
#define STX_OPEN_false false
// ... and so on for other operators ...
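These two macro families can be exercised with a standard C compiler. The
following compilable sketch defines only a 'plus' node; 'CAT' and 'STR' are
common helper idioms used here for checking, not identifiers from the
library:

```c
#include <string.h>

/* The usual token pasting helper: expands its arguments, then pastes. */
#define CAT(a, b) CAT_(a, b)
#define CAT_(a, b) a##b

/* Tag and "open" macros for a binary 'plus' node, as in the text. */
#define STX_TAG_plus(l, r) plus
#define STX_OPEN_plus(l, r) l, r

/* Stringize after full macro expansion, so results are inspectable
   at run time. */
#define STR(...) STR_(__VA_ARGS__)
#define STR_(...) #__VA_ARGS__

/* CAT(STX_TAG_, plus(X, Y)) pastes to STX_TAG_plus(X, Y), which
   expands to the tag 'plus'. */
static const char *tag = STR(CAT(STX_TAG_, plus(X, Y)));

/* CAT(STX_OPEN_, plus(X, Y)) likewise yields the children 'X, Y'. */
static const char *children = STR(CAT(STX_OPEN_, plus(X, Y)));
```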
It is now possible to destructure the top node of a syntax tree quite easily
by using the token pasting operator:
(Using the common token pasting helper 'CAT(a,b)', which expands its
arguments and pastes them together:)
#define CAT(a,b) CAT_(a,b)
#define CAT_(a,b) a##b
#define FOO(syntax_tree) \
  APPLY(CAT(FOO_, CAT(STX_TAG_, syntax_tree)), (CAT(STX_OPEN_, syntax_tree)))
  ^^^^^           ^^^^^^^^^^^^^^^^^^^^^^^^^^   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  #4              #1                            #2
#define FOO_if(c,t,e) It's an if with children c, t and e
#define FOO_or(l,r) It's an or with children l and r
// ... and so on for all operators ...
Consider now what happens when we expand the C preprocessor expression
'FOO(if(true,0,1))':
1. The part underlined as #1 expands to 'STX_TAG_if(true,0,1)' and then to
'if'.
2. The part underlined as #2 expands to 'true,0,1' (in parentheses).
3. The outer 'CAT(FOO_, ...)' then expands to 'FOO_if'.
4. The macro APPLY, underlined as #4, is simply defined as:
#define APPLY(m,p) m p
The idea is that it applies a computed function-like macro to a computed
parameter list. The use of APPLY (or some similar macro that essentially
ensures the correct order of macro expansion) is necessary due to the C
preprocessor's macro expansion rules.
5. The entire expression then expands into: 'FOO_if (true,0,1)'...
6. ...which then expands into: 'It's an if with children true, 0 and 1'
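The whole expansion can also be checked with a standard C compiler. In this
compilable sketch the helper names and the string-literal body of 'FOO_if'
are illustrative choices, made only so the final result can be verified at
run time:

```c
#include <string.h>

#define CAT(a, b) CAT_(a, b)
#define CAT_(a, b) a##b

/* Tag and open macros for the ternary 'if' node, as in the text. */
#define STX_TAG_if(c, t, e) if
#define STX_OPEN_if(c, t, e) c, t, e

/* APPLY makes the computed macro name meet its parenthesized
   argument list in the right order. */
#define APPLY(m, p) m p

#define FOO(tree) \
    APPLY(CAT(FOO_, CAT(STX_TAG_, tree)), (CAT(STX_OPEN_, tree)))

/* Expands to a string literal here, purely so we can check the result. */
#define FOO_if(c, t, e) "an if with children " #c ", " #t " and " #e

/* FOO(if(true,0,1)) -> APPLY(FOO_if, (true,0,1)) -> FOO_if (true,0,1) */
static const char *result = FOO(if(true, 0, 1));
```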
As I was hopefully able to show, this is really quite simple.
OK, as you can see, there are some macro identifiers with lower-case
characters, but I think that the technique is still quite safe: the prefix
of the macro identifier (e.g. 'BOOST_ORDER_STX_TAG_') can be made long
enough that accidental macro replacement is highly unlikely. If the
lower-case characters do cause trouble, it is possible to get rid of them,
but I think that the lower-case identifiers, like 'if', make the code
significantly more readable.
>>[tail-recursive,lexical binding,exit,eval,...] The interpreter also
>>directly supports continuations with the special form 'call_cc(f)'
>OK, stop right there. Somebody get this man into quarantine! You are
>one sick dude, Vesa!
;) Honestly, these features aren't nearly as difficult as they may sound. I
was quite surprised to see how easy they were to implement. The simplicity
of the interpreter makes all these features practically trivial.
>>[special support for exceptions]
>Once you have continuations, that stuff is easy.
True, exceptions could basically be implemented directly in terms of
continuations. However, continuations in the interpreter (not in general)
are much less space-efficient than direct support for exceptions. I haven't
really profiled/benchmarked these features, but the continuation technique
uses significantly more space, while the exception technique uses more macro
expansions.
>Don't all programs use continuations? I thought that was the most
>fundamental control structure...
Yes, you can implement other control structures in terms of continuations.
You could say that all programs use continuations. However, not all
languages make it possible to manipulate continuations as first class
values. A continuation basically represents the remainder of computation
(waiting for a value). To simplify a bit, but not much, you could think that
in a running C++ program, a continuation would be represented by the program
stack and all CPU registers. To capture a continuation, you could take a
snapshot of the state of the program stack (or stack of activation records)
and the CPU registers (including the instruction pointer). This is
basically how it works in the interpreter: the call_cc(x) special form
simply captures the current program state. It is very simple (and takes a
relatively large amount of space). (In real interpreters and compilers there are many
ways to optimize the capturing of continuations so that it is not necessary
to copy all activation records.)
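The snapshot analogy can be made concrete in ordinary C with setjmp/longjmp,
which capture a one-shot, escape-only approximation of a continuation - far
weaker than first-class call_cc, but already enough to express exception-like
control flow. This is purely an illustration, not the interpreter's
mechanism:

```c
/* setjmp snapshots enough program state to resume at that point;
   longjmp "invokes" that captured state. This gives a one-shot,
   upward-only approximation of a continuation, which is enough to
   build exception-like control flow. */
#include <setjmp.h>

static jmp_buf handler;            /* the captured "continuation" */
static volatile int thrown_value;  /* carries the "exception" value */

static int risky(int x) {
    if (x < 0) {
        thrown_value = 42;
        longjmp(handler, 1);       /* "throw": resume at the setjmp */
    }
    return x * 2;
}

static int with_handler(int x) {
    if (setjmp(handler) != 0)      /* nonzero: a longjmp landed here */
        return -thrown_value;      /* the "catch" branch */
    return risky(x);               /* the "try" branch */
}
```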
>[...] When are you going to submit this sick piece of work for
>review? Can you put it in the sandbox?
> I was just wondering if or when you were planning on posting the code
>for this PP-evaluator for your extended lambda calculus?
>I look forward to tinkering around with it if u do ;-)
Yes, I plan to publish the code at some point. The evaluators for the pure
lambda-calculus are now basically done. It is still probably possible to
optimize them, but not unless the structure of the evaluators is changed
significantly (i.e. the stack machine would probably have to be replaced
with something even better suited to the task of interpretation).
The interpreter for the extended lambda-calculus, which I have named Order,
because it makes it easy to program with higher-order functions, is not yet
complete enough to be published - and probably won't be totally complete in
the near future. I'll try to get something complete enough for public
scrutiny in the next week or so.
I'm also very interested in seeing the Chaos library from Paul. I do not
really know the philosophy of Chaos to be sure, but I think that Order and
Chaos could benefit from and complement each other. The Order-interpreter
has been made possible by several discoveries in PP-metaprogramming by Paul.
Order should be handy for expressing complex functions or the logic of
complex generators, but it isn't really designed for tasks such as
repetition of tokens or the construction of comma separated lists. Such
repetition constructs need to be provided outside of the Order-interpreter.
A variation of something like the BOOST_PP_FOR() macro could be designed
that would directly support using Order-programs for the predicate and
operation. In this setting, you could think that the Order-interpreter is
embedded into the repetition construct.
Also, Paul has mentioned that Chaos implements efficient high precision
arithmetic. The high precision arithmetic of Chaos could probably be used
from the Order-interpreter, which could, for some parts, basically act as a
convenient high-level front-end to the efficient low-level facilities of
Chaos. But like I said, I don't know enough about Chaos to be sure.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk