From: David Abrahams (dave_at_[hidden])
Date: 2003-04-21 06:12:44
"Vesa Karvonen" <vesa_karvonen_at_[hidden]> writes:
> I have now implemented a subset of the language I described in my
> previous post. It turns out that my extrapolation was not completely
> accurate: it is definitely possible to get reasonable performance from
> the interpreter. This is especially true when using the built-in
> operators of the language.
Do you mean +, *, etc? How can you use those? I know you know they
don't produce tokens!
> For example, arithmetic is basically just as fast in the interpreter
> as in the Boost.PP library (using BOOST_PP_[ADD|SUB|MUL]).
Unsurprising, I guess.
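(For readers unfamiliar with the trick being discussed: the preprocessor has no evaluating `+` or `*`, so libraries like Boost.PP build arithmetic out of token-pasting lookup tables. A minimal sketch of the principle, with hypothetical `PP_`-prefixed names rather than the actual Boost.PP macros:)

```c
/* Hypothetical sketch: preprocessor "arithmetic" without evaluating
   + or *.  Each result is looked up by pasting the operand onto a
   table of object-like macros; Boost.PP works on the same principle,
   just with much larger tables. */
#define PP_INC(n) PP_INC_I(n)   /* indirection so n expands first */
#define PP_INC_I(n) PP_INC_ ## n
#define PP_INC_0 1
#define PP_INC_1 2
#define PP_INC_2 3
#define PP_INC_3 4

/* "add 2" expressed as iterated increment */
#define PP_ADD_2(n) PP_INC(PP_INC(n))
```

The indirection through `PP_INC_I` matters: `##` suppresses argument expansion, so without it `PP_INC(PP_INC(1))` would paste `PP_INC_PP_INC` instead of expanding the inner call first.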
> Of course, it is possible to implement faster arithmetic on the
> preprocessor than the current Boost.PP implementation. Even with
> much faster arithmetic, interpretation speed, for simple arithmetic,
> should still be well within the 2x-10x range.
OK. Not that I do all that much arithmetic...
>>>For one thing, the languages in which most interpreters are
>>>implemented are specifically chosen for their ability to express
>>>the efficient and low-level operations needed to make the
>>>interpreters go fast -- a property which is manifestly not
>>>applicable to the preprocessor.
> Actually, it is true that the C preprocessor has been intentionally
> crippled, but it has the basic characteristics of an almost ideal
> language for the implementation of *embedded* interpreters.
What do you mean by "embedded interpreter"?
> The main reason is that such an interpreter can relatively easily
> manipulate C preprocessor data (including macro identifiers and
> macro parameter lists). Memory management is fully automatic. It is
> also relatively easy to express limited pattern matching using the C
> preprocessor, which makes the manipulation of algebraic or inductive
> data types, such as syntax trees, relatively easy.
This I've gotta see.
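(The pattern matching being described presumably relies on token pasting to dispatch on a constructor tag. A minimal, hypothetical sketch of the idea on a cons-list representation, not Vesa's actual implementation:)

```c
/* Hypothetical sketch: "pattern matching" on an inductive data type
   encoded as tagged tokens: NIL, or CONS(head, tail).  Pasting the
   dispatcher's prefix onto the leading tag selects the right case. */
#define IS_NIL(x) IS_NIL_ ## x
#define IS_NIL_NIL 1            /* matches the NIL case */
#define IS_NIL_CONS(h, t) 0     /* matches CONS(h, t); arguments discarded */

#define HEAD(x) HEAD_ ## x
#define HEAD_CONS(h, t) h       /* destructure a CONS cell */
```

Here `IS_NIL(CONS(1, NIL))` pastes to `IS_NIL_CONS(1, NIL)`, whose expansion plays the role of the CONS branch of a match expression.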
> An interesting feature of the interpreter is that it implements a wide
> range of control structures. First of all, the interpreter is
> essentially tail-recursive, which means that recursive loops do not
> "blow the stack". Like the previously implemented evaluators for the
> lambda-calculus (BTW, I now have much more efficient versions of those
> evaluators than the versions I posted earlier to the list), the
> interpreter also has lexical binding and anonymous functions. The
> interpreter has a special form 'exit(expr)', which immediately stops
> the interpreter and expands to the evaluated value of 'expr'. ('exit'
> can be used for producing readable error messages when a program
> detects an error (e.g. in an assertion).) The interpreter also
> directly supports continuations with the special form 'call_cc(f)'
OK, stop right there. Somebody get this man into quarantine! You are
one sick dude, Vesa!
> (exactly like call/cc in Scheme). I'm currently finishing the
> implementation of direct support for exception handling
> (i.e. 'try(expr,handler)' and 'throw(expr)' special forms). The
> interpreter also has a special form 'eval(expr)' (exactly like eval
> in Scheme).
Once you have continuations, that stuff is easy.
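(One plausible reason continuations can be had "for free" here: in a macro-driven interpreter, a continuation can simply be the name of the macro that receives the result, so programs that never capture one pay nothing. A hypothetical continuation-passing sketch, not the interpreter's actual machinery:)

```c
/* Hypothetical sketch: continuation-passing style in the preprocessor.
   A continuation is just a macro name; APPLY hands it the result. */
#define APPLY(k, x) k(x)
#define ID(x) x                               /* top-level continuation */
#define TWICE(x) ((x) * 2)                    /* some other continuation */
#define ADD_K(a, b, k) APPLY(k, (a) + (b))    /* CPS-style addition */

/* ADD_K(1, 2, ID)    expands to (1) + (2)
   ADD_K(1, 2, TWICE) expands to (((1) + (2)) * 2) */
```

Since the continuation argument is only ever pasted into an expansion on use, programs that stick to `ID`-style direct results incur no extra expansion steps, consistent with the zero-cost claim below.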
> It has been very easy to implement these features in the
> interpreter. For example, the support for continuations requires
> only a couple of macros - and it has zero performance hit on
> programs that do not use continuations.
Don't all programs use continuations? I thought that was the most
fundamental control structure...
> The same holds for exception handling. It would probably be
> practically impossible to provide similar constructs (e.g. exit,
> exception handling and continuations) in full generality in a C
> preprocessor combinator library like Boost.PP.
Probably. When are you going to submit this sick piece of work for
review? Can you put it in the sandbox?
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk