From: Vesa Karvonen (vesa_karvonen_at_[hidden])
Date: 2003-04-16 02:30:06


David Abrahams:
>Vesa Karvonen:
>>The estimate that a nearly optimal interpreter might be just 2x-10x
>>slower than the language in which the interpreter is implemented is
>>based on the simple observation that such numbers hold for modern
>>interpreters.
>
>Heh, I'm having trouble with that extrapolation.

Well, I certainly hope that my extrapolation would not be too inaccurate. At
any rate, I'm confident that the interpreter would make preprocessor
programming much easier.

>For one thing, the languages in which most interpreters are implemented
>are specifically chosen for their ability to express the efficient and
>low-level operations needed to make the interpreters go fast -- a property
>which is manifestly not applicable to the preprocessor.

Note that I'm comparing the speed of the interpreter relative to the speed
of the C preprocessor. If the C preprocessor is slow, then the interpreter
will be slow as well. In my opinion, the C preprocessor actually is a rather
low-level macro preprocessor, but that is irrelevant when one measures the
speed of interpretation relative to the host language (here, the C
preprocessor).

Consider the following Boost PP macro expression:

  BOOST_PP_ADD(99,100)

Consider how this would be expressed using an imaginary interpreter:

  PP_EVAL( plus(99,100) )

The 'plus' above is not a macro. It is an operator of the language
interpreted by the PP_EVAL macro. Consider how PP_EVAL would go about
evaluating an expression involving the plus-operator. Essentially, the
interpreter analyzes the expression and determines that it needs to compute
the sum of 99 and 100. This analysis takes some number of macro expansions.
After the analysis, the interpreter can compute the sum using the
BOOST_PP_ADD-macro, which itself takes many macro expansions to evaluate.

The question is how many macro expansions it would take a fast interpreter
to reach the point where it can simply use BOOST_PP_ADD. If the number of
extra macro expansions required is small, it should be quite possible to
get reasonably close to the speed of the plain C preprocessor macro.
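
As a rough illustration of how cheap that dispatch could be, here is a
stripped-down sketch. The EVAL- and EVAL_plus-macros are hypothetical and
ignore nested expressions and argument evaluation entirely:

  #include <boost/preprocessor/arithmetic/add.hpp>

  /* Hypothetical sketch: dispatch on the operator name by token pasting. */
  #define EVAL(e) EVAL_ ## e
  #define EVAL_plus(x,y) BOOST_PP_ADD(x,y)

  EVAL( plus(99,100) ) /* => EVAL_plus(99,100) => BOOST_PP_ADD(99,100) => 199 */

Here the dispatch itself costs only a couple of expansions before the work
falls through to BOOST_PP_ADD.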

>For another, the only interpreters I know of which begin to approach
>the 2x-10x ratio are using some form of JIT compilation, which will
>never be possible to express in PP (though perhaps the underlying PP
>interpreter itself could use JIT).

It is true that it is not possible, for example, to define new macros during
interpretation. However, it is certainly possible to transform (i.e. compile)
expressions into a form that is suitable for fast interpretation. For
example, suppose that the language supported pattern matching in all binding
constructs, so that one could write the identity function as `fn(X,X)' and a
function that selects the first element of a pair as `fn((A,D),A)'. Now, it
takes time to analyze every binding construct to see whether it is a plain
symbol or a tuple. However, instead of doing that again each time the
function is called, the interpreter could record whether destructuring is
required (e.g. by rewriting the expressions `fn(X,X)' -> `fn_simple(X,X)'
and `fn((A,D),A)' -> `fn_destruct((A,D),A)').
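
A rough sketch of how that classification could be done with the
preprocessor itself (all names here are hypothetical; the
parenthesis-detection trick needs C99 variadics):

  /* Hypothetical sketch: detect whether a pattern is parenthesized
     (requires C99 variadics) and rewrite the fn-form accordingly. */
  #define PROBE(x) x, 1,
  #define CHECK(...) CHECK_N(__VA_ARGS__, 0,)
  #define CHECK_N(x, n, ...) n
  #define IS_PAREN_PROBE(...) PROBE(~)
  #define IS_PAREN(p) CHECK(IS_PAREN_PROBE p)

  #define PRIMITIVE_CAT(a,b) a ## b
  #define CAT(a,b) PRIMITIVE_CAT(a,b)

  #define FN_COMPILE(p,b) CAT(FN_COMPILE_, IS_PAREN(p))(p,b)
  #define FN_COMPILE_0(p,b) fn_simple(p,b)
  #define FN_COMPILE_1(p,b) fn_destruct(p,b)

  FN_COMPILE(X, X)     /* => fn_simple(X, X) */
  FN_COMPILE((A,D), A) /* => fn_destruct((A,D), A) */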

>>The point is that it is possible to implement an interpreter for a
>>fairly expressive and easy to use programming language using the C
>>preprocessor.
>
>Isn't that what the Boost PP lib already is?

No, I wouldn't call the Boost PP library an interpreter. Would you?

>It may not be as "formally pure" as your toy, and it may leak a few
>implementation details to the user, but it doesn't seem like a stretch
>to think of it as a programming language unto itself.

It can be difficult to differentiate between languages and libraries. The
following discussion is only meant to explain how the current Boost PP
library and the kind of interpreter I'm talking about here relate
differently to the C preprocessor.

The way I see it, the relationship between the Boost PP library and the C
preprocessor is like the relationship between STL and C++. STL, in my
opinion, is a library. STL defines dozens of C++ functions (and classes).
You can do useful things with STL by combining those STL-defined functions
in meaningful ways as C++ expressions. Essentially, when you write code
using the STL library, you are writing C++ code. The same holds for the
Boost PP library and the C preprocessor. The Boost PP library defines dozens
of C preprocessor macros. You can do useful things with the Boost PP library
by combining the macros defined by the PP library as C preprocessor
expressions. In other words, each STL expression is interpreted as C++ code.
Likewise, every Boost PP expression is directly interpreted as C
preprocessor code.

Consider the relationship between an interpreter for an embedded language
and the host language. A good example would be the relationship between
Guile and C. Guile is a Scheme interpreter implemented in C. Consider
implementing a function in Scheme and then using Guile to interpret that
function. I don't find it natural to interpret a C character array
containing Scheme code, passed to Guile for interpretation, directly as C
code. The character array is just a character array.

In contrast to libraries, which usually define dozens of functions, which
are then combined as complex host language expressions, the interface of a
simple embedded interpreter might have just one function: eval :: program ->
value. Most practical embedded interpreters do define other functions, but
the eval-function is central. Indeed, if your program never plans to call
the eval-function, it wouldn't make any sense to call any of those other
functions defined by the embedded interpreter.
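
In C, the whole public interface of such a minimal embedded interpreter
might look something like this (a hypothetical sketch, not Guile's actual
API):

  /* Hypothetical: the single entry point of a minimal embedded
     interpreter; the program text goes in, a value comes out. */
  typedef struct value value;
  value *eval(const char *program);

Everything else the interpreter might export would exist only to support
calls to eval.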

Now, the kind of interpreter that I'm talking about here would be in a
similar relationship to the C preprocessor as Guile is to C. For example,
there is no LAMBDA() macro in the interpreter I posted earlier. Lambda-terms
are written as sequences of C preprocessing tokens. Only when you pass those
tokens to the interpreter, implemented as a C preprocessor macro, do they
get a meaning besides their C preprocessor interpretation as a sequence of
preprocessing tokens.

>I certainly use it that way, without understanding the details of the C
>preprocessor at a deep level.

It should be much easier to formally describe, and to intuitively
understand, the semantics of the interpreted language than those of the C
preprocessor.

Let me present a much more complete picture of what the language accepted by
the interpreter might look like. I'll also present small example programs in
the language.

The following describes the syntax of the language in EBNF-like notation
(<...> denotes non-terminals, [...] optional items, and {...} *one* or more
repetitions).

   <expr>   : fn(<pattern>,<expr>)               ; function abstraction
            | fn_rec(<symbol>,<pattern>,<expr>)  ; recursive function abstraction
            | (<expr>{,<expr>})                  ; function application (#1)
            | let(<pattern>,<expr>,<expr>)       ; variable binding
            | let_rec(<pattern>,<expr>,<expr>)   ; recursive variable binding
            | if(<expr>,<expr>,<expr>)           ; conditional expression (#2)
            | match(<expr>,{(<pattern>,<expr>)}) ; pattern matching (#2)
            | and(<expr>,<expr>)                 ; logical conjunction (#2)
            | or(<expr>,<expr>)                  ; logical disjunction (#2)
            | quote(<pp-tokens>)                 ; quoting arbitrary pp-tokens
            | <op-1>(<expr>)                     ; unary built-in operator app.
            | <op-2>(<expr>,<expr>)              ; binary built-in operator app.
            | <op-3>(<expr>,<expr>,<expr>)       ; ternary built-in operator app.
            | pair(<expr>,<expr>)                ; pair-expression
            | tuple(<expr>[{,<expr>}])           ; tuple-expression (#3)
            | <symbol>
            | <value>

  <pattern> : _                                  ; introduces no binding
            | <symbol>                           ; value is bound to symbol
            | (<pattern>[{,<pattern>}])          ; tuple (#3)
            | <value>                            ; value is matched exactly

  <symbol>  : A | B | ... | Z                    ; A to Z

   <op-1>   : not                                ; logical negation
            | is_0 | is_nil | is_pair            ; tests
            | pred | succ                        ; arithmetic
            | arity | is_tuple                   ; tuples
            | length | fst | snd | rst           ; lists
            | stringize                          ; arbitrary tokens

   <op-2>   : eq | neq | lt | gt | le | ge       ; comparison
            | plus | minus | times | div | mod   ; arithmetic
            | get                                ; tuples
            | pair | at | map                    ; lists
            | cat | expand                       ; dealing with arbitrary tokens

   <op-3>   : set                                ; tuples
            | fold_left | fold_right             ; lists

  <value>   : <boolean>
            | <number>
            | nil

  <number>  : 0 | 1 | ... | 255                  ; #4
  <boolean> : true | false

Notes:
#1) Support for multiple argument calls would require C99 variadics.
#2) These special forms are lazy. The if-form only evaluates the condition
and one of the legs. The logical and- and or-forms evaluate the second
expression only if it is necessary. The match-form only evaluates the body
of the first clause whose pattern matches.
#3) Full support for tuple-expressions and patterns would require C99
variadics. In the meantime, only pair-expressions and two-element
tuple-patterns would be supported in the syntax described above. Other
tuple-expressions and patterns would require specifying the arity of the
tuple. E.g. instead of the expression `tuple(A,nil,3)' one would write
`tuple_3(A,nil,3)'. Likewise, instead of the pattern `(K,V,L,R)' one would
write `tuple_4(K,V,L,R)'.
#4) Some high-precision number format should also be supported.

Note that most of the terminals in the above syntax, such as `plus' and
`fold_right', are in lower case (the symbols `A'-`Z' being the exception).
None of the terminals would be implemented as macros. It is even possible
to use C++ keywords, like `if' and `true', as the names of special forms.
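
This works because the preprocessor treats keywords as ordinary
identifiers, so they can be pasted and dispatched on like any other token.
A tiny hypothetical sketch (with the condition already reduced to a
literal):

  /* Hypothetical sketch: `if' and `true' are plain identifiers to the
     preprocessor, so a dispatcher can paste on them like any other token. */
  #define PRIMITIVE_CAT(a,b) a ## b
  #define EVAL_FORM(e) EVAL_FORM_ ## e
  #define EVAL_FORM_if(c,t,f) PRIMITIVE_CAT(IF_,c)(t,f)
  #define IF_true(t,f) t
  #define IF_false(t,f) f

  EVAL_FORM( if(true,1,2) ) /* => 1 */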

I don't have the time to provide a formal definition of the language
semantics right now. The semantics of most of the expressions should be
quite obvious. I'll informally describe the semantics along with the
examples.

The purpose of the fn_rec-expression is to allow the convenient definition
of simple recursive functions. For example, the factorial function might be
defined as follows:

  fn_rec(F,N,
         if(is_0(N),
            1,
            times(N,(F,pred(N)))))

Above, the symbol F would be bound to the function being defined, so the
function can refer to itself through F; here it calls itself recursively. A
function defined like this could also return itself. The language would
have lexical binding.

To use the function, you evaluate an application using the interpreter:

  PP_EVAL( (fn_rec(F,N,
                   if(is_0(N),
                      1,
                      times(N,(F,pred(N))))),
            4) )
    ==> 24

You could basically build arbitrarily complex PP-programs without defining a
single C preprocessor macro. Of course, to avoid having to manually repeat
function definitions, they could be defined as simple object-like
preprocessor macros:

  #define FACTORIAL fn_rec(F,N, \
                           if(is_0(N), \
                              1, \
                              times(N,(F,pred(N)))))

And then used like this:

  PP_EVAL( (FACTORIAL,5) )
    ==> 120

An interesting feature of the interpreter would be the ability to expand
function-like C preprocessor macros. Such calls would be made using the
expand-operator. For example, a function similar to the BOOST_PP_WHILE-macro
could be defined like this:

  #define WHILE fn(P,fn(O,fn_rec(R,S, \
                                 if(expand(P,S), \
                                    (R,expand(O,S)), \
                                    S))))

You could then use it like this:

  #define MY_IS_0(x) BOOST_PP_IF(x,true,false)
  #define MY_DEC(x) (BOOST_PP_DEC(x))

  PP_EVAL( (WHILE,quote(MY_IS_0),quote(MY_DEC),tuple(5)) )
     ==> (0)

Or if C99 variadics are not supported:

  PP_EVAL( (((WHILE,quote(MY_IS_0)),quote(MY_DEC)),tuple_1(5)) )
     ==> (0)

Note the use of quote. Without the quotes, the interpreter would try to
evaluate (as expressions) the tokens MY_IS_0 and MY_DEC.

The interpreter would also make it possible to perform some manipulation of
preprocessing token sequences. For example, the following function would
first catenate a given list of tokens (or values) and then stringize it. If
the given list is empty, the empty string would be produced.

  #define CAT_LIST_STRINGIZE fn(L, \
                                if(is_nil(L), \
                                   quote(""), \
                                   stringize(fold_left(fn((S,X),cat(S,X)), \
                                                       fst(L), \
                                                       rst(L)))))

  #define A_MACRO a_macro

  PP_EVAL( (CAT_LIST_STRINGIZE,quote((A_,(MA,(CRO,nil))))) )
    ==> "a_macro"

  PP_EVAL( (CAT_LIST_STRINGIZE,nil) )
    ==> ""

Note that the use of the built-in cat-operator above is wrapped inside a
function abstraction. Built-in operators are really different from ordinary
functions and would have to be called using their own special syntax. Also
note that the macro A_MACRO gets expanded.

Pattern matching could be used for arbitrary destructuring of tuples. This
would allow defining functions that return multiple values. It would also
make it easy to manipulate complex data structures. For performance reasons,
it might be necessary to restrict pattern matching to the match-special
form. The let_rec-special form might also have to be restricted further.
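
For instance, a function returning both the quotient and the remainder
might look like this in the syntax above (a sketch using only the
match-form, in case patterns end up restricted to it):

  #define DIVMOD fn(P, match(P, ((N,D), pair(div(N,D), mod(N,D)))))

  PP_EVAL( match((DIVMOD, pair(7,2)),
                 ((Q,R), plus(Q,R))) )
    ==> 4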

I see no theoretical reason why it wouldn't be possible to define an
interpreter for a language like this. Actually, it shouldn't even be very
difficult. The only problem is that it might be very difficult to make it
fast enough.

-Vesa
