
From: David Abrahams (dave_at_[hidden])
Date: 2003-04-15 08:59:11


"Vesa Karvonen" <vesa_karvonen_at_[hidden]> writes:

> In the pure lambda-calculus, everything is a function. In other words,
> there are no built-in operators or data-types. Booleans, arithmetic on
> natural numbers, and even lists, are represented using functions. For
> example, the Church-booleans are represented as functions:
>
> #define TRUE LAMBDA(T,LAMBDA(F,T))
> #define FALSE LAMBDA(T,LAMBDA(F,F))
>
> Basically, TRUE is a function that takes two parameters (one by one)
> and returns the first parameter. To compute with Church-booleans, you
> apply them to functions. For example, logical conjunction can be
> defined as:
>
> #define AND LAMBDA(L,LAMBDA(R,((L,R),FALSE)))

IIUC, this can be rewritten more understandably and Lisp-ishly (if
not so formally) as:

     AND(L,R) := L(R,FALSE)
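
In that notation the truth table falls right out; recall that
TRUE(T,F) := T selects its first argument and FALSE(T,F) := F its
second:

     AND(TRUE,FALSE)  :=  TRUE(FALSE,FALSE)  :=  FALSE
     AND(FALSE,TRUE)  :=  FALSE(TRUE,FALSE)  :=  FALSE
     AND(TRUE,TRUE)   :=  TRUE(TRUE,FALSE)   :=  TRUE

In fact, the uncurried form even works as honest-to-goodness C
macros -- a minimal sketch of my own (not your interpreter), which
anyone can check with gcc -E:

     #define TRUE(T, F)  T            /* Church TRUE: 1st argument  */
     #define FALSE(T, F) F            /* Church FALSE: 2nd argument */
     #define AND(L, R)   L(R, FALSE)  /* L selects R or FALSE       */
     #define TO_INT(B)   B(1, 0)      /* collapse to a plain 0 or 1 */

     TO_INT(AND(TRUE, FALSE))         /* expands to 0 */
     TO_INT(AND(TRUE, TRUE))          /* expands to 1 */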

> Basically, AND is a function that takes two parameters, L and R, which
> are assumed to be Church-booleans. It first applies L to R,
> i.e. (L,R), and then applies the result to FALSE,
> i.e. ((L,R),FALSE). Assuming the parameters L and R were
> Church-booleans, the result will be another Church-boolean. For
> example:
>
> ((AND,TRUE),FALSE) ==> LAMBDA(T,LAMBDA(F,F)) = FALSE
>
> There are several different strategies for the evaluation, or
> reduction, of lambda-calculus terms. Most programming languages use
> the call-by-value/applicative order/strict evaluation strategy, which
> basically means that parameters are reduced before application and
> that no reduction happens inside abstractions (functional
> values). Another strategy is the call-by-name strategy, which is one
> form of lazy evaluation.
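
The difference is easy to see even with the booleans above. Take
the classic divergent term (my notation, following yours):

     OMEGA := (LAMBDA(X,(X,X)), LAMBDA(X,(X,X)))

OMEGA reduces to itself forever. Under call-by-name,
((TRUE,A),OMEGA) ==> A, because TRUE discards its second parameter
without ever evaluating it; under call-by-value, OMEGA must be
reduced before TRUE can be applied, so the reduction never
terminates.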

<...snip...>

> The estimate that a nearly optimal interpreter might be just 2x-10x
> slower than the language in which the interpreter is implemented is
> based on the simple observation that such numbers hold for modern
> interpreters.

Heh, I'm having trouble with that extrapolation. For one thing, the
languages in which most interpreters are implemented are specifically
chosen for their ability to express the efficient and low-level
operations needed to make the interpreters go fast -- a property which
is manifestly not applicable to the preprocessor. For another, the
only interpreters I know of which begin to approach the 2x-10x ratio
are using some form of JIT compilation, which will never be possible
to express in PP (though perhaps the underlying PP interpreter itself
could use JIT).

> For example, it would not be difficult to provide native operators
> for arithmetic in the interpreter. The cost of arithmetic would then
> be the overhead of the interpreter plus the cost of native
> arithmetic. As native arithmetic on the preprocessor is already
> quite computationally intensive, interpreted arithmetic on the
> preprocessor might be just 2x slower than native arithmetic.
>
> What is the significance of all this?
>
> The point is that it is possible to implement an interpreter for a
> fairly expressive and easy to use programming language using the C
> preprocessor.

Isn't that what the Boost PP lib already is? It may not be as
"formally pure" as your toy, and it may leak a few implementation
details to the user, but it doesn't seem like a stretch to think of
it as a programming language unto itself. I certainly use it that
way, without understanding the details of the C preprocessor at a
deep level.
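
For example, a made-up but representative snippet (the only thing
hypothetical here is the struct; BOOST_PP_REPEAT and BOOST_PP_ADD
are the real library interfaces):

     #include <boost/preprocessor/arithmetic/add.hpp>
     #include <boost/preprocessor/repetition/repeat.hpp>

     /* "loop body": declare one member per iteration */
     #define MEMBER(z, n, prefix) int prefix ## n;

     struct vec3
     {
         BOOST_PP_REPEAT(3, MEMBER, elem)
         /* expands to: int elem0; int elem1; int elem2; */
     };

     enum { seven = BOOST_PP_ADD(4, 3) };  /* "native" PP arithmetic */

That reads like a program written in the PP library's own little
language, not like raw token manipulation.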

> Programming in such an interpreted language could
> be an order of magnitude easier than programming the native
> preprocessor. Unfortunately, the interpreter will be slower than the
> native preprocessor, which can and does limit its practical
> usefulness.

Sounds like the Boost PP library to me.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
