From: Daniel Berlin (dan_at_[hidden])
Date: 2001-06-08 22:47:27
"David Abrahams" <abrahams_at_[hidden]> writes:
> ----- Original Message -----
> From: "joel de guzman" <isis-tech_at_[hidden]>
>> > > 1. Why is compilation time bounded by lexical analysis?
>> > Sheer number of tokens to be processed. That is why most compilers can't
>> > afford to use table-generated lexers and end up using hand-crafted code
>> > instead.
>> > Well, at least until template instantiation came along as a compilation
>> > bottleneck, this was true ;-)
>> So lexers are basically of the form: t1 | t2 | ..... tn
>> in a loop while skipping white spaces?
> I don't understand what you wrote, which leads me to suspect that you didn't
> understand what I wrote. A token, to a lexer, is a character. A token, to a
> parser, is often made up of many characters. Usually, the lexer needs to
> process tokens that are not even a part of any parser token (whitespace,
> comments). Ipso facto, the lexer must process many more tokens than the parser does.
Unless the lexer never sees them, and you hand them directly to the parser.
This is what most precompiled headers do.
Or just never show them to the lexer.
This is what Apple's cpp-precomp does. It only hands the lexer
what's really going to be used, rather than everything in the input.
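To make the point above concrete, here is a minimal sketch (not from any particular compiler; the names `Token` and `lex` are made up for illustration) of the kind of hand-crafted lexer loop being discussed: it must scan every character of the input, including whitespace and comments that never become parser tokens, so the character count it touches is far larger than the token count the parser sees.

```cpp
#include <cassert>
#include <cctype>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical token type for the sketch: just the lexeme text.
struct Token { std::string text; };

// Hand-crafted lexer loop: consumes every character of `src`,
// but emits tokens only for identifiers and punctuation.
// Whitespace and // comments are scanned and then discarded,
// which is why the lexer's workload exceeds the parser's.
std::vector<Token> lex(const std::string& src, std::size_t& chars_scanned) {
    std::vector<Token> tokens;
    std::size_t i = 0;
    while (i < src.size()) {
        char c = src[i];
        if (std::isspace(static_cast<unsigned char>(c))) {  // skipped, never a parser token
            ++i;
            continue;
        }
        if (c == '/' && i + 1 < src.size() && src[i + 1] == '/') {
            // Line comment: every character is scanned, then dropped.
            while (i < src.size() && src[i] != '\n') ++i;
            continue;
        }
        if (std::isalpha(static_cast<unsigned char>(c))) {  // identifier
            std::size_t start = i;
            while (i < src.size() &&
                   std::isalnum(static_cast<unsigned char>(src[i]))) ++i;
            tokens.push_back({src.substr(start, i - start)});
            continue;
        }
        tokens.push_back({std::string(1, c)});  // single-char punctuation
        ++i;
    }
    chars_scanned = src.size();
    return tokens;
}
```

For an input like `"x = y; // add\n"`, the loop scans all 14 characters but yields only 4 parser tokens; the cpp-precomp trick described above amounts to skipping this scan entirely for input the parser will never need.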
--
"Sorry, my mind was wandering. One time my mind went all the way to Venus on mail order and I couldn't pay for it." - Steven Wright
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk