From: David Abrahams (abrahams_at_[hidden])
Date: 2001-06-08 22:28:51
----- Original Message -----
From: "joel de guzman" <isis-tech_at_[hidden]>
> > > 1. Why are compilation times bounded by lexical analysis?
> > Sheer number of tokens to be processed. That is why most compilers can't
> > afford to use table-generated lexers and end up using hand-crafted code
> > instead.
> > Well, at least until template instantiation came along as a compilation
> > cost, this was true ;-)
> So lexers are basically of the form: t1 | t2 | ... | tn
> in a loop while skipping whitespace?
I don't understand what you wrote, which leads me to suspect that you didn't
understand what I wrote. A token, to a lexer, is a character. A token, to a
parser, is often made up of many characters. Usually, the lexer needs to
process tokens that are not even a part of any parser token (whitespace,
comments). Ipso facto, the lexer must process many more tokens than the parser does.
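
To make that character-versus-token distinction concrete, here is a minimal hand-crafted lexer sketch in C++ (purely illustrative; the Token type and lex function are made up for this example, not taken from any real compiler). It walks the input one character at a time, discards whitespace and // comments, and only then groups the remaining characters into parser-level tokens, so it necessarily examines far more characters than the parser ever sees tokens.

#include <cctype>
#include <iostream>
#include <string>
#include <vector>

struct Token {
    enum Kind { Identifier, Number, Symbol } kind;
    std::string text;
};

std::vector<Token> lex(const std::string& src)
{
    std::vector<Token> tokens;
    std::size_t i = 0;
    while (i < src.size()) {
        char c = src[i];
        if (std::isspace(static_cast<unsigned char>(c))) {
            ++i;                       // whitespace: consumed, never reaches the parser
        } else if (c == '/' && i + 1 < src.size() && src[i + 1] == '/') {
            while (i < src.size() && src[i] != '\n') ++i;   // comment: also invisible to the parser
        } else if (std::isalpha(static_cast<unsigned char>(c)) || c == '_') {
            std::size_t start = i;     // identifier: one parser token, many characters
            while (i < src.size() && (std::isalnum(static_cast<unsigned char>(src[i])) || src[i] == '_')) ++i;
            tokens.push_back({Token::Identifier, src.substr(start, i - start)});
        } else if (std::isdigit(static_cast<unsigned char>(c))) {
            std::size_t start = i;     // number literal
            while (i < src.size() && std::isdigit(static_cast<unsigned char>(src[i]))) ++i;
            tokens.push_back({Token::Number, src.substr(start, i - start)});
        } else {
            tokens.push_back({Token::Symbol, std::string(1, c)});   // any other single character
            ++i;
        }
    }
    return tokens;
}

int main()
{
    std::string src = "int count = 42; // initialise\ncount = count + 1;";
    std::vector<Token> tokens = lex(src);
    for (const Token& t : tokens)
        std::cout << t.text << '\n';
    std::cout << tokens.size() << " parser tokens from "
              << src.size() << " characters\n";
}

Every character in the input is visited at least once (whitespace and the comment are visited and thrown away), while the parser only ever receives the handful of tokens printed at the end.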