From: Stefan Seefeld (sseefeld_at_[hidden])
Date: 2004-09-30 09:44:24
> From: Doug Gregor [mailto:dgregor_at_[hidden]]
> Sent: September 30, 2004 10:30
> > I should mention that the design of the C++ parser allows one to
> > manipulate the generated parse tree in-memory and write it out into
> > a file again in a non-lossy way, i.e. it could indeed be used as a
> > source-to-source compiler.
>
> That's an excellent goal, and you have me interested. It's a huge task,
> requiring a great deal of expertise (collectively, we have that) and
> will require a lot of time. I think it would be wise to attempt to
> isolate some of the interesting-but-disjoint problems early, so we can
> put out a "request for libraries" of some sort. Things that come to
> mind quickly: a unification algorithm (for template instantiation/partial
> ordering), a tree manipulation/rewriting library (for AST transformations),
> and a flexible symbol table library.
In what sense do you use the term 'library' here? While I fully agree
that it is very useful to approach this big task by splitting it into
modules, I doubt it is feasible or even useful to treat these entities
as separate deployment units.
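To make that concrete, here is a rough sketch of the kind of interfaces
I have in mind for two of the pieces Doug lists (a nested-scope symbol
table and a tree-rewriting pass). All names are purely illustrative and
do not correspond to any existing code:

  // Illustrative sketch only: a nested-scope symbol table and a
  // bottom-up tree-rewriting pass.
  #include <boost/shared_ptr.hpp>
  #include <cstddef>
  #include <map>
  #include <string>
  #include <vector>

  struct Symbol { std::string name; /* type, linkage, ... */ };

  class Scope {
  public:
    explicit Scope(Scope* parent = 0) : parent_(parent) {}
    void declare(Symbol const& s) { symbols_[s.name] = s; }
    // Walk outward through the enclosing scopes until the name is found.
    Symbol const* lookup(std::string const& name) const {
      std::map<std::string, Symbol>::const_iterator i = symbols_.find(name);
      if (i != symbols_.end()) return &i->second;
      return parent_ ? parent_->lookup(name) : 0;
    }
  private:
    Scope* parent_;
    std::map<std::string, Symbol> symbols_;
  };

  struct Node {
    std::string kind;
    std::vector<boost::shared_ptr<Node> > children;
  };

  class Rewriter {
  public:
    virtual ~Rewriter() {}
    // Return a replacement node, or a null pointer to keep the original.
    virtual boost::shared_ptr<Node> rewrite(Node&) { return boost::shared_ptr<Node>(); }
    // Rewrite the children first, then give the pass a chance to replace n.
    void traverse(boost::shared_ptr<Node>& n) {
      for (std::size_t i = 0; i < n->children.size(); ++i) traverse(n->children[i]);
      boost::shared_ptr<Node> r = rewrite(*n);
      if (r) n = r;
    }
  };
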
However, I think we can come up with a high-level design that respects
the data-flow requirements (notably the complex interactions between the
parser and the symbol lookup table) yet is flexible enough to let the
individual modules evolve in parallel. In fact, I am hopeful that I could
use the same framework to analyze C code (something I hope the Free
Software community around GNOME / GNU will benefit from) by using some
form of polymorphism in the lexer, the parser, and the symbol lookup.
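As a sketch of what I mean by polymorphism in those modules (again, the
names are invented for illustration only):

  // Illustrative only: language-specific lexing and symbol lookup sit
  // behind abstract interfaces, so one driver can serve a C and a C++
  // front end.
  #include <string>

  struct Token { std::string text; int kind; Token() : kind(0) {} };

  class Lexer {
  public:
    virtual ~Lexer() {}
    virtual Token next() = 0;
  };

  class SymbolLookup {
  public:
    virtual ~SymbolLookup() {}
    virtual bool is_type_name(std::string const& id) const = 0;
  };

  class CLexer : public Lexer {
  public:
    Token next() { return Token(); } // real C tokenization would go here
  };

  class CxxLexer : public Lexer {
  public:
    Token next() { return Token(); } // real C++ tokenization would go here
  };

  class CSymbolLookup : public SymbolLookup {
  public:
    bool is_type_name(std::string const&) const { return false; } // typedef table
  };

  class CxxSymbolLookup : public SymbolLookup {
  public:
    bool is_type_name(std::string const&) const { return false; } // full C++ lookup
  };

  // The driver is written once against the abstract interfaces.
  class Parser {
  public:
    Parser(Lexer& lexer, SymbolLookup& symbols) : lexer_(lexer), symbols_(symbols) {}
    void parse() { /* drive lexer_, consult symbols_ for disambiguation */ }
  private:
    Lexer& lexer_;
    SymbolLookup& symbols_;
  };

The parser consults the symbol lookup to disambiguate constructs such as
"T * x;", which is exactly the parser / symbol-table interaction I mention
above; a C front end would plug in little more than a typedef table where
the C++ front end needs the full lookup machinery.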
Regards,
Stefan