Boost Users :
Subject: Re: [Boost-users] [phoenix] V2, V3 and the amount of memory needed by the compiler to just include them
From: Thomas Heller (thom.heller_at_[hidden])
Date: 2011-02-24 06:29:00
On Thursday, February 24, 2011 12:10:17 PM Eric Niebler wrote:
> On 2/24/2011 10:26 AM, Hartmut Kaiser wrote:
> > Thomas and I found a nice and simple way to add the preprocessing to
> > Phoenix V3, which required a couple of hours of hacking. The effect
> > was significant (see attached figure). Therefore, I can only
> > encourage adding partial preprocessing to Fusion and Proto!
>
> I'm not sure I understand this graph, Hartmut. It's entitled, "Time to
> Preprocess Phoenix", and it plots "No PP" against "Using PP". What
> does "no pp" and "using pp" signify here?
"No PP" means no preprocessing was done ahead of time (i.e. a regular
compilation). "Using PP" means that all the code was preprocessed prior to the
actual compilation.
> Also, making the PP phase faster is only interesting if it is a
> significant portion of the overall compilation time.
The time for preprocessing the fusion/proto/phoenix headers stays constant
(as long as you don't increase the PP limit macros); it is about 2 seconds on
my machine. The time spent in the PP phase becomes less significant as the
actual expressions get more and more complicated, i.e. when we deal with TUs
that already take half a minute to compile.
However, I believe it is one (little) step in the direction of bringing
compile times down.
In Phoenix V3, Hartmut contributed PP code that lets us partially preprocess
the headers with Boost.Wave; I think it would be a great idea to generalize
this technique and deploy it in all PP-heavy libs.
> I'd be more interested in plotting overall time, not just PP time.
>
> Thanks for doing this! I hope to steal your work for Proto.