Subject: Re: [boost] BOOST_PP array extension proposal
From: Matt Calabrese (rivorus_at_[hidden])
Date: 2015-09-10 19:58:17
On Thu, Sep 10, 2015 at 4:15 PM, Edward Diener <eldiener_at_[hidden]>
wrote:
>
> I admit I have never done any benchmarking of preprocessor code. This is
> not only compile-time code but preprocessor code, which runs quite early
> in the compilation phases. So I have never thought very hard or long
> about how one would measure the time spent by the compiler in macro
> expansion depending on whether you use one Boost PP construct versus
> another. Any thoughts about how anybody could accurately benchmark such
> time would be most welcome.
>
My experiences are anecdotal, so I don't want to make precise claims; I'm
just raising this as something to consider, and it might be necessary to
benchmark before making too many recommendations. When I was working on
Boost.Generic, I eventually hit a point where preprocessing was consuming
so much memory that I'd run out of address space (32-bit)! I just couldn't
proceed when dealing with complicated concepts until I revised how I did
my repetition, which brought down the memory usage considerably (switching
between several disparate fold operations with small states and a single
fold operation with a large state is one change I remember vividly). I
imagine that looping constructs are usually the more direct culprit for
these types of issues, though if you are deep inside some repetition I
wonder whether even the difference between tuple and array can have a
noticeable impact, especially for a large number of elements. I really
don't know, as I've never analyzed the problem or done rigorous profiling
of these types of things, but I've stopped making too many assumptions
because I've been bitten before. It could also be quite compiler-dependent.
> I have no doubt manipulating tuples is probably slower than manipulating
> arrays when variadic macros are being used, since calculating the tuple
> size is slower than having it there as a preprocessor number.
To be clear, I'm not strictly sure about even that, even though my initial
intuition was also to prefer the array; I'm just hesitant to state for
sure that recommending a preference for tuples is necessarily the best
recommendation or the best default. Some operations are probably simpler
for tuples (e.g. I'd imagine that joining two tuples together is faster
than joining two arrays, since you can just expand both by way of
variadics without caring about or having to calculate the size of the
result). There could even be minimal or no measurable difference in all
practical cases.
I've just seen, in practice, surprising behavior of implementations during
preprocessing that I wouldn't have expected had I not seen it happen.
Memory usage in particular tends to be surprisingly high during
preprocessing (surprising to me, at least), especially if tracking of
macro expansions is enabled in your compiler.
> I should have specified that "phase out the use of Boost PP arrays" does
> not mean that they will ever be eliminated from Boost PP AFAICS.
>
Okay, that's good, then.
> I am pretty sure I know Paul's stance since he was the one who mentioned
> to me that with the use of variadic macros the Boost PP array is
> "obsolete".
I usually assume that whatever Paul suggests in this domain is best, so my
paranoia could be unfounded here.
--
-Matt Calabrese
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk