Subject: Re: [boost] Interest in updated expression template library?
From: Zach Laine (whatwasthataddress_at_[hidden])
Date: 2017-01-11 09:48:17


On Wed, Jan 11, 2017 at 2:00 AM, Larry Evans <cppljevans_at_[hidden]>
wrote:

> On 01/10/2017 04:22 PM, Mathias Gaunard wrote:
>
>> On 10 January 2017 at 20:07, Zach Laine <whatwasthataddress_at_[hidden]>
>> wrote:
>>
>>
>>> I agree with all of these complaints. This is in fact why I wrote Yap.
>>> The compile times are very good, even for an obnoxious number of
>>> terminals
>>> (thousands), and there is no memory allocated at all -- or did you mean
>>> compiler memory usage? I haven't looked at that.
>>>
>>>
>> Compiler memory usage of course.
>> When a TU takes 4GB to compile or more, it leads to lots of problems, even
>> if RAM is cheap and you could put hundreds of gigabytes in your build
>> server.
>>
>
> Zach, w.r.t. compile-time benchmarking,
> Louis has a compiler benchmark library here:
>
> https://github.com/ldionne/metabench

Yes. It's quite nice.

> I did have a brief look at it; however, I didn't see an easy way
> to vary parameters of the benchmark, for example, the size of
> expressions, and compare the results (it uses embedded Ruby, and
> I couldn't figure out from that how to do what I wanted).
>
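
For a quick sweep over expression sizes, one workable trick (a minimal
sketch with a toy expression type -- this is not Yap's API, and the N
mechanism is only an assumption about how such a sweep could be driven) is
to make the translation unit depend on an externally supplied N, so that
whatever drives the build (a make for-loop, a shell script, or a metabench
ERB template) simply recompiles the same file with -DN=100, -DN=200, and
so on:

// bench.cpp -- toy expression-template translation unit whose size is
// controlled by N, e.g.  g++ -std=c++17 -DN=500 -c bench.cpp
// (very large N may run into compiler nesting/recursion limits).

#include <utility>

#ifndef N
#define N 16
#endif

template <int I>
struct term {};                       // each terminal is a distinct type

template <typename L, typename R>
struct plus_expr { L lhs; R rhs; };   // toy expression node

template <typename L, typename R>
constexpr plus_expr<L, R> operator+(L lhs, R rhs) { return {lhs, rhs}; }

template <int... Is>
constexpr auto make_expression(std::integer_sequence<int, Is...>)
{
    // Unary left fold: ((term<0> + term<1>) + term<2>) + ...; the number
    // of distinct instantiations grows with N.
    return (... + term<Is>{});
}

auto big_expression = make_expression(std::make_integer_sequence<int, N>{});

int main() {}
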
> Instead, I resorted to gmake for loops, as shown here:
>
> https://github.com/cppljevans/spirit/blob/get_rhs/workbench/x3/rule_defns/Makefile#L97
>
> resulting in output shown here:
>
> https://github.com/cppljevans/spirit/blob/get_rhs/workbench/x3/rule_defns/bench.tmp
>
> Interestingly enough, the method that performed worse was the
> one (RULE2RHS_CTX_LIST) that stored the rule2rhs information
> in the context. In contrast, GET_RHS_CRTP stored this
> information by overloading functions generated by macros.
> Notably, based on what Brook Milligan says in his message,
> proto uses a context as well, but your library uses a set of
> function overloads.
>
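
For anyone following along, here is a minimal, hypothetical sketch of the
two lookup styles being compared above -- it is not proto's context API,
Spirit X3's machinery, or anything from Yap, just the general shape of
"search a context by type" versus "let overload resolution do the lookup":

#include <type_traits>

struct rule_a {};
struct rule_b {};
struct rhs_of_a {};
struct rhs_of_b {};

// Style 1: thread a "context" of rule->RHS associations through the code
// and search it by type.
struct empty_ctx
{
    template <typename R>
    void get(R) const = delete;       // unknown rule: compile error
};

template <typename Rule, typename Rhs, typename Outer>
struct ctx
{
    Rhs rhs;
    Outer outer;

    template <typename R>
    constexpr auto const & get(R r) const
    {
        if constexpr (std::is_same_v<R, Rule>)
            return rhs;
        else
            return outer.get(r);      // keep searching enclosing contexts
    }
};

// Style 2: the association lives in an overload set (in a macro-based
// design, each rule definition would generate one of these); lookup is
// plain overload resolution.
constexpr rhs_of_a get_rhs(rule_a) { return {}; }
constexpr rhs_of_b get_rhs(rule_b) { return {}; }

int main()
{
    ctx<rule_b, rhs_of_b, ctx<rule_a, rhs_of_a, empty_ctx>> c{};
    auto const & x = c.get(rule_a{}); // style 1: recursive search by type
    auto y = get_rhs(rule_b{});       // style 2: overload resolution
    (void)x; (void)y;
}
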
> With regard to the gmake method of comparing compile times,
> I realize that's sorta kludgy, and, years ago, I used a series
> of python programs to do the equivalent. If you find the
> gmake-for-loop method unacceptable, I can try to find the
> python method instead.
>
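
And in case it is useful in the meantime, a rough stand-in for either
driver (the gmake for-loop or the python scripts) can be written in C++
itself; this sketch assumes a Unix-like system with a g++-style driver on
PATH and a bench.cpp like the one sketched above, so adjust the command
line to taste:

// time_builds.cpp -- recompile bench.cpp at several values of N and print
// the wall-clock time for each compile.

#include <chrono>
#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    for (int n = 100; n <= 1000; n += 100)
    {
        std::string const cmd =
            "g++ -std=c++17 -DN=" + std::to_string(n) + " -c bench.cpp";

        auto const start = std::chrono::steady_clock::now();
        int const rc = std::system(cmd.c_str());
        auto const stop = std::chrono::steady_clock::now();

        std::chrono::duration<double> const secs = stop - start;
        std::cout << "N=" << n << "  exit=" << rc
                  << "  " << secs.count() << "s\n";
    }
}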

I haven't done much compile time benchmarking for Yap. It's fast enough
that I haven't felt the need. If people start using Yap and noticing long
compile times, I'll certainly do my best to fix that.

Zach

