Subject: Re: [boost] Abstract STL Container Adaptors
From: Nevin Liber (nevin_at_[hidden])
Date: 2013-03-14 18:13:47


On 14 March 2013 16:10, Andy Jost <Andrew.Jost_at_[hidden]> wrote:

>
> > Do you have any benchmarks to show this isn't in the noise?
>
> What isn't in the noise? Compile times?
>

Yes. I also don't see how type erasure necessarily reduces this, since a
bunch of virtual functions have to be generated at the time you assign the
concrete underlying container to the type erased holder, even if you never
use any of those calls.

Heck, I still don't see how you avoid the templating issue you seem to be
concerned about, since you have to be templated on the value type the
container is holding. (Or do you plan on type erasing that too?)
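
To make that concern concrete, here is a rough sketch of my own (the names
any_sequence / model_t are made up; this is not Andy's proposed interface):
the holder is still a class template on the element type, and constructing it
from a concrete container instantiates every virtual override for that
container, used or not.

#include <cstddef>
#include <list>
#include <memory>
#include <utility>
#include <vector>

// Sketch of a type-erased sequence holder; note it is still a template on T.
template <typename T>
class any_sequence {
    struct concept_t {
        virtual ~concept_t() {}
        virtual void push_back(const T&) = 0;
        virtual std::size_t size() const = 0;
        // ...one pure virtual per operation you want to expose
    };

    template <typename Container>
    struct model_t : concept_t {
        Container c;
        explicit model_t(Container x) : c(std::move(x)) {}
        void push_back(const T& v) override { c.push_back(v); }
        std::size_t size() const override { return c.size(); }
    };

    std::unique_ptr<concept_t> self_;

public:
    // Constructing from a concrete container instantiates model_t<Container>
    // and *all* of its virtual overrides, whether or not they are ever called.
    template <typename Container>
    any_sequence(Container c) : self_(new model_t<Container>(std::move(c))) {}

    void push_back(const T& v) { self_->push_back(v); }
    std::size_t size() const { return self_->size(); }
};

int main() {
    std::vector<int> v(3, 1);
    std::list<int> l(2, 4);
    any_sequence<int> a(v);  // generates model_t<std::vector<int> >
    any_sequence<int> b(l);  // generates model_t<std::list<int> >
    a.push_back(7);
    return (a.size() == 4u && b.size() == 2u) ? 0 : 1;
}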

> > For instance, if you had a "wrapper" that models a sequence container,
> > what iterator category do you pick for it?
>
> The iterator category would exactly match that of the underlying container.

Um, std::set has bidirectional iterators, while std::unordered_set only has
forward iterators. At this point, I really have no idea what you mean by
your above statement.
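
For reference, the category gap is easy to check (these are the exact tags
the mainstream standard libraries report):

#include <iterator>
#include <set>
#include <type_traits>
#include <unordered_set>

// std::set promises bidirectional iterators...
static_assert(std::is_same<
                  std::iterator_traits<std::set<int>::iterator>::iterator_category,
                  std::bidirectional_iterator_tag>::value,
              "set iterators are bidirectional");

// ...while std::unordered_set only promises forward iterators.
static_assert(std::is_same<
                  std::iterator_traits<std::unordered_set<int>::iterator>::iterator_category,
                  std::forward_iterator_tag>::value,
              "unordered_set iterators are forward only");

int main() { return 0; }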

> The goal is *not* to abstract away the differences between different
> containers; it is to abstract away the differences between different
> implementations of the same container. So std::set and tr1::unordered_set
> and std::set<...,MyAllocator> can be used interchangeably

Um, std::set and std::unordered_set are not different implementations of
the same container. They have *lots* of differences; besides the ones
already mentioned, there is comparison of containers, iterator invalidation
rules, etc.
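
A toy illustration of the comparison difference (none of this is from Andy's
code); the invalidation difference is noted in the comments:

#include <set>
#include <unordered_set>

int main() {
    std::set<int> a{1};
    std::set<int> b{2};
    bool ordered_less = a < b;   // OK: ordered containers compare lexicographically
    // Also: std::set::insert never invalidates iterators.

    std::unordered_set<int> c{1};
    std::unordered_set<int> d{2};
    bool unordered_eq = c == d;  // OK: equality is provided
    // bool bad = c < d;         // error: no operator< for unordered containers
    // Also: unordered_set::insert may invalidate all iterators on rehash.

    return (ordered_less && !unordered_eq) ? 0 : 1;
}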

> That's not the point. The aim is to compile the algorithm only once.

That may be your aim; my aim is to reduce both run time and compile times
of my code.

> As a real-world example

Where is the code for this real-world example?

> (not too far from my scenario) say your project takes five minutes on 20
> CPUs to compile, and the algorithm in question consumes less than 0.0001%
> of the overall execution time. Wouldn't you take a 10x hit in algorithm
> performance to improve the compile time?
>

Let's see the code which has this behavior.

-- 
 Nevin ":-)" Liber  <mailto:nevin_at_[hidden]>  (847) 691-1404
