Subject: Re: [boost] Abstract STL Container Adaptors
From: Andy Jost (Andrew.Jost_at_[hidden])
Date: 2013-03-14 17:46:33


Any other opinions out there?

-----Original Message-----
From: Boost [mailto:boost-bounces_at_[hidden]] On Behalf Of Neil Groves
Sent: Thursday, March 14, 2013 2:32 PM
To: boost_at_[hidden]
Subject: Re: [boost] Abstract STL Container Adaptors

On Thu, Mar 14, 2013 at 9:10 PM, Andy Jost <Andrew.Jost_at_[hidden]> wrote:

> From: Boost [mailto:boost-bounces_at_[hidden]] On Behalf Of Nevin
> Liber
>
> >> On 14 March 2013 12:20, Andy Jost <Andrew.Jost_at_[hidden]> wrote:
> >>
> >> In some cases, particularly when compile times are more critical
> >> than execution times, it would be preferable to let the caller
> >> choose the set implementation without losing the advantages of
> >> separate compilation.
> >
> > Do you have any benchmarks to show this isn't in the noise?
>
> What isn't in the noise? Compile times?
>
>
> >> It seems the implementation of this would be a straightforward
> >> application of type erasure. It seems too easy, in fact, which
> >> makes me wonder whether I'm missing something.
> >>
> >
> > *Every* operation becomes slow. Algorithms become really slow, if
> > callable at all.
>
> I don't see how this is justified. The virtual call certainly adds
> overhead, but compared to a normal inlined function it's not a huge amount.
> Algorithms should become slower by a small constant factor.
>
>
Virtual function calls add very appreciable overhead to the numerous small functions that would need runtime polymorphism for this idiom to work. The overhead is particularly severe on super-scalar processor architectures. Of course, to some this won't be important.
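
To make the point concrete, here is a rough sketch of the kind of type-erased adaptor being proposed (my own minimal illustration with hypothetical names, not a complete design). Note that every element-level operation goes through a virtual call:

#include <cstddef>
#include <memory>

template <class T>
class any_set  // hypothetical name, for illustration only
{
    struct concept_t
    {
        virtual ~concept_t() {}
        virtual bool insert(T const& x) = 0;          // one virtual call per insertion
        virtual bool contains(T const& x) const = 0;  // one virtual call per lookup
        virtual std::size_t size() const = 0;
    };

    template <class Set>
    struct model_t : concept_t       // wraps any concrete set type
    {
        explicit model_t(Set& s) : set_(s) {}
        virtual bool insert(T const& x) { return set_.insert(x).second; }
        virtual bool contains(T const& x) const { return set_.count(x) != 0; }
        virtual std::size_t size() const { return set_.size(); }
        Set& set_;
    };

    std::auto_ptr<concept_t> impl_;  // unique_ptr in C++11

public:
    template <class Set>
    explicit any_set(Set& s) : impl_(new model_t<Set>(s)) {}

    bool insert(T const& x) { return impl_->insert(x); }
    bool contains(T const& x) const { return impl_->contains(x); }
    std::size_t size() const { return impl_->size(); }
};

An algorithm written against any_set<int> compiles once and accepts a std::set<int>, a tr1::unordered_set<int>, or a std::set with a custom allocator, but it pays an indirect call for every single insert and lookup, which is exactly where the cost I describe above comes from.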

It is a justified comment, since there have been many older solutions that used classic runtime polymorphism for containers and iterators; the ACE containers and iterators are one example. That design decision is, I believe, widely regarded as sub-optimal: runtime polymorphism on components such as STL containers puts the virtual dispatch at very poorly chosen points of the interface, namely the small, per-element operations.

I suspect the real solution to your problem is to cluster the classes that use a given standard library container tightly together, and to couple those clusters to the rest of your system only loosely, as components. This is likely to yield much better compile-time improvements without sacrificing runtime performance.
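
As a rough sketch of the sort of thing I mean (hypothetical names, just to show the shape of it): keep the choice of standard container behind a small component interface, so clients depend on, and recompile against, only that interface.

// symbol_table.hpp -- what clients include; no container appears here,
// so changing the container or its allocator does not recompile clients.
#include <string>

class symbol_table
{
public:
    symbol_table();
    ~symbol_table();
    void add(std::string const& name);
    bool contains(std::string const& name) const;

private:
    struct impl;          // defined only in symbol_table.cpp
    impl* pimpl_;
    symbol_table(symbol_table const&);             // non-copyable (C++03 style)
    symbol_table& operator=(symbol_table const&);
};

// symbol_table.cpp -- the only translation unit that sees std::set.
#include "symbol_table.hpp"
#include <set>

struct symbol_table::impl { std::set<std::string> names; };

symbol_table::symbol_table() : pimpl_(new impl) {}
symbol_table::~symbol_table() { delete pimpl_; }
void symbol_table::add(std::string const& name) { pimpl_->names.insert(name); }
bool symbol_table::contains(std::string const& name) const
{ return pimpl_->names.count(name) != 0; }

The calls into the component are ordinary out-of-line functions at a coarse-grained boundary, not per-element virtual dispatch, so the compile-time benefit comes without the runtime penalty.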

>
> > For instance, if you had a "wrapper" that models a sequence
> > container, what iterator category do you pick for it?
>
> The iterator category would exactly match that of the underlying
> container. The goal is *not* to abstract away the differences between
> different containers; it is to abstract away the differences between
> different implementations of the same container. So std::set and
> tr1::unordered_set and std::set<...,MyAllocator> can be used
> interchangeably, but not std::set and std::vector.
>
>
> >> In any case, is this a good idea?
> >>
> > I can't think of a case I've ever had for choosing the container at
> > run time.
>
> That's not the point. The aim is to compile the algorithm only once.
> As a real-world example (not too far from my scenario), say your project
> takes five minutes on 20 CPUs to compile, and the algorithm in
> question consumes less than 0.0001% of the overall execution time.
> Wouldn't you take a 10x hit in algorithm performance to improve the compile time?
>
>
No! My experience is that better compile-time improvements are available by other means. Type erasure won't give you the best improvement in compile time, and it will hurt runtime performance. In my opinion this is definitely something we should discourage rather than support and implement.

My recommendations for increasing the speed of compilation are:
1. Consider the interface boundaries and the module coupling.
2. Check the include directories, since bad compile times can be due to an include path to a network folder or some other horror that is easily remedied but might go unsuspected.
3. Consider tools such as ccache, distcc and Icecream.
4. Consider changing compiler. The variation in compile-time performance between different compilers is huge.
5. Consider compiler and build flags, particularly to maximise the use of cores. It is often best to exceed the number of processing cores due to the ratio of sleep to running time of the tasks.

> -Andy
>
>
Regards,
Neil Groves
