
From: Gennadiy Rozental (gennadiy.rozental_at_[hidden])
Date: 2005-02-07 13:57:04

> > A couple of notes:
> >
> > 1. The next version of Boost.Test includes test case timing.
> What is the status of this version?

It's already in CVS. It uses boost::timer for now, but by the end of this
round of changes I will implement a high resolution timer.

> > 2. I am interested in a high resolution timer for both Windows and
> > I found a couple of places online and in this ML archive that have what
> > I need.
> Would you consider submitting these as a separate high_res_timer
> mini-library for Boost so that everyone can have easy access to them? It
> seems to me that these would be good additions to the Boost.Timer library.

If you check the ML archive you will see that there have been several
attempts to bring a "better" timer into Boost. I will keep my version in the
utilities section of Boost.Test for now; we could move it up later.
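For what it is worth, a minimal high resolution timer with a boost::timer-like
interface (restart() plus elapsed() in seconds) could be sketched as below.
This is a hypothetical sketch, not the Boost.Test code: std::chrono did not
exist at the time of this post, and the class name hi_res_timer is mine.

```cpp
#include <chrono>

// Hypothetical sketch: a high resolution timer with the same interface
// shape as boost::timer -- restart() and elapsed() in seconds -- but
// backed by a monotonic high resolution clock.
class hi_res_timer {
    typedef std::chrono::steady_clock clock;
public:
    hi_res_timer() : start_(clock::now()) {}

    void restart() { start_ = clock::now(); }

    // Seconds since construction or the last restart().
    double elapsed() const {
        return std::chrono::duration<double>(clock::now() - start_).count();
    }

private:
    clock::time_point start_;
};
```

On Windows the same interface could instead be implemented over
QueryPerformanceCounter; the point is only that the interface stays small.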

> > 3. I am interested in adding enhanced profiling abilities to Boost.Test,
> > or somehow integrating a standalone solution into Boost.Test facilities.
> > But to design the integration I need to understand better what and how
> > you are trying to do.
> If you haven't already, please take a look at
> , hopefully this explains my goals. If you
> have specific questions I will do my best to answer them.

Well, I've read your docs and code. Here is what I think:

Once I started looking through the proposed design, it immediately struck me
that you are making the most common mistake in policy-based design (PBD):
putting everything into one single policy. IMO PBD is not about providing a
complete implementation as a template parameter (at least not only about
that), but mostly about finding small, reusable, and orthogonal(!) policies
that in combination deliver flexibility and power. This is the reason why
your policies are templates and why you need to implement counting,
reporting, and counting_and_reporting policies. These are orthogonal parts of
the profiler functionality; there is no need to combine them into a single
policy.

  Also, I believe from my experience that the profiler model you chose is way
too simple for real-life needs. Here are the specific profiling modes I have
seen:

1. Take a piece of code and wrap it in a timer. You cover this case.
2. Take a piece of code and time the average execution time over multiple
invocations. Your model couldn't do that.
3. Frequently the code you want to profile is "interrupted" by islands of
code that you are not interested in. You need an ability to do accumulation.
4. Hierarchical checkpoints. Frequently you want to be able to see the whole
picture and easily add checkpoints somewhere inside. As a result you want to
see the time from point A to point B, then from point B to point C, the
combined time from A to C, and so on.
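To illustrate mode 3, an accumulating timer could expose suspend/resume
operations so that the uninteresting islands are excluded from the total. A
minimal sketch (all names are mine, and std::chrono is a modern stand-in for
whatever clock the real implementation would use):

```cpp
#include <chrono>

// Hypothetical sketch for mode 3: only the time between resume() and
// suspend() calls is accumulated, so "islands" of uninteresting code
// can be excluded from the measurement.
class accumulating_timer {
    typedef std::chrono::steady_clock clock;
public:
    accumulating_timer() : total_(clock::duration::zero()), running_(false) {}

    void resume() {
        if (!running_) { start_ = clock::now(); running_ = true; }
    }
    void suspend() {
        if (running_) { total_ += clock::now() - start_; running_ = false; }
    }
    // Accumulated seconds over all resume()/suspend() intervals.
    double elapsed() const {
        return std::chrono::duration<double>(total_).count();
    }

private:
    clock::duration    total_;
    clock::time_point  start_;
    bool               running_;
};
```

The profiled code would call suspend() just before an uninteresting island
and resume() right after it; everything in between is left out of the total.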

Here is how I think the policies could be designed:

1. TimerPolicy
  Responsible for the "timing implementation", including the timer, its
resolution, etc. Possible incarnations: BoostTimerPolicy,
HighResolutionTimerPolicy.
2. CollectionPolicy
  Responsible for "what and how to collect" (elapsed time in one run,
average time over multiple runs, profiler name, location, relations).
Possible incarnations: SingleRunCollector, MultirunCollector,
GlobalCollector, etc. This policy may need to be separated into two.
3. LoggingPolicy
  Responsible for "how to report". Here is where the Boost.Test integration
kicks in. Possible incarnations: OstreamLogger, StdoutLogger, BoostTestLogger.
4. ReportingPolicy
  Responsible for "when to report". Possible incarnations: ScopedReporter,
ExplicitReporter, ReportNone (to be used with a GlobalCollector that performs
the reporting itself).

and the profiler definition would look like this:

template<typename TimerPolicy, typename CollectionPolicy,
         typename LoggingPolicy, typename ReportingPolicy>
class profiler : public TimerPolicy, public CollectionPolicy,
                 public LoggingPolicy, public ReportingPolicy {
    // ...
};
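To make the composition concrete, here is a minimal compilable sketch. Only
the four policy names come from the outline above; every interface detail
(start_timer(), collect(), log(), and so on) is my assumption, not the actual
Boost.Test or proposed profiler code, and the ReportingPolicy is reduced to a
tag with "scoped" reporting hard-wired for brevity:

```cpp
#include <chrono>
#include <iostream>
#include <string>

// Hypothetical sketch only: policy interfaces are assumptions.

// TimerPolicy: responsible for the timing implementation.
struct HighResolutionTimerPolicy {
    typedef std::chrono::steady_clock clock;
    clock::time_point start_;
    void start_timer() { start_ = clock::now(); }
    double elapsed() const {
        return std::chrono::duration<double>(clock::now() - start_).count();
    }
};

// CollectionPolicy: responsible for what and how to collect.
struct SingleRunCollector {
    double collected_;
    SingleRunCollector() : collected_(0) {}
    void collect(double seconds) { collected_ = seconds; }
    double result() const { return collected_; }
};

// LoggingPolicy: responsible for how to report.
struct OstreamLogger {
    std::ostream* os_;
    OstreamLogger() : os_(&std::cout) {}
    void log(const std::string& name, double seconds) {
        *os_ << name << ": " << seconds << "s\n";
    }
};

// ReportingPolicy: responsible for when to report. Used here only as
// a tag; this sketch hard-wires scoped (on-destruction) reporting.
struct ScopedReporter {};

template<typename TimerPolicy, typename CollectionPolicy,
         typename LoggingPolicy, typename ReportingPolicy>
class profiler : public TimerPolicy,
                 public CollectionPolicy,
                 public LoggingPolicy {
public:
    explicit profiler(const std::string& name) : name_(name) {
        this->start_timer();
    }
    ~profiler() {                       // scoped reporting
        this->collect(this->elapsed());
        this->log(name_, this->result());
    }
private:
    std::string name_;
};
```

Usage would then be something like
profiler<HighResolutionTimerPolicy, SingleRunCollector, OstreamLogger,
ScopedReporter> p("my_test");
with the elapsed time logged when p goes out of scope, and each of the four
aspects swappable independently of the other three.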

> Best,
> Christopher

