
From: Brian Braatz (brianb_at_[hidden])
Date: 2005-02-06 17:34:30


Further thoughts added below (which is why I am responding to my own
post :) )

Included are additional features \ design ideas to either add to
Profiler, or to make into a separate library.

Additionally, some thoughts on boost performance measurement of tests
in general using these ideas.

> > -----Original Message-----
> > On Behalf Of Martin Slater
> > Subject: Re: [boost] Profiler Library Proposal
> >
> > christopher diggins wrote:
> >
> > > I have posted preliminary documentation and source code for a
> > > proposal for the boost profiler library at
> > > http://www.cdiggins.com/profiler/ .
> > > The code has been successfully compiled on Visual C++ 7.1 but not
> > > tested yet.
> > >
> > > There have been some updates to the code as a result of the
> > > suggestions made so far. Any further comments are appreciated.
> >
> > Being able to generate a hierarchical recording / output would be
> > very useful (the profiler we use at work does this) so you can
> > drill down into code areas that are running slow.
> >
> > cheers
> [Brian Braatz]
> Sharing some experiences in relation to where this thread looks like
> it may go :)
>
> I am not proposing a library, just sharing with you a library I wrote
> and what the results were of building it.
>
> About a year ago, I played around with some similar concepts.
>
> I fundamentally wanted to "see what it would look like" if I could
> converge:
> * Design by Contract
> * hierarchical logging
> * profiling
>         Both by function and by cross-cut concepts from a set of
> functions AND the operations IN those functions
> * reverse-time debugging
>         I.e. you can go back in time and see how the call stack
> got you to where the error occurred, in addition to the previous call
> stacks that are now "gone"
> * thread-specific data tracking burned into the design
>
> By building the library, I got all of the above working. However,
> many questions remained for me as to whether the required syntax was
> "too much". This is still an open question in my mind, in that I have
> not personally decided either way.
>
> As you will see, the "requirements" for a simple function are quite
> verbose.
>
>
> What I built:
>         Simple syntax for both profiling and for logging the
> "context" of a concept.
>         Ability to grab the call stack of "where" an error occurred
> and what the code was doing when it hit the error.
>         Ability to put the local variables and the parameters into a
> call stack.
>         Ability to plug in profiling in such a way as to view a
> cross-cut of the performance of a set of operations.
>         Also, time profiling was done on EVERYTHING- so with a fancy
> XSLT modifier, you can cross-cut and get whatever kind of "executive
> level analysis" you want on the sheer volume of data you have.
> Basically, to the application coders, what this looks like is an
> "Eiffel" style programming model in C++, with the behind-the-scenes
> advantages of:
>         Profiling
>         Call stack tracing
>         Hierarchical logging-
>                 With context concepts embedded in the log so you can
> go back later and look at a "slice" of the performance of something.
> I even built a debugger that let me look at the XML files and do this.
>
[Brian Braatz]

I was thinking about this for the last few hours. Primarily I was
thinking about how to get the advantages of the lib I described last
post, without the pain it imposed on the app code through bulky
standards to conform to.

One of the problems with profiling is that, when you design the
library, you have to decide how intrusive to be. When I did my
experiment\bunny trail, I found that there are *A LOT* of interesting
things that can be done - you just have to have an extremely intrusive
library to do them.
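To make the intrusive end of that tradeoff concrete, here is a small,
self-contained sketch of the style I mean. Every name below
(ScopeProbe, PROBE_FUNC, and so on) is invented for this mail - the
real library I described was similar in shape but much larger:

#include <cstddef>
#include <ctime>
#include <iostream>
#include <string>
#include <vector>

struct ScopeProbe
{
    // Poor man's call stack (process-wide here; the real lib kept one
    // per thread for the thread-specific data tracking).
    static std::vector<std::string> stack_;
    std::string name_;
    std::clock_t start_;

    ScopeProbe(std::string const& name)
        : name_(name), start_(std::clock())
    {
        stack_.push_back(name_);
    }
    ~ScopeProbe()
    {
        // Scope exit logs elapsed time - this is the profiling part.
        std::cout << "<scope name=\"" << name_ << "\" clocks=\""
                  << (std::clock() - start_) << "\"/>\n";
        stack_.pop_back();
    }
    // What the "reverse time" debugger read back out of the logs.
    static void dump_stack()
    {
        for (std::size_t i = 0; i < stack_.size(); ++i)
            std::cout << "  " << stack_[i] << "\n";
    }
};
std::vector<std::string> ScopeProbe::stack_;

#define PROBE_FUNC(name)    ScopeProbe probe_(name)
#define PROBE_VALUE(expr)   std::cout << "<value name=\"" #expr "\">" \
                                      << (expr) << "</value>\n"
#define PROBE_REQUIRE(cond) \
    if (!(cond)) { std::cout << "<violation expr=\"" #cond "\"/>\n"; \
                   ScopeProbe::dump_stack(); }

// Even a trivial function ends up carrying a lot of ceremony:
int divide(int num, int denom)
{
    PROBE_FUNC("divide");
    PROBE_VALUE(num);              // parameters captured into the frame
    PROBE_VALUE(denom);
    PROBE_REQUIRE(denom != 0);     // DbC-style precondition
    int result = num / denom;
    PROBE_VALUE(result);           // locals captured too
    return result;
}

int main() { divide(10, 2); return 0; }

Multiply that ceremony by every function in an application, and you can
see why I still have not decided whether the syntax is "too much".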

It should be possible, however, to do the following.

1- Keep going on the profiler thinking. This is good.

2- Build either a separate lib (Boost.Instrument?) or a separate piece
of Boost.Profiler which:

        * Is built on top of Spirit
        * Is capable of parsing C++ code and re-generating an
instrumented version of it
        * Is HEAVILY policy\trait driven, in such a way that people can
"decide for themselves" what they want to tweak or add to in the
outputted instrumented files (see the sketch after this list)
                ** I.e. by modifying the policies for Boost.Profiler
and modifying the policies for Boost.Instrument you can "have it your
way :)"
        * Is built part and parcel with the Profiler library, since
what the Instrument library is doing is delivering code which uses the
profiler library
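
The Spirit parsing side is far too big to sketch in a mail, but the
policy side can be shown. To be clear, everything below is hypothetical
- the names, and the probe call the generator emits, are placeholders
rather than any real Boost.Profiler API:

#include <iostream>
#include <ostream>
#include <string>

// What Boost.Instrument would know about each function it parsed
// out of the original source (via Spirit, not shown here).
struct function_info
{
    std::string return_type;
    std::string name;
    std::string params;
};

// One possible user-supplied policy: wrap every function body in a
// profiling probe. The emitted "profiler::probe" line is a placeholder
// for whatever the real Boost.Profiler entry point turns out to be.
struct profile_policy
{
    static std::string prologue(function_info const& f)
    {
        return "    profiler::probe p_(\"" + f.name + "\");\n";
    }
    static std::string epilogue(function_info const&) { return ""; }
};

// The generator is parameterized on the policy, so swapping in a
// logging policy or a DbC policy changes what every instrumented
// file contains - that is the "have it your way" part.
template <class Policy>
void emit_instrumented(function_info const& f, std::string const& body,
                       std::ostream& out)
{
    out << f.return_type << " " << f.name << "(" << f.params << ")\n{\n"
        << Policy::prologue(f) << body << Policy::epilogue(f) << "}\n";
}

int main()
{
    function_info f;
    f.return_type = "int";
    f.name = "divide";
    f.params = "int num, int denom";
    emit_instrumented<profile_policy>(f, "    return num / denom;\n",
                                      std::cout);
    return 0;
}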

Why I think this idea is a good one:
        * I can go buy BoundsChecker, or some other profiler \
debugger.
                ** I cannot "dig inside it and change it" (!)
                ** Too frequently these things give me TOO MUCH
information and not the right information I need. If I could modify
how they work, then I could tailor them to my own problems.

        * For boost, this means we could run an instrumented build of
all of boost, and we could analyze\report on the different performance
characteristics of lib\os\compiler combinations.

Part of this idea is inspired by the signals performance issue:

Jody Hagins:
"
Are there any docs which describe the performance of the signal/slot
library? I was about to embark on a performance study because I want to
use it in a very high performance critical code path, but I thought I'd
ask if anyone else may have already done some work in this area.
"
http://tinyurl.com/5bpzq

I am sorry, I have to run, and I have run out of time. There was
another issue regarding signals, I recall, where it looked like we had
some performance problems. I regret I have not yet found that mail in
the archive.

I would love the regression tests to also be able to spit out
performance numbers. This would also help me, as a user of boost, to
ascertain, for time-critical code, which compiler I want to use and
how the boost libraries perform under a given environment.
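
As a rough sketch of what I mean (the output format here is made up -
just the kind of machine-readable record the test infrastructure could
collect and compare across lib\os\compiler combinations):

#include <ctime>
#include <iostream>

// Stand-in for a real boost regression test.
bool test_widget()
{
    volatile int sum = 0;
    for (int i = 0; i < 1000000; ++i)
        sum += i;
    return sum != 0;
}

int main()
{
    std::clock_t start = std::clock();
    bool passed = test_widget();
    std::clock_t elapsed = std::clock() - start;

    // Pass/fail as today, plus a timing record per test:
    std::cout << (passed ? "PASS" : "FAIL")
              << " test_widget clocks=" << elapsed
              << " clocks_per_sec=" << CLOCKS_PER_SEC << "\n";
    return passed ? 0 : 1;
}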

