From: Martin Schulz (Martin.Schulz_at_[hidden])
Date: 2007-04-02 15:04:08
> Please look at the first line in the Boost Design and
> Programming Guidelines :
> It may be that some compilers optimize the current code
> better than others and that some do very poorly. Also in
> accordance with the aforementioned guidelines we have not
> focused primarily on performance or performance testing.
That's fully okay. I never said the performance is too slow to be
usable. I never said that it should not be included because of
the observed drop in some cases. I did not even mention any
performance aspect in my initial review.
It is just the uncontradicted repetition of the "zero-overhead" claim
over and over again that irritates me. It indicates that
performance is not so unimportant after all.
The factors of 15 or so are clearly measurement noise, as I already said;
in fact the whole example has little meaning. It can't prove that
the library is fast, nor that it is slow. We can observe that it can be
zero-overhead in some cases and may involve overhead in other cases that
are quite nearby. It is this "discontinuous behaviour", that makes
things difficult sometimes.
After all, a factor between zero and three is no big deal if it
applies to a moderate number of arithmetic operations only.
You should know that some innocuous conditional statements in a loop,
a cache miss, or other kinds of data dependencies may already hide this.
In essence, I would qualify that library as "zero- to low-overhead".
And, I must add, I cannot hope to make it any better.
> There is no fundamental reason why the code should not be
> able to be optimized away,
Yes, there is no fundamental reason. It is just that the "compilers
I currently have at hand", as I wrote, produce mixed results.
It appears that I cannot simply use the library and rely on
my compiler producing zero overhead.
Such a claim would be too ambitious.
> Finally, if a
> nominally compile-time unit system incurs as much overhead as
> you seem to believe, imagine the cost of a runtime system...
Again, you are completely right. Did I suggest that a runtime
system would be faster? Then I have to apologize. I tried to convince
you (and anybody else) that it would be better to have a dynamic system
that is closely integrated with the static one (and vice versa).
A dynamic system enables additional benefits, going far
beyond what a static unit system can do. I simply cannot follow
the one-size-fits-all attitude.
Consequently, I opt against rational exponents, as I consider them to be
an obstacle for such an integration.
Furthermore, I tried to point out that inside computation kernels
(that's where performance really matters), you will probably go without
any units. Partly because numerical libraries of all sorts do not support
units anyway, partly because I will try to avoid any uncertainties or
compiler idiosyncrasies that might get in my way.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk