Boost Users:
From: Zeljko Vrba (zvrba_at_[hidden])
Date: 2007-06-30 02:32:35
On Fri, Jun 29, 2007 at 03:24:47PM -0500, Michael Marcin wrote:
>
> Whether it is matters or not is another question but you can look at
> generated code and determine if the compiler is doing a good job.
>
> For instance say I have:
>
Yes, you can determine that. But, IMHO, not by a fixed metric whose
computation can be automated by static analysis.
>
> Now if test_1 ends up calling a function for operator== or pushes
> anything onto the stack, it's not optimal, and my_type and/or its
> operator== need to be fiddled with.
>
*OR* you need to fiddle with compiler options because the inlining limit
has been reached.
>
> As I said before there is no reliable timing mechanism available and the
> process of compiling, installing, and running programs on this target
> cannot be automated AFAIK.
>
If the target (CPU+OS) uses a common CPU, you can acquire a machine with
the same CPU type and an OS that allows you to do proper profiling. If
the problem is with the CPU itself, then I'm out of ideas. I would
personally go down the route of figuring out how to do empirical measurements
rather than static analysis.
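To illustrate the empirical route, here is a minimal micro-benchmark
sketch in Python. The function under test is a hypothetical stand-in; on
the real target you would time the actual routine, compiled for a
comparable CPU, with whatever high-resolution timer that platform offers:

```python
# Minimal micro-benchmark sketch on a host machine, as an alternative
# to static analysis. `equality_under_test` is a hypothetical stand-in
# for the operation being measured.
import timeit

def equality_under_test(a, b):
    # Stand-in for the comparison whose cost we want to measure.
    return a == b

def benchmark(fn, *args, repeat=5, number=100_000):
    """Return the best per-call time (in seconds) over several repeats.

    Taking the minimum of several repeats reduces noise from other
    processes on the host machine.
    """
    timer = timeit.Timer(lambda: fn(*args))
    return min(timer.repeat(repeat=repeat, number=number)) / number

t = benchmark(equality_under_test, (1, 2, 3), (1, 2, 3))
print(f"{t * 1e9:.1f} ns per call")
```

The absolute numbers from a host machine are only a rough proxy for the
target, but relative comparisons between two implementations of the same
routine are usually meaningful.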
As for static analysis - I'd begin with a list of "blacklisted" functions,
i.e. those that MUST be inlined in good code, and grep the generated ASM for
calls to these functions. Simple (once you manually prepare the list) and
easily automated (fgrep). With modern CPUs, and without input data, anything
else is not a reliable indication of run-time performance (again, IMHO).
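The fgrep idea can be sketched as a small script. A minimal Python
sketch, assuming x86-style `call` mnemonics and a hypothetical mangled
name standing in for my_type's operator== (the real names would come
from your own blacklist, and the assembly from e.g. `g++ -S`):

```python
# Sketch: scan generated assembly for calls to "blacklisted" functions,
# i.e. functions that should have been inlined away. The mangled name
# and the sample listing below are hypothetical.
import re

# Mangled names that must NOT appear as call targets in optimized code.
BLACKLIST = {"_ZNK7my_typeeqERKS_"}

# Matches an x86 `call` mnemonic and captures its target symbol.
CALL_RE = re.compile(r"\bcall\s+(\S+)")

def find_blacklisted_calls(asm_text):
    """Return (line_number, symbol) pairs for calls to blacklisted symbols."""
    hits = []
    for lineno, line in enumerate(asm_text.splitlines(), 1):
        m = CALL_RE.search(line)
        if m and m.group(1).split("@")[0] in BLACKLIST:
            # split("@") drops a possible @plt suffix on the symbol.
            hits.append((lineno, m.group(1)))
    return hits

sample = """\
test_1:
        push    rbx
        call    _ZNK7my_typeeqERKS_
        ret
"""
print(find_blacklisted_calls(sample))  # -> [(3, '_ZNK7my_typeeqERKS_')]
```

An empty result means none of the blacklisted functions survived as out-of-line
calls; a non-empty one points you at the exact line to inspect.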
Oh, and read your compiler's docs :) Some compilers can generate optimizer
reports. E.g. Intel's compiler has options to report the optimizer's actions
during compilation, and Sun's compiler has a separate tool (er_src) that
analyzes the final executable and reports on inlining, loop transforms, etc.
Best regards,
Zeljko.
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net