From: Vladimir Prus (ghost_at_[hidden])
Date: 2005-09-20 10:26:11
Kevin Wheatley wrote:
> David Abrahams wrote:
>> On the Boost.Build list we were just discussing the fact that some
>> people otherwise inclined towards Boost have chosen Scons over
>> Boost.Build. It would be useful for us to understand some of the
>> reasons why, if some of you wouldn't mind letting us know. No flames,
> not a flame, but some test results (Noel Llopis' blog) show a few
> interesting results:
> http://tinyurl.com/7fdns (www.gamesfromwithin.com)
> These are updated from previously.
Two comments on those results:
1. They are skewed a bit.
2. They don't represent much.
1. They are skewed, because performance measurement is hard. The only way
to know for sure is to test on your real data; everything else needs
justification that the test is adequate. For example, this specific test has
a lot of files with the same name, which hits a specific non-scalability in
V2. Originally, the test required 33 seconds on my system; a one-line fix
removed 6 seconds from that time (-20%).
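To illustrate the kind of non-scalability meant here (this is a hypothetical sketch, not Boost.Build's actual code): if targets sharing a base name are matched by a linear scan over a flat list, lookup cost grows with the number of duplicates, while keying by full path keeps each lookup constant-time:

```python
# Hypothetical illustration of same-name target lookup. Assumption: the
# slow path scans a flat list linearly; the fix indexes by full path.

def find_linear(targets, path):
    # O(n) scan over every registered target path.
    for t in targets:
        if t == path:
            return t
    return None

def find_indexed(index, path):
    # O(1) lookup when targets are keyed by their full path.
    return index.get(path)

# Many files named test.cpp in different directories, as in the benchmark.
paths = [f"dir{i}/test.cpp" for i in range(1000)]
index = {p: p for p in paths}

assert find_linear(paths, "dir999/test.cpp") == find_indexed(index, "dir999/test.cpp")
```

With a thousand same-named files, the linear scan touches up to a thousand entries per lookup, which is exactly the sort of cost a one-line indexing fix can eliminate.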
2. They don't represent much, because little time has been invested in
optimisation so far. After all, I could rewrite critical parts in C for a 100x
speedup of those parts.
Just today, spending something like an hour, I brought the running time from:

Real time: min 34:400, max 37:60, avg 35:412
User time: min 33:350, max 35:230, avg 33:832

to:

Real time: min 25:670, max 29:390, avg 26:838
User time: min 24:750, max 27:860, avg 25:644

(bjam -n, averaged over 5 runs)
That is a 25% reduction in real time.
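For reference, the min/max/avg summaries above can be produced from raw per-run timings with a few lines of Python. In the sketch below, the min and max match the "after" figures from the post, but the three middle run times are made up for illustration:

```python
import statistics

def summarize(times):
    """Summarize a list of wall-clock times (seconds) as min/max/avg."""
    return {
        "min": min(times),
        "max": max(times),
        "avg": statistics.mean(times),
    }

# Five `bjam -n` runs: min and max taken from the post, middle values invented.
real_times = [25.670, 29.390, 26.100, 26.500, 26.530]
print(summarize(real_times))
```

Averaging several runs like this smooths out scheduler and disk-cache noise, which is why the post reports all three statistics rather than a single timing.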
Certainly, more improvements are possible, but at the moment V2 appears to
be faster than V1, and the primary priority is ease-of-use, so that we can
switch to V2 for Boost.
I'd surely be willing to speed things up, but those folks did not even
contact us about their findings (and my email to their mailing list is
still hanging in the moderation queue). I'm not sure they contacted the
SCons developers, either.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk