From: Beman Dawes (bdawes_at_[hidden])
Date: 2003-06-17 11:59:47
At 11:38 AM 6/17/2003, Howard Hinnant wrote:
>On Tuesday, June 17, 2003, at 10:53 AM, David Abrahams wrote:
>> You might also infer how good a compiler's language support is in
>> some cases, but that takes a much more sophisticated view.
>Most pointy-haired managers and newbies are convinced they are
>sophisticated enough to use the results in this manner. I'm seeing
>this more and more often on the newsgroups:
>> Vendor X is <better than / comparable to / etc> Vendor Y according to
>> the boost tests!
>Here is an actual quote from comp.std.c++!
>> As for standard conformance , maybe you can check boost compiler status
>> page ( www. boost .org/compiler_status.html ).
>> That is by no means official conformance test ,
>> but I found the data listed very illustrative.
>I am quite convinced that I am not sophisticated enough to use the
>results in this manner without at least a couple of months work (if not
It used to take a lot of work to interpret the test results. There were many
failures for each compiler, even though many workarounds were already in
place. Some of the failures were caused by problems in Boost code, and some
by compiler bugs. Very messy to interpret.
But that situation has changed. The latest releases from a number of
compilers need only a few (sometimes only one) of the BOOST_NO_ macros that
represent compiler deficiencies requiring workarounds. In addition to
implementing more of the language, these compilers also have fewer bugs.
Boost code has been improved, too. Nowadays you only need one hand to count
the Boost issues, and I hope those will be cleared in the very near future.
On the Win32 tests, which I follow most closely, we are very close to 100%
of the tests passing for a few compilers. For others, only a very small
number of tests will be failing. That makes it much easier to interpret the
results.

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk