From: Beman Dawes (beman_at_[hidden])
Date: 2000-06-15 13:16:28
At 07:42 PM 6/15/2000 +1200, Mark Rodgers wrote:
>Perhaps we need to identify "champions" for as many platforms
>as we can. Then for the future these people could report back
>on the compatibility of each library with their platform (perhaps
>as part of the review process, or after major changes) and each
>library's documentation could have a reasonably comprehensive
>As long as the library author provides a comprehensive test suite,
>and as long as the champion is only expected to report back a
>success/failure rather than take responsibility for making it work,
>it shouldn't be too time consuming a task.
I have been thinking about the issue of boost libraries working with
various compilers from a slightly different perspective.
It would be good practice for me to run a regression test covering as
many boost libraries and as many compilers as possible before
updating the web site with modified libraries. Perhaps an HTML table
showing current compiler status could be generated automatically for
the web site.
It would be easy for me to cover the BCB4, BCC5.5, egcs, Metrowerks,
and Microsoft compilers on Win2K. A couple of other vendors would
probably donate compilers if asked. That leaves holes for Unix/Linux
and the Mac, but it would at least catch the obvious bugs.
>I'm happy to do this sort of thing for BCB4 and BCC5.5.
What would really be a help is if you (or other members) could come
up with a test harness design to solve the n*m problem.
IOW, if there are 20 boost test programs which should be compiled and
run, and there are 12 compiler/std-library combinations, there will
be 20*12 == 240 compilations and tests. That's OK. What isn't OK is
to have 240 setups to create and maintain. Instead, there should be a
single list of the 20 programs and 12 separate compiler setups. So
adding a new test
program involves just adding to a list. Adding a new compiler
likewise just means adding a single compiler setup, regardless of how
many programs are to be tested. Setting up a new compiler needs to
be a separate step, so it can be done by someone familiar with that
compiler.
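To illustrate the n+m idea, here is a rough sketch in Python (one of
the portable scripting options). The file names, compiler commands,
and function names are my own assumptions for illustration, not an
existing Boost tool:

```python
#!/usr/bin/env python
"""Sketch of an n+m test harness: maintain n test programs and m
compiler setups as two flat lists, and derive the n*m test matrix by
crossing them. All names and commands are illustrative assumptions."""
import itertools
import subprocess

# Adding a test program means appending one entry here...
TEST_PROGRAMS = ["smart_ptr_test.cpp", "rational_test.cpp"]

# ...and adding a compiler means appending one setup here, regardless
# of how many programs exist. Each setup is a compile-command prefix.
COMPILER_SETUPS = {
    "gcc": ["g++", "-o", "test_prog"],
    "bcc": ["bcc32", "-o", "test_prog"],
}

def build_jobs(programs, setups):
    """Cross the two lists: len(programs) * len(setups) jobs derived
    from only len(programs) + len(setups) maintained entries."""
    return list(itertools.product(setups, programs))

def run_job(compiler, program, setups):
    """Compile and run one (compiler, program) pair; return a status
    string suitable for a results table."""
    if subprocess.call(setups[compiler] + [program]) != 0:
        return "compile failed"
    return "pass" if subprocess.call(["./test_prog"]) == 0 else "fail"

# 20 programs x 12 compilers -> 240 jobs, but only 32 maintained entries.
print(len(build_jobs(range(20), range(12))))  # -> 240
```

The point of the sketch is that `build_jobs` is the only place where
the n*m explosion happens, and it is computed rather than maintained.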
It would also be nice, but not an absolute requirement, if 1)
programs without dependency changes didn't get recompiled, 2) any
scripting was portable (by using GNU, perl, or python tools for
example) so it could be used on other operating systems, 3) an HTML
table was generated reporting test results, 4) other suggestions from
experienced multiplatform testers were incorporated.
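For point 3, the HTML results table could be generated from the
collected statuses along these lines (a sketch only; the table layout
and status strings are assumptions):

```python
def results_table(programs, compilers, results):
    """Render the test matrix as a plain HTML table: one row per test
    program, one column per compiler. `results` maps a
    (compiler, program) pair to a status string such as "pass"."""
    rows = ['<table border="1">',
            "<tr><th>Program</th>" +
            "".join("<th>%s</th>" % c for c in compilers) + "</tr>"]
    for p in programs:
        cells = "".join("<td>%s</td>" % results.get((c, p), "untested")
                        for c in compilers)
        rows.append("<tr><td>%s</td>%s</tr>" % (p, cells))
    rows.append("</table>")
    return "\n".join(rows)

# Hypothetical results for two programs under two compilers.
html = results_table(
    ["smart_ptr_test", "rational_test"],
    ["gcc", "bcc"],
    {("gcc", "smart_ptr_test"): "pass",
     ("bcc", "smart_ptr_test"): "fail"})
```

Untested combinations fall out naturally as "untested" cells, so a
partially run matrix still produces a complete table.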
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk