Boost Testing:

From: Martin Wille (mw8329_at_[hidden])
Date: 2005-06-15 10:29:08


Douglas Gregor wrote:
> On Jun 14, 2005, at 9:56 PM, Martin Wille wrote:
[...]
> In truth, I'm actually hoping that we can add compilers/platforms after
> the release. Once we have a clean slate (= no unresolved issues), it's
> easier for us to keep it at a clean slate now that we get daily
> feedback on our changes. If someone expresses an interest in getting
> toolset X to work, we add it as a release compiler and clean up the
> mess.
>
> GCC 3.2.3 should be marked supported... there are only 2 failures
> different from GCC 3.3.6, and overall the compiler does very well and
> is used by many Linux distributions.

Hmm, OK, I'll keep 3.2.3 then.

> As for GCC 2.95.3... I never know what to do about that compiler. I've
> heard that it's still used by lots of people, but I haven't seen any
> evidence of that myself.

I've heard that, too. However, currently 66% of the tests are failing
for gcc-2.95.3 without STLport.

>>If you have spare resources then you could run intel tests if that
>>compiler is supported by intel for the Linux distribution you use.
>>There's no hassle involved, license-wise, in installing the Intel
>>compiler for testing Boost. However, Intel doesn't support the
>>distribution I use, and making Intel's install script work requires
>>manual work for every update; it is quite a bit of a hassle (e.g. it
>>involves installing a fake RPM database). If you could run the Intel
>>tests instead of me, this would make my life easier and improve my
>>testing throughput.
>
>
> Our Linux boxes run Gentoo, which is unfortunately not a supported
> distribution.

We're in the same boat then; I use Gentoo, too.

> But, I'll check with our sysadmin; he might have some
> tricks up his sleeve to make things run more smoothly, and we still
> have one or two Linux systems that could also be doing nightly testing.

There are several issues to address:

1. The installation script uses RPM. Gentoo usually doesn't. The
prerequisites for icc aren't recorded in the RPM database. This is easy
to work around by passing flags to rpm that make it ignore missing
prerequisites.

2. Intel's installation script checks for rpm by querying rpm for the
rpm package itself. Since Gentoo doesn't use rpm to install rpm, there
is no such package. This is easy to fix by patching the check out of the
install scripts.

3. If you maintain only a single version of the Intel compiler then
things are somewhat bearable. However, when it comes to maintaining more
than one installation, administration becomes grim: apparently, Intel's
install script allows only one version to be present at a time. As a
result, you have to maintain separate RPM databases. I chose to throw
away the existing RPM database whenever I upgraded one of the Intel
compilers. Then I reran all the install scripts of the base package and
the updates for the Intel compiler. Then I renamed the target directory
and patched Intel's wrapper script to deal with the new directory names.

4. icc tries to locate the system headers. However, Gentoo installs them
at an unusual location and icc fails to parse the output of gcc
-print-search-dirs. Manual patching of the icc/icpc wrapper scripts is
required again.

I'm sure all of that can be scripted. However, it's messy and probably
not worth the effort.

There used to be support for icc built into Gentoo. However, it didn't
follow the updates as quickly as you would want it to. It didn't support
multiple compiler versions, either. I don't know the current state of
that support.

[Context: How to avoid needless attempts to run known-to-fail tests]
>>1. Add the "unsupported" information to the tests themselves, e.g. by
>>making them print "unsupported" (we could even add information about
>>what is unsupported: "unsupported(compiler)", "unsupported(bzlib)").
>>This would spare us some markup and the information provided would be
>>more detailed than what the manual markup currently gives us (e.g.
>>Spirit is marked unusable for gcc-2.95. Some parts, though, would work
>>on that compiler.)
>
>
> GCC does this by placing comments in the test files, e.g., "{ xfail
> i686-*-* }". Granted, their tests tend to be very different from ours.
>
>
>>2. Add another step to the build procedure. That step would make the
>>information from Boost.Config available to bjam. This could be done by
>>writing a C++ program which writes a jamfile which gets included later.
>>This would enable a library author to turn the tests for, say, wide
>>character sets off when they aren't well supported by the environment.
>
>
> There's also the explicit-failures-markup, which contains the
> "unusable" information used by the reports. If bjam could grok that,
> we'd get the same results. In some ways that's easier, because one
> could write some XSLT to transform explicit-failures-markup into
> something bjam could read and use.

Yes.

However, I think using explicit-failures-markup would make the build
procedure a bit unclean. At the moment, explicit-failures-markup is
applied *after* building Boost and running the tests. I don't think it
is a good idea to make the build procedure depend on that file.
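
The alternative from point 2 above would derive that knowledge from
Boost.Config instead. A rough sketch of what such a generator step could
look like (the file name and jam variable names are invented for
illustration, not an existing convention):

        // Sketch only: a tiny program, built before the tests, that
        // turns Boost.Config macros into a jamfile fragment which the
        // test Jamfiles could then include.
        #include <boost/config.hpp>
        #include <fstream>

        int main()
        {
            std::ofstream jam("config-features.jam");
        #ifdef BOOST_NO_STD_WSTRING
            jam << "HAVE_WIDE_STRINGS = 0 ;\n";
        #else
            jam << "HAVE_WIDE_STRINGS = 1 ;\n";
        #endif
        #ifdef BOOST_NO_CWCHAR
            jam << "HAVE_CWCHAR = 0 ;\n";
        #else
            jam << "HAVE_CWCHAR = 1 ;\n";
        #endif
            return 0;
        }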

Explicit-failures-markup is a source of errors and needless redundancy
if you try to encode information like "doesn't run on systems that don't
support wide character strings well". This information is available for
free in the C++ program that is being compiled. Wrapping the complete
test code in the appropriate #ifdefs and providing the alternative

        std::cerr << "unsupported(wide string support is insufficient)\n";

is rather easy. It would result in a runnable binary and there would be
no further recompilations. The alternative shouldn't consume much
compile time compared to instantiating tons of templates and emitting
megatons of error messages.
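
A minimal sketch of such a wrapped test (assuming BOOST_NO_STD_WSTRING
is the relevant Boost.Config macro; a real test would use whatever macro
matches the feature it needs):

        #include <boost/config.hpp>
        #include <iostream>

        #ifdef BOOST_NO_STD_WSTRING
        // Fallback: report the limitation instead of failing to compile.
        int main()
        {
            std::cerr << "unsupported(wide string support is insufficient)\n";
            return 0;
        }
        #else
        // The real test, compiled only where wide strings work.
        #include <string>
        int main()
        {
            std::wstring s(L"wide");
            return s.size() == 4u ? 0 : 1;
        }
        #endif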

I once posted another idea: let's make the test programs emit additional
XML output that gets included into test_results.xml. We could use that
to pass information about unsupported toolsets/environments to the XSLT
processing stage easily.
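
For example (the element names here are invented, just to illustrate the
idea):

        #include <iostream>

        int main()
        {
            // Invented element names: the point is only that the test
            // binary itself reports why it could not run, in a form that
            // could be merged into test_results.xml for the XSLT stage.
            std::cout << "<extra-test-info>\n"
                      << "  <unsupported reason=\"bzlib not available\"/>\n"
                      << "</extra-test-info>\n";
            return 0;
        }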

BTW, we should also distinguish between these two "unavailable" cases:

1. Is not supposed to work on the target system (classical known failure)
2. Is supposed to work, but is not tested because some resource is
missing on the testing system (e.g. Python, ICU, Spirit 1.6 or bzlib)
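
If the test binaries report this themselves, the two cases could simply
become two different messages; the wording below is invented for
illustration, not existing Boost practice:

        #include <iostream>

        int main()
        {
            // Case 1: a known limitation of the toolset.
            std::cerr << "unsupported(compiler lacks wide character support)\n";
            // Case 2: a resource missing on this particular test machine.
            std::cerr << "untested(bzlib is not installed on this machine)\n";
            return 0;
        }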

Putting that information into explicit-failures-markup would result in a
mess, because we would have to distinguish not only between the toolsets
but also between the different machines which might run tests for the
same toolset.

Regards,
m

