Subject: Re: [boost] [release] scope_exit, local_function, utility/identity_type, and functional/overloaded_function
From: Daniel James (dnljms_at_[hidden])
Date: 2012-05-01 04:07:11


On 1 May 2012 01:59, lcaminiti <lorcaminiti_at_[hidden]> wrote:
>
> Daniel James-3 wrote
>>
>>> For all of those, I've followed the checklist so I've run the code
>>> inspection tool, updated all docs, libraries.htm, and maintainers.txt.
>>
>> Looks fine (at least it did when I looked last week). I did find the
>> way you force some tests to fail a bit odd; why don't you just let
>> them fail naturally?
>>
>
> Which tests?
>
> Some ScopeExit and LocalFunction tests require variadic macros and C++11
> lambdas, so I used Boost.Config to detect when these are not supported and
> fail those tests with
>
> #error "variadic macros required"
>
> or
>
> #error "lambda functions required"
>
> I find these errors more readable than the ugly ones the libraries would
> generate otherwise. The point is that users should not use these library
> features at all without variadic macros or lambda functions (so it does
> not matter if the libraries generate nasty errors when the regression
> tests check these features blindly on all compilers).
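
For concreteness, the guard you describe presumably looks something like
this (just a sketch; BOOST_NO_VARIADIC_MACROS is my assumption for the
Boost.Config macro involved, and the real test sources may differ):

    #include <boost/config.hpp>

    #ifdef BOOST_NO_VARIADIC_MACROS
    #   error "variadic macros required"
    #else
    // ... test code exercising the variadic-macro interface ...
    int main() { return 0; }
    #endif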

Sorry, that was more an aside than a big sticking point, but those
were the tests I was referring to. I'll try to explain. What I
actually do is suppress failures for C++11 features that aren't
available, so I'll first explain my reasons for doing that.

Consider a user of your library who runs the tests for every release
to check that there are no regressions. The first time they do that
they notice there are failures, check them, find they're acceptable
failures, and learn that test failures for your library can be
acceptable. For the next few releases they continue to check them, but
after a few releases they will inevitably get less careful, which
means they could easily miss an unacceptable failure. So for such
users the best thing to do is to force the test to pass. They really
just want a binary 'is this okay?' check to quickly and easily flag
problems. Acceptable failures defeat that purpose.
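
In code terms, that means compiling the test down to a trivial pass
when the feature is missing, rather than raising an #error (again a
sketch; BOOST_NO_LAMBDAS is my assumption for the relevant
Boost.Config macro):

    #include <boost/config.hpp>

    #ifndef BOOST_NO_LAMBDAS
    int main()
    {
        // the real test would exercise the lambda-based interface;
        // a trivial lambda stands in for it here
        int r = 0;
        [&r]() { r = 1; }();
        return r == 1 ? 0 : 1;
    }
    #else
    // feature unavailable: nothing to test, so report a pass
    int main() { return 0; }
    #endif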

So... why not suppress these failures? A good reason might be to avoid
losing information: the test might unexpectedly pass because the
information from the config macro is incorrect. You also don't want to
suppress a failure for a genuine problem that you are working on
fixing. Or perhaps a particular setup just isn't supported at all, and
the tests should fail.

What you're doing is using the test results as documentation. IMO test
results are not an appropriate place for documentation (although the
tests themselves can be useful as documentation). Ideally a user
shouldn't read test results; they should just check that the tests
passed. It's better to list portability issues in the actual
documentation.

Of course this is all my opinion, not official policy. Basically it
boils down to the belief that a test failure should indicate an actual
problem. (I'll repeat for casual readers: this is about tests for
C++11 features which aren't available, not about failures due to
compiler/library bugs.)

