From: David Abrahams (dave_at_[hidden])
Date: 2006-05-13 11:55:11
"Gennadiy Rozental" <gennadiy.rozental_at_[hidden]> writes:
> "David Abrahams" <dave_at_[hidden]> wrote in message
>> "Gennadiy Rozental" <gennadiy.rozental_at_[hidden]> writes:
>>> "David Abrahams" <dave_at_[hidden]> wrote in message
>>>> "Gennadiy Rozental" <gennadiy.rozental_at_[hidden]> writes:
>>>>>> No, it's the other way round. The UB causes halts in regression
>>>>>> testing. I experienced dozens of these incidents. As I said,
>>>>>> I wasted days of CPU and human time on this problem.
>>>>>> How does not mapping a signal to exceptions, and letting the
>>>>>> process die instead, cause a halt in regression testing?
>>>>> Because some compilers would show a dialog window, for example.
>>>>> Unfortunately there is no silver bullet here. One will have to
>>>>> deal with stalling regression tests one way or another. Which
>>>>> case has fewer incidents is an open question.
>>>> In the absence of other data, it seems to me that Martin's report
>>>> should be given more weight.
>>> What do you mean by "absence of other data"? I know for sure that
>>> several NT compilers will produce a dialog window.
>> Hmm, maybe I misunderstood the argument. Isn't there a way of encoding
>> this information in the library and allowing tests to specify a
>> default mode, e.g.:
>> "By default, I am being run as part of an automated test suite and
>> should not stall the process"
>> "By default I am being run by hand..."
> I don't know about the default (how do you propose the library would
> figure out whether it's being run from a regression run or by hand?),
> but users can specify how they want the library to behave using either
> a command-line argument (CLA) or an environment variable (or, starting
> next release, a config file).
I'm not sure whether it's the right kind of specification, though. Is
it? What I'm looking for is a specification that says, "run the test
so that a regression test is least likely to halt, whatever that means
on this particular system/compiler," not "run the test with signal
handlers that throw exceptions."
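For concreteness, the mechanism-level knob Gennadiy describes would look something like the following. The flag and variable spellings here are taken from later Boost.Test documentation and may not match the release under discussion, so treat them as illustrative rather than authoritative:

```shell
# Illustrative only: flag/variable names are from later Boost.Test
# documentation and may differ in the release under discussion.

# Per-invocation, via command-line argument: disable the
# signal-to-exception mapping so a crashing test simply dies.
./my_unit_test --catch_system_errors=no

# Per-run, via environment variable: convenient for driving an
# entire automated regression sweep from one place.
BOOST_TEST_CATCH_SYSTEM_ERRORS=no ./my_unit_test
```

Note that either spelling still names the mechanism (catch system errors or not), not the intent ("don't stall an unattended run"), which is the distinction at issue above.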
>> maybe this mode specification thing is even unnecessary, I don't know.
>> But if you know which platforms and compilers will benefit from
>> mapping signals, it seems to me you should only do it there.
> IMO all users on all platforms can benefit from signal catching (both
> in regression runs and runs by hand).
Clearly Martin was not benefitting.
> At the same time, the same users need to understand the possibility
> (small, IMO) of a hung test in case of severe memory corruption.
This was apparently not a small probability for Martin. He said it
happened dozens of times.
> Now you propose to have different default behavior on different
> platforms. I would strongly oppose such inconsistency (and don't
> forget there are other reasons mentioned in this thread to keep the
> current default). Regression tool maintainers (and/or library
> developers) need to decide what they want on a case-by-case basis
> (tool by tool, or even better, test/tool by test/tool).
I fundamentally disagree that that is the ideal. It's one of the
major goals of Boost.Build and the testing framework built around it
to centralize expertise about platforms, tests, compilers, etc., so a
library developer does _not_ need to be an expert in every platform,
compiler, etc. that his library may run under. One should be able to
use high level abstractions to describe what needs to be accomplished,
and allow the knowledge of platform-specifics embedded in the
framework to take care of the details. So far, we've been pretty
successful. However, if Boost.Test doesn't cooperate with that
approach, it will either undermine the effort, or we will have to stop
using it in the Boost regression tests.
> And unfortunately, in some cases, whatever you choose, you will still
> be exposed to the possibility of a hung run (from either a deadlock or
> a dialog box).
There's always a chance, just as you may sometimes need to learn the
arcane details of a given compiler in order to build a library there.
The goal is to minimize those occurrences.
-- Dave Abrahams Boost Consulting www.boost-consulting.com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk