
Boost Testing :

From: Martin Wille (mw8329_at_[hidden])
Date: 2007-03-24 13:33:24


David Abrahams wrote:

>> On Windows I go to the library's test directory and just run the
>> attached batch files; I'd expect these tools to work in the same way
>> on other platforms too.
>>
>> If the tester and the developer manage to agree to work on a problem
>> for an hour or two, they could test corrections almost in real time,
>> with a dramatically reduced turnaround time.
>
> I've gotten the impression that Martin doesn't feel comfortable
> dealing with bjam directly, but it would be great if I was wrong.

It takes me some time to figure out what command I have to run for each
particular request (this includes command-line options, environment
variables, how not to disturb other test data, checking whether any
modifications from previous requests have to be undone, etc.).

The situation is relatively easy when you have to care about only a
single library, as a library maintainer would. In that position, a
few simple scripts like Nicola's do the job fine.

In this particular case, we had to localize the problem first (it was
even unclear whether the problems were caused by the test harness, the
build system, configuration errors on the test machine, or environment
problems on the test machine). For that, running the tests exactly as
regression.py would run them is quite essential. Otherwise, we risk
losing time by guessing based on partial or even invalid data.

A lot of factors contribute to the unpleasant situation (listed in no
particular order):

o slow table generation (I have no idea whether that is a performance
problem or just a configuration issue)

o long test runs (the machine here is slow; machines are not available
around the clock; bjam consumes a lot of memory (1.5 GB) and time
(1 hour wall clock); too many toolsets; too many tests (serialization))

o an after-freeze decision to fix problems for more exotic platforms
like Cygwin.

o time zones

o personal workload of the contributors

o using a rather complex test harness on top of a rather complex build
system working on platforms that can't be reproduced easily at a
different site.

o communication problems (e.g. Volodya had posted a message about
problems in how regression.py passes toolset names to Boost.Build;
apparently, Dave didn't know about that a couple of weeks later)

Most of the points on this list can't be addressed easily. I think we
need to take some time to think about them. We should do that before we
start the 1.35 release preparations.

Regards,
m



Boost-testing list run by mbergal at meta-comm.com