
Boost Testing:

From: David Abrahams (dave_at_[hidden])
Date: 2007-12-10 14:46:00

Neither the current Bitten output nor Boost's current standard
regression testing tables provide an optimal UI for most Boosters' needs.

Rather than jumping right into thinking about UI details, I thought
I'd open a discussion of what we need to be able to quickly learn by
visiting one or more testing webpages. I've started a list of
high-level questions (at the level of libraries, not individual test
cases) below; hopefully with your participation we can come up with a
fairly complete list, which I can post on a wiki page for reference
while we do actual page design.

I may be jumping the gun slightly, but as I write this down, it seems
to boil down to only a few distinct questions for a given revision or
revision range:

* Which unexpected failures were introduced and/or removed?
* Which library/platform combinations have unexpected failures?
* Which library/platform combinations have not yet been tested?
* Which of those combinations had unexpected failures when last tested?
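As a sketch of what answering those questions might involve, here is a
minimal Python model of a test-result store. Everything below is
hypothetical (the `Result` record, library and platform names, revision
numbers); it is not the actual regression system's schema, just an
illustration that the first two questions reduce to set operations over
(library, platform) pairs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Result:
    """One library/platform test outcome at a given repository revision."""
    library: str
    platform: str
    revision: int
    unexpected_failures: int  # count of tests failing unexpectedly

def unexpected_combos(results):
    """Library/platform combinations showing unexpected failures."""
    return {(r.library, r.platform) for r in results if r.unexpected_failures > 0}

def failure_delta(old, new):
    """Unexpected failures introduced and removed between two result sets."""
    before, after = unexpected_combos(old), unexpected_combos(new)
    return after - before, before - after  # (introduced, removed)

# Hypothetical results at two revisions:
r1 = [Result("Graph", "gcc-4.2.1", 41000, 0),
      Result("Python", "msvc-8.0", 41000, 2)]
r2 = [Result("Graph", "gcc-4.2.1", 41010, 1),
      Result("Python", "msvc-8.0", 41010, 0)]
introduced, removed = failure_delta(r1, r2)
```

The "not yet tested" question is the complement of this: the cross
product of libraries and platforms minus the combinations present in
the result set.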

We would want to be able to filter these results to:

* a set of libraries
* a set of platforms, including an easy way to say "release platforms"
* a repository revision range, including an easy way to say "HEAD"
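To make the filtering idea concrete, a sketch of how those three
filters might compose (again hypothetical: the release-platform set,
the dict layout, and the revision numbers are all made up; `None`
stands in for "no filter"):

```python
# Hypothetical stand-in for "release platforms":
RELEASE_PLATFORMS = {"gcc-4.2.1", "msvc-8.0"}

def matches(result, libraries=None, platforms=None, rev_range=None):
    """True if a result passes every active filter; None means 'any'."""
    if libraries is not None and result["library"] not in libraries:
        return False
    if platforms is not None and result["platform"] not in platforms:
        return False
    if rev_range is not None:
        lo, hi = rev_range
        if not (lo <= result["revision"] <= hi):
            return False
    return True

# Hypothetical result record:
result = {"library": "Python", "platform": "msvc-8.0", "revision": 41005}
ok = matches(result, platforms=RELEASE_PLATFORMS, rev_range=(41000, 41010))
```

"HEAD" would just be the degenerate range whose endpoints are the
latest tested revision.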

For example, a natural "overall health of Boost" display might be
filtered to:

* all libraries
* all release platforms
* the latest repository revision

You could find out about the health of your own libraries by filtering
to that set.

Have I missed anything?

Dave Abrahams
Boost Consulting

Boost-testing list run by mbergal at