Boost Testing :

From: David Abrahams (dave_at_[hidden])
Date: 2006-09-22 10:04:08

David Abrahams <dave_at_[hidden]> writes:

> Maybe we need a way to tell the system "stop
> reporting any failures from _existing_ regression runs on this
> lib/test/whatever," so we only get reports from new regression runs.
> We'd need to make sure all those settings were cleared before release,
> of course.

Any opinions on this idea? I'm thinking of something like markup that
can be placed in explicit-failures-markup.xml that expresses the
date/time when a developer asserts a particular test/toolset is fixed.
Then perhaps only the unresolved-issues report would exclude any
failure reports from before that moment. Once that report clears, we
can always look at the summary report to find any purported fixes that
haven't been verified by an actual test run.
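A minimal sketch of what such markup could look like (the `claimed-fixed` element, its `date` attribute, and the library/test/toolset names are hypothetical illustrations, not part of the current explicit-failures-markup.xml schema):

```xml
<!-- Hypothetical markup: a developer asserts that a particular
     test/toolset combination is fixed as of the given moment.
     Reports would then suppress failure results recorded before it. -->
<library name="some_lib">
  <claimed-fixed date="2006-09-22 10:00:00 UTC">
    <test name="some_test"/>
    <toolset name="some-toolset"/>
  </claimed-fixed>
</library>
```

The timestamp makes the assertion self-expiring in practice: any regression run after that moment that still fails would reappear in the unresolved-issues report, so stale claims can't hide real breakage.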

Dave Abrahams
Boost Consulting

Boost-testing list run by mbergal at