Boost :
From: Vladimir Prus (ghost_at_[hidden])
Date: 2003-04-22 05:02:23
Gennadiy Rozental wrote:
>> What are the advantages of the second method, assuming all my tests
>> will be free-standing functions? Is there something that I lose by not
>> switching to that method? I'm really trying to understand if I should
>> invest any time into this.
>
> From the top of my head:
>
> 1. You will get separate statistics for every test case (number of
> passed/failed assertions).
> 2. In the example above, if test_a throws an exception, test_b will never
> be executed. With test cases, each one has a separate execution monitor,
> so test_b will run even if a previous test case failed with an exception
> (see the sketch after this list).
> 3. In the next release you will be able to run specific test cases by name.
> 4. Test cases would IMO reflect your intention much more clearly.
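For concreteness, here is a minimal sketch of the two methods under
discussion, assuming the test_suite registration API Boost.Test provided
at the time; the bodies of test_a and test_b are hypothetical:

    #include <boost/test/unit_test.hpp>
    using boost::unit_test_framework::test_suite;

    // Free-standing test functions (hypothetical bodies).
    void test_a() { BOOST_CHECK( 1 + 1 == 2 ); }
    void test_b() { BOOST_CHECK( 2 * 2 == 4 ); }

    // First method: call the functions in sequence from one place;
    // an exception escaping test_a stops test_b from ever running.
    void run_all() { test_a(); test_b(); }

    // Second method: register each function as a separate test case,
    // so each runs under its own execution monitor and contributes
    // its own pass/fail statistics.
    test_suite* init_unit_test_suite( int, char* [] )
    {
        test_suite* test = BOOST_TEST_SUITE( "example suite" );
        test->add( BOOST_TEST_CASE( &test_a ) );
        test->add( BOOST_TEST_CASE( &test_b ) );
        return test;
    }

With the second method the framework catches an exception thrown from
test_a, reports it, and still runs test_b.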
Ok, point 4 is a bit subjective. Points 1 and 3 raise a question. I have two
kinds of tests: those which are run during rebuild and are supposed to
always pass, and those which have something to do with functionality
(although they also make use of unit_test_framework).
For the latter kind, I use QMTest (http://qmtest.com), which runs each test,
shows results, allows running each test/suite by name, etc. I wonder if
there's some overlap in functionality with points 1 and 3. I recall you've
recently added XML output of test results, a facility QMTest has as well.
So, do you think there's indeed overlap, and how much of it is desirable?
What are the future directions for Boost.Test?
- Volodya