Boost Users :

From: Dave Steffen (dgsteffen_at_[hidden])
Date: 2005-04-06 10:37:42


Robert Mathews writes:
>
> "Dave Steffen" <dgsteffen_at_[hidden]> wrote in message
>
> > Expected failures look like failed unit tests where the number of
> > failed assertions is _less_ than expected. I use a Perl script
> > to look for this sort of thing. Personally, I'd be inclined not
> > to call these "failures", but that's just me.
>
> But, how would you know in a generic way that the number of failed
> assertions is _less_ than expected? I'd like a facility whereby I
> could ask the .exe that question.

 Well, that's why I use a Perl script. If you turn the various output
 levels up high enough, you get (at the end of the test suite output)
 a test-by-test summary: how many assertions passed, how many failed,
 and how many were expected to fail. That's what the Perl script
 chops up.
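
 For concreteness, here's roughly what the "expected to fail"
 bookkeeping looks like on the C++ side. This is a sketch from memory,
 not code pulled out of our tree, so check it against your Boost
 version; the second argument to test_suite::add() is the number of
 assertion failures the case is expected to produce:

     #include <boost/test/unit_test.hpp>
     using namespace boost::unit_test;

     void test_known_bug()
     {
         BOOST_CHECK( 1 + 1 == 3 );   // known failure; counted as "expected"
     }

     test_suite* init_unit_test_suite( int, char* [] )
     {
         test_suite* suite = BOOST_TEST_SUITE( "example_suite" );

         // Second argument: expected number of assertion failures.
         suite->add( BOOST_TEST_CASE( &test_known_bug ), 1 );
         return suite;
     }

 Running that with the report level turned up (--report_level=detailed,
 plus a high --log_level) produces the per-test-case summary I
 mentioned above; the exact wording varies between Boost versions.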

> > I've found that Perl (or whatever) scripts to grab the output of
> > test suites and do something reasonable with it are necessary. I
> > suspect that I haven't really grokked the _intent_ behind the
> > unit test library.
>
> Funny you would say that ... I currently work on a test harness
> infrastructure of some 4000 tests written in Perl. I'm looking at
> the Boost.Test stuff from the POV of having more unit-level tests
> for the individual libraries (most of the current stuff tests how
> the system works at an application level, so regressions in
> individual libraries tend to show up as incredibly obscure issues,
> if they show up at all!). I'd like to have those tests written
> in C++/STL/Boost (I'm really, really sick of Perl). Still, the
> reality is, I'd probably wrap these Boost.Test unit test programs
> in a standard Perl wrapper so that they would fit into our current
> distributed test harness infrastructure. To do this, I need a way
> to query the test program about what tests it might run if asked,
> and a way to pass configuration to those tests.
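
 Taking the "pass configuration" half first: the init function is
 handed argc/argv, so a Perl wrapper can pass its own flags through on
 the command line. A sketch (untested; the --run-slow flag below is
 made up for illustration, and I haven't checked how it interacts with
 the framework's own options):

     #include <boost/test/unit_test.hpp>
     #include <cstring>
     using namespace boost::unit_test;

     void quick_tests() { BOOST_CHECK( true ); }
     void slow_tests()  { BOOST_CHECK( true ); }

     test_suite* init_unit_test_suite( int argc, char* argv[] )
     {
         // Look for a wrapper-supplied flag before building the suite.
         bool run_slow = false;
         for ( int i = 1; i < argc; ++i )
             if ( std::strcmp( argv[i], "--run-slow" ) == 0 )
                 run_slow = true;

         test_suite* suite = BOOST_TEST_SUITE( "configurable_suite" );
         suite->add( BOOST_TEST_CASE( &quick_tests ) );
         if ( run_slow )
             suite->add( BOOST_TEST_CASE( &slow_tests ) );

         return suite;
     }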

 On the one hand, I'm wondering if it wouldn't be possible to arrange
 for the test suite to supply the user with all this info via some API
 - say, a map of test names to test result structures, or some such -
 at which point you could arrange the output to look like whatever you
 want, all within the unit test suite.
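
 Something along those lines may already be half there: the library
 collects per-test-unit results internally, and (at least in more
 recent versions, as I remember the headers) you can get at them
 through results_collector. A sketch, untested, with member names from
 memory -- check the headers for your Boost version:

     #include <boost/test/unit_test.hpp>
     #include <boost/test/results_collector.hpp>
     #include <iostream>

     // Print the collected counts for one test unit
     // (e.g. the master test suite).
     void report_unit( boost::unit_test::test_unit const& tu )
     {
         using namespace boost::unit_test;

         test_results const& r =
             results_collector_t::instance().results( tu.p_id );

         counter_t passed   = r.p_assertions_passed;
         counter_t failed   = r.p_assertions_failed;
         counter_t expected = r.p_expected_failures;

         std::cout << passed   << " passed, "
                   << failed   << " failed, "
                   << expected << " expected to fail\n";
     }

     // Usage, e.g.:
     // report_unit( boost::unit_test::framework::master_test_suite() );

 Walking the whole test tree to build a name-to-results map would take
 a bit more plumbing, but the pieces seem to be there.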

 On the other hand, for some unit tests it's hard to avoid having
 various things output to the console. For example, I've got some
 unit tests that test our error handling code, including the bit that
 dumps messages to stderr and quits. Unless I build into our error
 handling code some way to redirect these error messages (and I don't
 really want to do this), this stuff is going to end up in the unit
 test's output, and I don't see any way around that. Thus, my current
 thinking: the output from running the unit test suite is A) saved and
 compared with 'canonical' output, and B) parsed for the information I
 want.
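
 One trick I've considered (a sketch, not our actual code) is to have
 the test itself temporarily swap std::cerr's buffer, so the noise is
 captured rather than landing in the test output. The report_error()
 below is a made-up stand-in for our real error handler; this only
 helps when the error path writes through std::cerr rather than
 fprintf(stderr, ...), and it doesn't help with the "and quits" part:

     #include <boost/test/unit_test.hpp>
     #include <iostream>
     #include <sstream>
     #include <string>

     // Hypothetical error-reporting function standing in for the real
     // one; it writes to std::cerr (the real one also exits, which
     // this sketch ignores).
     void report_error( const char* msg )
     {
         std::cerr << "ERROR: " << msg << '\n';
     }

     void test_error_reporting()
     {
         std::ostringstream captured;

         // Temporarily redirect std::cerr into a string buffer.
         std::streambuf* old_buf = std::cerr.rdbuf( captured.rdbuf() );
         report_error( "disk on fire" );
         std::cerr.rdbuf( old_buf );   // restore the original buffer

         BOOST_CHECK( captured.str().find( "disk on fire" )
                      != std::string::npos );
     }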

 I'm sure there's a way to get into the library code and take control
 of what it does at a lower level. If there isn't, Gennadiy can
 probably arrange for there to be. :-)

----------------------------------------------------------------------
Dave Steffen, Ph.D. "Irrationality is the square root of all evil"
Numerica Corporation -- Douglas Hofstadter
Software Engineer IV
                         "Oppernockity tunes but once." -- anon.

