From: Gennadiy Rozental (gennadiy.rozental_at_[hidden])
Date: 2006-01-31 16:44:16
> > > and on
>>> Windows it tends to stand in the way of debugging by "handling"
>>> crashes as exceptions rather than invoking JIT or the debugger.
>> And as we discussed, this is just a default that can easily be changed
>> for manual testing (for example, by defining an environment variable if
>> you are tired of passing a command line argument every time).
> It's just another thing to remember and manage.
No need to remember or manage anything. Just set up the environment variable once.
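For example (a sketch assuming a Unix shell; Boost.Test reads its runtime
parameters from environment variables of the form BOOST_TEST_<PARAM_NAME>,
so the crash-catching default discussed above corresponds to
BOOST_TEST_CATCH_SYSTEM_ERRORS):

```shell
# Set once, e.g. in your shell profile, instead of passing
# --catch_system_errors=no on every test run. With this in place
# Boost.Test leaves crashes to JIT/the debugger instead of
# "handling" them as exceptions.
export BOOST_TEST_CATCH_SYSTEM_ERRORS=no
```

After that, every test executable you run from this shell picks up the
setting without any per-run command line argument.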
> And then I have to manage linking with the right library
Again, you either set it up once in your project file or, even better, rely on
auto-linking.
> and read the Boost.Test documentation to figure out which calls and macros
> to use, etc
I am sorry, but you do need to read the documentation to use a library. Though
I believe you would learn the 2-3 most frequently used tools quite quickly.
> Oh, and I also have to wait for Boost.Test to build
Why? You could build the library once and reuse it, or you could use the
inlined (header-only) variant.
> before I can run my own tests,
Even if you are using an inlined version, you still need to wait for it to be
parsed and compiled. And this is true for Boost.Test as well as for any
other header-based library.
> and if Boost.Test breaks I am stuck.
And if Boost.<any other component you depend on> breaks, you are not?
Actually, Boost.Test has been quite stable for a while now.
> So there are lots of little pitfalls for me.
It feels like a negative predisposition is speaking here.
> I'm sure Boost.Test is great for some purposes, but why should I use
> it when BOOST_ASSERT does everything I need (**)?
This just means that you have very limited testing needs, from both a
construction and an organization standpoint. And even in such trivial cases
Boost.Test fares better: BOOST_ASSERT stops at the first failure (doesn't
it?), while BOOST_CHECK does not; if the expression throws an exception you
need to start a debugger to figure out what is going on, whereas with
Boost.Test it is clear from the test output in the majority of cases. And I
am not even talking about the other, much more convenient tools available.
> It seems like a lot of little hassles for no particular gain,
I think it's subjective at best.
> and I think that's true for
> 99% of all Boost regression tests.
And I think you are seriously mistaken.
> I'd actually love to be convinced
> otherwise, but I've tried to use it, and it hasn't ever been my
> experience that it gave me something I couldn't get from
> lighter-weight facilities.
Boost.Test was enhanced significantly from a usability standpoint in the last
two releases. Would you care to take another look?
> It's really important that the barrier to entry for testing be very
> low; you want to make sure there are no disincentives.
With the latest Boost.Test, all that you need to start is:

BOOST_AUTO_TEST_CASE( t )
{
    // here you go: write your checks
}

Is this a high barrier?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk