From: Jody Hagins (jody-boost-011304_at_[hidden])
Date: 2006-02-01 17:42:27
On Wed, 01 Feb 2006 15:47:46 -0500
David Abrahams <dave_at_[hidden]> wrote:
Sorry for the length, but I thought I should at least try to give you my
opinion in more than "I like it because I like it" terminology.
> > The framework allows for easy test-writing.
>
> Easier than BOOST_ASSERT?
Everything is relative. I think it is as easy to use, and provides more
flexibility. When I start a new test file, I can hit a macro key in vim
and get my Boost.Test starter code inserted immediately. From then on,
I just write tests.
There are lots of different "checks" that can be made, and a ton of
extra stuff for those complex cases.
> That's all build system stuff and has nothing to do with the library.
> I do
>
> bjam test
>
> and get the same result.
Yes. I was just saying that because I have said before that I do not
use bjam, except to build boost, and I wanted to emphasize that setting
up the make environment was more complex than using Boost.Test.
> > When I need more tests, I can simply
> > create a new .cpp file with the basic few lines of template, and
> > start inserting my tests.
>
>
> #include <boost/assert.hpp>
>
> int main()
> {
> ...
> BOOST_ASSERT( whatever );
> ...
> BOOST_ASSERT( whatever );
> }
If all your tests can be expressed with BOOST_ASSERT, then you may not
have need for Boost.Test. However, using Boost.Test is not
much more work...
#define BOOST_AUTO_TEST_MAIN
#include <boost/test/auto_unit_test.hpp>
BOOST_AUTO_UNIT_TEST(test_whatever)
{
...
BOOST_CHECK( whatever );
...
BOOST_CHECK( whatever );
}
I can then separate each related test into an individual test function,
and get reports based on the status of each test. For each related
test, I can just add another function...
BOOST_AUTO_UNIT_TEST(test_whomever)
{
...
BOOST_CHECK( whomever );
...
BOOST_CHECK( whomever );
}
I default to using the linker option. There used to be a header-only
option, but since I never used it I'm not sure whether it still exists.
If not, you will have to link against the compiled Boost.Test library.
>
> > For more complex tests, I use the unit test classes,
>
> What classes, please?
There are many useful features to automatically test collections, sets
of items, templates, etc.
The one I've used most often is test_suite
(boost/libs/test/doc/components/utf/components/test_suite/index.html),
which provides extra flexibility to run and track multiple test cases.
One interesting feature I played with was using the dynamic test suites
to select test suites based upon the results of other tests. This is
really useful for running some distributed tests when some portions may
fail for reasons external to the test itself. In these cases, the
suites can configure which tests run based on dynamic constraints.
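A rough sketch of what the manual registration looks like (the function
and suite names here are made up for illustration, and the exact
entry-point signature may differ between Boost releases):

#include <boost/test/unit_test.hpp>
using namespace boost::unit_test;

// an ordinary free function holding a group of related checks
void account_tests()
{
    BOOST_CHECK( true );
}

// the framework calls this to build the master test suite
test_suite* init_unit_test_suite( int, char* [] )
{
    test_suite* suite = BOOST_TEST_SUITE( "example suite" );
    suite->add( BOOST_TEST_CASE( &account_tests ) );
    return suite;
}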
> > but for almost everything else, the basic stuff is plenty.
>
> I think that might be my point.
Sure, but even with Boost.Test, you get lots of stuff with the "basic"
tests, such as reporting success/failure, and isolating test conditions.
Further, you have the additional features to handle more complex
conditions.
I have lots of tests that were written with BOOST_ASSERT before I
started using Boost.Test... there was a long time when I was
apprehensive about diving in, but once I did, I found it very helpful.
If I want simple, I can still get it.
> Yes. What do you get from those macros that's very useful beyond what
> BOOST_ASSERT supplies? I really want to know. Some people I'll be
> consulting with next week want to know about testing procedures for
> C++, and if there's a reason to recommend Boost.Test, I'd like to do
> that.
In Boost.Test terminology, you would replace BOOST_ASSERT with
BOOST_REQUIRE. If that's all you need, then you can do a simple
replacement. One immediate advantage is the logging and reporting
features of Boost.Test. It generates nice output, and can generate XML
of the test results as well. If you have to process test output, that
is an immediate advantage.
Beyond that, there are three "levels": WARN, CHECK, and REQUIRE. Output
and error reporting can be tailored for each level. Also, the levels
determine whether testing should continue. For some tests, you want to halt
immediately upon failure. For others, you want to do as much as
possible. It is often beneficial to know that test A failed, while
tests B and C passed.
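For example, given some value x being tested (just a sketch of the three
flavors):

BOOST_WARN( x > 0 );     // failure is reported as a warning only
BOOST_CHECK( x > 0 );    // failure is counted, but the test keeps running
BOOST_REQUIRE( x > 0 );  // failure aborts the current test case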
The "tools" come in many varieties. I use the EQUAL tools very
frequently, since when a test fails the log reports that it failed, and
it also reports the value of each item being tested. For example,
BOOST_CHECK_EQUAL(x, y);
will check that x and y are equal. If they are not, the test failure
will print (among other things) the values of x and y. This is a
don't-care when tests pass, but when they fail it is extremely helpful,
since the failing state is readily available in the failure reports.
There are also special tools to check for exceptions being thrown or not
thrown. It is trivial to write code that either expects an exception or
must not receive one. Writing tests for exceptional conditions is
usually difficult, but the tools in Boost.Test make it much easier.
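For instance, assuming a hypothetical parse() function that throws
std::invalid_argument on bad input:

// the call must throw std::invalid_argument, or the check fails
BOOST_CHECK_THROW( parse(""), std::invalid_argument );

// the call must not throw anything
BOOST_CHECK_NO_THROW( parse("42") );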
One of the things I like is the runtime test checks for macro
definitions and values.
All of those are VERY easy to use. The ones I have a hard time with are
the tools that provide checking for floating point numbers
(BOOST_CHECK_CLOSE and friends). It checks two numbers to see if they
are "close enough" to each other. Unfortunately, I couldn't get it
working how I understood, so I don't use it -- though I'd like to use
it.
I actually think that a big advantage of Boost.Test is the tools that
make specific testing much easier. Unfortunately, the one that just
about everyone gets wrong is testing the closeness of floats. I'm sure
I'm the only one with this problem, because I posted about it a while
back, and the response seemed pretty clear, but I still didn't
understand it.
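For what it's worth, my understanding is that the third argument to
BOOST_CHECK_CLOSE is a tolerance expressed as a percentage, not an
absolute epsilon, which may be where the confusion comes from:

double x = 1.0000;
double y = 1.0001;

// passes if x and y differ by no more than 0.1 percent
BOOST_CHECK_CLOSE( x, y, 0.1 );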
You ought to skim over the reference documents:
boost/libs/test/doc/components/test_tools/reference/index.html
In addition, there are some nifty tools that allow you to easily write
tests for templatized functions, classes, and metatypes. I've not seen
anything like that, and to duplicate it with BOOST_ASSERT would require
a lot of test code.
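In newer versions of Boost.Test the template test-case facility looks
roughly like this (the macro name and type list here come from the
auto-registration interface and may not match the exact version
discussed above):

#include <boost/mpl/list.hpp>

typedef boost::mpl::list<int, long, double> test_types;

// the body is instantiated and run once for each type in test_types
BOOST_AUTO_TEST_CASE_TEMPLATE( arithmetic_identity, T, test_types )
{
    BOOST_CHECK_EQUAL( T(2) + T(2), T(4) );
}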
> > Thus, after integrating Boost.Test into my process, I find writing
> > and maintaining good tests is not so difficult (I still have to
> > write proxy objects for complex integration testing... someone needs
> > to come up with a good solution for that).
>
> Proxy objects? Oh, I think someone described this to me: a proxy
> object is basically a stub that you use as a stand-in for the real
> thing? There was a guy I met at SD Boston who was all fired up with
> ideas for such a library -- I think he had written one. I encouraged
> him to post about it on the Boost list so the domain experts and
> people who really cared could respond, and he said he would,
> but... well, as in about 75% of such scenarios, he never did.
Right. They are very important, especially for integration testing. You
create a proxy of the "other" object, and then do all your interaction
testing against it. You can make the other object behave in any way the
interface allows, so it is a good way to test. You can even make the other
object seem "buggy" to test error handling and such. What I would
really like is something like the expect scripting language for
integration testing.
I like it, but it is a huge PITA to write all those proxies. However,
once you do, your actual integration testing is trivial. Substitute
the proxies with the real thing and run the same tests (this is where
some of the test_suite objects come in handy, since they allow you to
easily replace them and "borrow" tests).
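As a rough sketch of the kind of proxy I mean (the interface and names
are made up):

#include <string>

// the interface the code under test talks to
struct connection
{
    virtual ~connection() {}
    virtual bool send( const std::string & msg ) = 0;
};

// a proxy standing in for the real connection; it can be told to
// misbehave so the error-handling paths get exercised
struct fake_connection : connection
{
    bool fail_next;
    fake_connection() : fail_next( false ) {}
    virtual bool send( const std::string & ) { return !fail_next; }
};

The same BOOST_CHECK-based tests can then run against either the fake or
the real connection; only the object handed to the code under test
changes.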
> I don't see why that was hard before Boost.Test. I have absolutely no
> problem writing new tests using BOOST_ASSERT. Having the Boost.Build
> primitives to specify tests is a big help, but as I've said, that's
> independent of any language-level testing facility.
Right. However, with BOOST_ASSERT (or anything similarly simple), you
are limited in what you can do, and it is more difficult to track down
problems.
I could write tests in any manner, and trust me, I've written plenty.
However, in the end I always end up with some complex stuff. As I said,
the trivial tests could be done in any manner you like, but anything
beyond trivial is easier to write, and failures are easier to
decipher, with Boost.Test. If everything is done with Boost.Test, it is
all integrated and consistent, and the report processing is the same, no
matter the complexity of the test.
I didn't write it, and I'm probably not the best defender of its
functionality. However, I have found it to be extremely useful. If all
I had was simple tests, I probably would have never tried it. However,
my needs are more than simple, and it addresses the complex AND the
simple (though I wish it had EVEN MORE to offer, especially w.r.t. proxy
objects... did I say that already?).
In fact, since it is used for all my NEWER tests, it is the most used
Boost component (our older tests are still done with assert() or writing
output to stdout/stderr and having scripts interpret the output, and
some are still even interactive -- if we are lucky they are driven by
expect or some other script).