Subject: Re: [Boost-users] Boost.Test - Advice needed on structuring tests
From: Adam Nielsen (a.nielsen_at_[hidden])
Date: 2010-04-09 22:08:59
> As I'm new to unit testing I'm unsure how the tests should be structured and am
> keen to follow good practice.
I'm also fairly new to unit testing, but I'm in a similar situation to yours
(needing set-up/tear-down code.)
> I'm thinking of structuring the unit tests as follows:
> Unit test 1 - connect to database
> Unit test 2 - create table
> Unit test 3 - insert record into table
> Unit test 4 - select record from table
> Unit test 5 - (perform some other SQL statements)
> Unit test 6 - disconnect from database
> Some of these tests could be considered as setup and teardown code, but I
> want the unit tests to cover all code in my wrapper.
Applying your situation to the way I have done things, I have created tests 1
and 6 so that they test the connection/disconnection code thoroughly. This
means connecting with valid/invalid credentials and generally testing all
eventualities possible in the connection and disconnection code.
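For example, checks along these lines; Connection and ConnectionError here are
stand-ins for whatever your wrapper actually provides:

  #define BOOST_TEST_MODULE DbWrapperTests
  #include <boost/test/included/unit_test.hpp>
  #include <stdexcept>
  #include <string>

  // Stand-ins for the wrapper's connection class and error type.
  struct ConnectionError : std::runtime_error {
      ConnectionError() : std::runtime_error("connect failed") {}
  };
  struct Connection {
      Connection(const std::string&, const std::string& password) {
          if (password != "secret") throw ConnectionError();
      }
  };

  BOOST_AUTO_TEST_CASE(connect_test)
  {
      // Valid credentials must succeed...
      BOOST_CHECK_NO_THROW(Connection("user", "secret"));
      // ...and invalid ones must be rejected with the right error.
      BOOST_CHECK_THROW(Connection("user", "wrong"), ConnectionError);
  }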
Then with tests 2-5 I have used the Boost.Test framework's "fixtures", which
let you run the same set-up code before each test. Here I have just done a
quick connect, and assumed it will work fine, i.e. it's not considered part of
the test. After all, any problems here should have been picked up in test 1.
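In code it looks roughly like this; the fixture body is a placeholder for your
own wrapper calls:

  #define BOOST_TEST_MODULE DbWrapperTests
  #include <boost/test/included/unit_test.hpp>

  // The constructor runs before each test case and the destructor after
  // it, so every test gets a fresh connection.
  struct DbFixture
  {
      DbFixture()
      {
          // db.connect(...); assumed to work, since any problems here
          // should already have been caught by test 1
      }
      ~DbFixture()
      {
          // db.disconnect();
      }
  };

  // Every test case in this suite uses DbFixture automatically.
  BOOST_FIXTURE_TEST_SUITE(table_tests, DbFixture)

  BOOST_AUTO_TEST_CASE(create_table)
  {
      // exercise the CREATE TABLE path of the wrapper here
  }

  BOOST_AUTO_TEST_SUITE_END()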
> What I'm unsure about is whether it's considered good practice for one unit
> test to depend upon a previous unit test creating something in order to
> allow the test to be performed.
This is the only sticking point I have come across. Should test 1 fail, and
you can't connect to your database, the rest of the tests will run anyway,
giving you six failures. You then have to examine all six before realising
the problem was identified by the first test.
What I would really like is some way of saying that should test 1 fail, none
of the other tests should run, because I know they'll all just fail too and
clutter the output, making it harder to track down the problem.
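If your copy of Boost.Test provides the depends_on decorator (newer releases
do), it does exactly this. A minimal sketch, assuming a version that has it:

  #define BOOST_TEST_MODULE DbWrapperTests
  #include <boost/test/included/unit_test.hpp>

  namespace utf = boost::unit_test;

  BOOST_AUTO_TEST_CASE(connect_test)
  {
      // thorough connection/disconnection checks go here
  }

  // Skipped automatically (rather than failed) if connect_test fails.
  BOOST_AUTO_TEST_CASE(create_table_test,
                       * utf::depends_on("connect_test"))
  {
      // ...
  }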
> The other issue I've got is whilst the function may not have thrown an
> exception it doesn't necessarily mean the function did what it's supposed to
> do. For example unit test 2 might not throw an error, but the only way I'll
> know if the function worked correctly is to query the database metadata or
> do unit test 3. Should unit test 2 only check that an error was not thrown
> or should it also call another function to query the metadata and then check
> the result of that function before marking unit test 2 as a success?
You will need to check the result either way to make sure the code is working
correctly. If you only check for errors then you're checking that the code
fails correctly, but not that it actually works. If you rely on other tests
then you may get a failure in test 3 (could not insert record) when the
problem was actually caused by test 2 (failing to create the table.) This
will certainly give you headaches trying to figure out where the problem is!
In my case I am checking that a file has been edited correctly, so I have
added a "check" function to my fixture (so that it can be called by all of my
tests.) At the end of each test I call check() inside BOOST_CHECK_MESSAGE(),
passing a string containing the correct file contents; check() returns false
if the contents don't match, causing the test to fail.
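In outline it looks something like this (the names and file contents are
placeholders):

  #define BOOST_TEST_MODULE FileEditTests
  #include <boost/test/included/unit_test.hpp>
  #include <fstream>
  #include <sstream>
  #include <string>

  struct FileFixture
  {
      std::string path;

      FileFixture() : path("output.txt") {}

      // True if the file under test now holds exactly 'expected'.
      bool check(const std::string& expected) const
      {
          std::ifstream f(path.c_str());
          std::ostringstream actual;
          actual << f.rdbuf();
          return actual.str() == expected;
      }
  };

  BOOST_FIXTURE_TEST_CASE(edit_test, FileFixture)
  {
      // ... run the code under test, which edits 'path' ...
      BOOST_CHECK_MESSAGE(check("expected file contents\n"),
          "file contents after edit_test are wrong");
  }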
I am not sure how you could do this with a database, other than perhaps
retrieving a whole table into an array and comparing it against a hard-coded
array. Of course this would only work for small tables. Maybe you could
count rows or something, but in my case I wanted a single check that I could
reuse for many different tests. If you have to write your own 'check'
function for every single test there's a much greater risk of making a mistake
and allowing the test to incorrectly succeed.
Perhaps in your case a select statement that returns a single value might
work? Then you could do something like this at the end of each test:
  BOOST_CHECK_EQUAL(run_query("SELECT COUNT(*) AS result FROM ..."), 10);
  BOOST_CHECK_EQUAL(run_query("SELECT name AS result ... WHERE id = ..."), "Smith");
This would at least allow you to reuse the same code to perform a bunch of
checks at the end of each test, minimising the chance of mistakes.
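run_query() itself would just be a thin helper shared by all the tests;
something along these lines, returning everything as text for simplicity (so
the count comparison above would compare against "10"). The wrapper calls in
the comments are made up, substitute your own API:

  #include <string>

  // Hypothetical helper: run a query that yields a single row with a
  // single column aliased 'result', and return that value as text.
  std::string run_query(const std::string& sql)
  {
      // ResultSet rs = db.execute(sql);   // your wrapper's API here
      // return rs.get_string("result");
      return std::string();  // placeholder
  }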
As to the other part of your question: just because the query code is checked
in another test doesn't mean you can't use queries here. Just like
the connection code above, run_query() in my example will be short and won't
fully test the query mechanism, but you will have a proper test for that.
Anyway, hopefully this gives you some ideas!
Cheers,
Adam.