Subject: Re: [boost] How do you test complex transformations?
From: Vicente Botet (vicente.botet_at_[hidden])
Date: 2011-03-18 19:44:56


Simonson, Lucanus J wrote:
>
> Vicente Botet wrote:
>> Mathias Gaunard-2 wrote:
>>>
>>> On 18/03/2011 18:23, Vicente Botet wrote:
>>>
>>>> One technique could be to perform the transformation by hand and
>>>> keep track of the input -> output association.
>>>> Another technique consists of re-implementing the algorithm and
>>>> checking that both implementations match.
>>>
>>> Checking that both implementations match is not really possible if
>>> the range of all inputs is big.
>>>
>>> What we do to validate the precision of our math functions in the
>>> NT2 project is to perform random tests and compare the results to
>>> those obtained with a library that we know to be correct.
>>>
>>
>> Hi,
>>
>> Yes, I'm in the same situation; the samples are often generated
>> randomly.
>>
>> If you are developing new algorithms, this is equivalent to
>> implementing the algorithm (possibly with fewer constraints) and
>> comparing the results. Of course, both implementations can be wrong.
>>
>> If you didn't have a library that gave you the correct answer, would
>> you consider re-implementing the transformation good practice?
>
> I do. Typically I implement a brute-force version first, which can be
> very slow, then follow that up with the real implementation. I prefer
> to test against a known correct implementation if I can get one, but
> if I can't, I implement one myself.
>

This seems reasonable.
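
As a minimal sketch of that workflow (the popcount example and all
names here are illustrative, not from the thread): a slow but obviously
correct brute-force version serves as the reference, and the optimized
version is checked against it on random inputs. std::rand is crude but
enough for a smoke test of this kind.

    #include <cassert>
    #include <cstdlib>

    // Slow but obviously correct reference: count set bits one at a time.
    int popcount_reference(unsigned x) {
        int n = 0;
        for (; x; x >>= 1)
            n += x & 1u;
        return n;
    }

    // The "real" implementation under test (Kernighan's trick:
    // x & (x - 1) clears the lowest set bit).
    int popcount_fast(unsigned x) {
        int n = 0;
        for (; x; x &= x - 1)
            ++n;
        return n;
    }

    int main() {
        std::srand(42);
        for (int i = 0; i < 1000000; ++i) {
            unsigned x = (unsigned(std::rand()) << 16) ^ unsigned(std::rand());
            assert(popcount_reference(x) == popcount_fast(x));
        }
        return 0;
    }

Any disagreement found this way immediately gives a concrete failing
input to debug against the reference.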

> Also, when it comes to testing, I will often test post-conditions of
> the algorithm separately from each other. Even if I can't
> programmatically detect a 100% correct result, I can detect trivially
> wrong results with this kind of test. For example, if the result of
> some calculation has to be positive, running it on many randomly
> generated inputs and checking that the result is always positive is a
> quick and easy way to isolate silly problems. The simplest example of
> this is crash testing on random data, where the only thing you are
> looking for is that the code doesn't crash or hang.
>

You are right. Most of the time there is something that can be tested
even if we cannot test everything, and that helps.
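
A sketch of such a post-condition test, with a hypothetical
squared_distance function as the subject (my example, not from the
thread): the exact expected value is unknown, but non-negativity,
symmetry, and the zero-distance case are all cheap to check on random
inputs.

    #include <cassert>
    #include <cstdlib>

    // Hypothetical function under test: squared Euclidean distance.
    double squared_distance(double x1, double y1, double x2, double y2) {
        double dx = x2 - x1, dy = y2 - y1;
        return dx * dx + dy * dy;
    }

    double random_coord() {  // roughly uniform in [-100, 100]
        return 200.0 * std::rand() / RAND_MAX - 100.0;
    }

    int main() {
        std::srand(1);
        for (int i = 0; i < 100000; ++i) {
            double x1 = random_coord(), y1 = random_coord();
            double x2 = random_coord(), y2 = random_coord();
            double d = squared_distance(x1, y1, x2, y2);
            assert(d >= 0.0);                                // never negative
            assert(d == squared_distance(x2, y2, x1, y1));   // symmetry
            assert(squared_distance(x1, y1, x1, y1) == 0.0); // zero for equal points
        }
        return 0;
    }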

>
> If both implementations of the algorithm match for a large set of
> randomly generated test cases, satisfy all checkable post-conditions,
> and are correct for several well-chosen unit test inputs, then you can
> have pretty high confidence. Of course, if your understanding of the
> problem is wrong then both implementations are going to be wrong, but
> unit testing can't catch requirement bugs.
>

Yes, ideally it is someone else who writes the unconstrained
implementation.

>
> I often try several alternative implementations and compare which
> turns out to be faster. Is a branchy algorithm faster or slower than a
> table-lookup algorithm? Is bit twiddling faster than a switch
> statement?
>

This seems reasonable for critical parts.
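
One possible shape for such an experiment, using a hypothetical
hex-digit classifier (branches vs. a precomputed 256-entry table);
std::clock gives rough CPU timings without any extra dependencies.

    #include <cstdio>
    #include <ctime>

    // Branchy alternative: test the byte against three character ranges.
    bool is_hex_branchy(unsigned char c) {
        return (c >= '0' && c <= '9')
            || (c >= 'a' && c <= 'f')
            || (c >= 'A' && c <= 'F');
    }

    bool hex_table[256];  // table alternative, filled from the branchy version

    int main() {
        for (int c = 0; c < 256; ++c)
            hex_table[c] = is_hex_branchy((unsigned char)c);

        volatile unsigned long sink = 0;  // keeps the loops from being optimized away
        const unsigned long N = 100000000UL;

        std::clock_t t0 = std::clock();
        for (unsigned long i = 0; i < N; ++i)
            sink += is_hex_branchy((unsigned char)i);
        std::clock_t t1 = std::clock();
        for (unsigned long i = 0; i < N; ++i)
            sink += hex_table[(unsigned char)i];
        std::clock_t t2 = std::clock();

        std::printf("branchy: %.2fs  table: %.2fs\n",
                    double(t1 - t0) / CLOCKS_PER_SEC,
                    double(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }

Numbers from a toy loop like this are only suggestive: the relative
cost of branches and memory loads depends on the compiler, the CPU, and
the surrounding code.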

> I also try to break it down into separately testable sub-pieces. For
> example, instead of one very large table of size n*m mapping input to
> output, I might break it down into two smaller tables of size n and m
> and then combine their results. If n*m is too large to populate the
> table manually, but n+m is small enough, I might implement a complex
> transformation as a series of table lookups. Then you need to prove
> that your idea for how to break it down is correct, but this can be
> done with paper and pencil; then you just need to validate each entry
> in the small tables manually, then validate several of their
> combinations manually, and your pencil-and-paper proof should provide
> enough confidence that all combinations will come together correctly.
> I always try to "prove" that the idea I'm implementing is correct up
> front, which can be thought of as testing you do in your head before
> you have code. Also, if you can tell up front what will be hard to
> test, you can steer your design decisions toward implementations that
> are easier to test.
>
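
A minimal sketch of this decomposition, with 16-bit parity as an
illustrative example (again my choice, not from the thread): a single
65536-entry table (n*m = 256*256) is replaced by two 256-entry byte
tables (n+m = 512) combined with XOR. The pencil-and-paper proof is
that parity distributes over any partition of the bits, so
parity(x) == parity(high byte) ^ parity(low byte).

    #include <cassert>

    unsigned char parity_lo[256];  // parity of the low byte
    unsigned char parity_hi[256];  // parity of the high byte

    // Obviously correct reference, also used to fill the small tables.
    unsigned char parity_bits(unsigned x) {
        unsigned char p = 0;
        for (; x; x >>= 1)
            p ^= x & 1u;
        return p;
    }

    // The decomposed transformation: two small lookups combined with XOR.
    unsigned char parity16(unsigned x) {  // x assumed < 65536
        return parity_lo[x & 0xFFu] ^ parity_hi[(x >> 8) & 0xFFu];
    }

    int main() {
        for (unsigned b = 0; b < 256; ++b)
            parity_lo[b] = parity_hi[b] = parity_bits(b);

        // The combined domain is small enough to check exhaustively here;
        // when it is not, only the small tables and a sample of their
        // combinations need manual validation, as described above.
        for (unsigned x = 0; x < 65536; ++x)
            assert(parity16(x) == parity_bits(x));
        return 0;
    }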

Thanks for sharing your experience. This helps a lot,
Vicente

--
View this message in context: http://boost.2283326.n4.nabble.com/How-do-you-test-complex-transformations-tp3387869p3388734.html
Sent from the Boost - Dev mailing list archive at Nabble.com.
