Subject: Re: [boost] What Should we do About Boost.Test?
From: Mathieu Champlon (m.champlon_at_[hidden])
Date: 2012-09-29 17:32:04

On 29/09/2012 20:31, Gennadiy Rozental wrote:
> Sohail Somani <sohail <at>> writes:
>> Anyway, I haven't looked back yet and (sorry) I'm not sure I will.
>> Google Mock itself is unbelievably useful.
> Frankly, I can't see what the fuss is all about. An approach taken by
> Boost.Test is marginally better in my opinion. Mocks are deterministic
> and test case should not need to spell out expectations. Writing mocks
> is just as easy. You can see an example here:
> .../libs/test/example/logged_exp_example.cpp
> There is a potential for some improvement, but it is already better
> than anything else I know (IMO obviously).
Hi Gennadiy,

I'm a bit puzzled by the kitchen_robot example (and not just because it
grills chicken without any chicken! :p).

MockMicrowave::get_max_power looks hard-coded to return a value of 1000,
so to me this looks like a stub rather than a mock object, or am I
missing something?
How would you test that the robot calls set_power_level properly with
respect to different max power values? What if the max power could
change at any time and you would like to 'program' the mock object to
test the robot (first call returns 1000, second call returns 2000,
etc.)? And what about a malfunctioning oven which would throw
exceptions? Would you write a new MockMicrowave implementation for
every test case?
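To make the question concrete, here is a minimal hand-rolled sketch of what I mean by 'programming' the mock object (the interface name and helpers are hypothetical, modeled loosely on the example, not actual Boost.Test or kitchen_robot code): each scripted step is consumed by one call, so the first call can return 1000, the second 2000, and the third simulate a malfunction.

```cpp
#include <queue>
#include <stdexcept>

// Hypothetical interface, loosely modeled on the kitchen_robot example.
struct IMicrowave
{
    virtual ~IMicrowave() {}
    virtual int get_max_power() = 0;
};

// A 'programmable' mock: each call to get_max_power pops the next
// scripted step, so one mock class covers many test scenarios.
class ProgrammableMicrowave : public IMicrowave
{
public:
    void will_return( int value ) { script_.push( Step{ false, value } ); }
    void will_throw()             { script_.push( Step{ true, 0 } ); }

    virtual int get_max_power()
    {
        if( script_.empty() )
            throw std::logic_error( "unexpected call to get_max_power" );
        const Step step = script_.front();
        script_.pop();
        if( step.throws )
            throw std::runtime_error( "oven malfunction" );
        return step.value;
    }

private:
    struct Step { bool throws; int value; };
    std::queue< Step > script_;
};
```

A test would then script the mock up front (will_return( 1000 ); will_return( 2000 ); will_throw();) and exercise the robot, instead of writing a new MockMicrowave subclass per scenario.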

Then there seems to be some kind of trace logging involved which, if I
understand correctly, can be seen as describing the expectations.
 From my understanding this logs on the first run, then reloads the
expectations and uses them as a baseline to validate new runs.
I see a number of problems with this (besides the serialization
requirement on arguments to be checked), the major one being that it
does not allow TDD.
Also, another common usage of mock objects in tests is to document how
the object under test reacts to the outside world. Moving the
expectations outside the test makes this difficult (although not
impossible; I suppose the test cases and expectation log files could be
post-processed to produce sequence diagrams or something similar).
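For reference, the record-then-verify scheme as I understand it could be sketched like this (purely illustrative names, not the actual Boost.Test machinery): interactions are appended to a log on the first run, and later runs are compared against the stored log verbatim.

```cpp
#include <sstream>
#include <string>

// Records every interaction with a collaborator, one call per line.
class interaction_log
{
public:
    void record( const std::string& call ) { log_ << call << '\n'; }
    std::string str() const { return log_.str(); }
private:
    std::ostringstream log_;
};

// Strict comparison of a recorded baseline against a new run: any
// change in call order or arguments fails, which is exactly what
// makes this scheme brittle under refactoring.
inline bool matches( const interaction_log& recorded,
                     const interaction_log& replayed )
{
    return recorded.str() == replayed.str();
}
```

Note the chicken-and-egg problem for TDD: there is no recorded baseline until an implementation already exists to produce one.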

Actually, we (my team and company) have attempted this approach in the
past. It works nicely on small use cases, but quickly tends to get in
the way of refactoring. When each code change fails a dozen test cases
which then have to be checked manually, only to discover that a small
variation of the algorithm still produced a perfectly valid expected
output but with a slightly different resolution path, it becomes very
counter-productive.
Therefore we started to add ways to relax the expectations in order to
minimize false positive test failures: the number of times and the
order in which expectations occur, whether some arguments are to be
verified or not, etc.
In the end, spelling out the expectations started to look more like a
solution and less like a problem.
Do you have any experience on this matter? Did you manage to overcome
this issue?
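To illustrate the kind of relaxation I mean, here is a minimal sketch (a hypothetical helper, not an existing library API): instead of demanding an exact call sequence, it only requires that each expected call occurred, tolerating reordering and extra calls.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Relaxed expectation checking: every expected call must have
// happened, but order, repetition and unexpected extra calls are
// all tolerated, so harmless refactorings do not fail the test.
class relaxed_expectations
{
public:
    void expect( const std::string& call ) { expected_.push_back( call ); }
    void notify( const std::string& call ) { actual_.push_back( call ); }

    bool verify() const
    {
        for( std::vector< std::string >::const_iterator it = expected_.begin();
             it != expected_.end(); ++it )
            if( std::find( actual_.begin(), actual_.end(), *it ) == actual_.end() )
                return false; // an expected call never happened
        return true;
    }

private:
    std::vector< std::string > expected_, actual_;
};
```

The test then reads as documentation of what must happen, while staying silent about the incidental details of how the implementation gets there.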


Boost list run by bdawes at, gregod at, cpdaniel at, john at