

Subject: Re: [boost] Interest check: Boost.Mock
From: Gennadiy Rozental (rogeeff_at_[hidden])
Date: 2009-06-14 02:00:08


Peter Bindels wrote:
> Hi Gennadiy,
>
> 2009/6/11 Gennadiy Rozental <rogeeff_at_[hidden]>
>
>> Can you please give specific usage examples you have in mind.

[...]

> I must admit I'm a slight bit confused as to what kind of answer you're
> expecting.

Actually, I wanted to see how it is going to look in conjunction with
Boost.Test, as I mentioned in the paragraph you skipped.

>> It does this by creating an object that is "derived" from a given class at
>>> runtime, and replacing the functions with functions that redirect to
>>>
>> What if I want to mock some concepts instead of interfaces?
>
>
> I haven't spent any time on mocking concepts so far but I suspect it to be
> possible. The last time I used ConceptGCC it had a start-up time of 30
> seconds and I haven't checked since. That was over a year and a half ago, so
> I suspect it's evolved since then. I need to check with it to see how to
> make it work.

Actually, my question had nothing to do with Concepts from the next C++
standard. What I wanted to know is whether your library will be able to
mock the classes used to test a function/method template, in which case
there is no base class at all and only a specific concept (a collection
of methods and typedefs) is expected.
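
Roughly, what I have in mind is something like this (illustrative names
only, not code from either library): the template under test expects an
implicit concept, and the mock is just an ordinary class that models it,
with no base class or virtual functions anywhere.

#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Function template under test: it only requires that Storage provides a
// value_type typedef and a push_back() member - an implicit concept.
template<typename Storage>
void fill(Storage& s, typename Storage::value_type const& v, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        s.push_back(v);
}

// Hand-written mock that models the same concept and records interactions.
struct storage_mock {
    typedef std::string value_type;
    std::vector<std::string> calls;

    void push_back(std::string const& v) { calls.push_back(v); }
};

int main()
{
    storage_mock mock;
    fill(mock, std::string("x"), 3);   // no interface, no virtuals
    assert(mock.calls.size() == 3);
    return 0;
}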

>> Can you show an example of how much effort is required mock something up?
>
>
> class IFoo {
> public:
>     virtual int getLength(std::string what) = 0;
> };
>
> void test_function() {
>     MockRepository mocks;
>     IFoo *foo = mocks.InterfaceMock<IFoo>();
>     mocks.ExpectCall(foo, IFoo::getLength).With("Hello World").Return(42);
>     std::cout << foo->getLength("Hello World") << std::endl;
> }
>
> This is close to the lower limit for using a mock at all. The first line of
> the test can be put into the testing framework making the

Interesting. I gave a perfunctory look at your docs and code, and aside
from various implementation concerns (the ExpectCall define is bad, not
good at all, and I believe you don't actually support pure virtual
functions) I am under the impression that you are trying to hack into the
compiler's implementation of virtual functions (and maybe something else).
Your code is not required to work according to the standard, right?
  If that is the case it might be a tough sell (at least on my side),
though my even bigger concern (explained below) is the overall approach
to interaction testing.

>> Boost.Test does have some support for interaction based testing already.
>> Including class mock_object <boost/test/mock_object.hpp>. That said I'd be
>> happy to offload this part and support your efforts. I'd like to know
>> though:
>
>
> I've spent a bit of time searching the Boost mailing list and the Boost.Test
> docs beforehand to see if something like this was already implemented, but I
> failed to see it. I think it's completely absent in the docs. As far as I

Yes. I never got around to actually writing docs for this functionality.

> can tell, mock_object tests afterwards. I must admit that in the hour I took
> to look at it, I suspect I haven't quite figured out what it does. The three
> test cases I found that uses it (one which logs output and two that test
> exception safety) didn't seem to adequately explain the complexity of the
> code.

Not sure what part you find complex. mock_object.hpp is for the most part
just the definition of a simple class that mocks the most generic
functions existing in C++ (constructors, assignments, various operators, etc.).
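
In a stripped-down form the idea is just this (a sketch of the approach,
not the actual contents of boost/test/mock_object.hpp):

#include <iostream>

// One class that provides the "generic" operations a C++ type can have
// and reports each interaction as it happens.
struct generic_mock {
    generic_mock()                                { report("default constructor"); }
    generic_mock(generic_mock const&)             { report("copy constructor"); }
    generic_mock& operator=(generic_mock const&)  { report("assignment"); return *this; }
    ~generic_mock()                               { report("destructor"); }

    bool operator==(generic_mock const&) const    { report("operator=="); return true; }
    bool operator<(generic_mock const&) const     { report("operator<");  return false; }
    generic_mock& operator*()                     { report("dereference"); return *this; }

private:
    static void report(char const* what) { std::cout << "mock: " << what << '\n'; }
};

int main()
{
    generic_mock a;     // default constructor
    generic_mock b(a);  // copy constructor
    b = a;              // assignment
    (void)(a == b);     // comparison
    return 0;
}                       // destructors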

>> 1. How does your solution compares with what I have in this header?
>
>
> I think it is somewhat comparable, although less desirable for a developer
> of end-code. For testing within Boost it may well have an upper hand, being
> more general. Most people, however, do not develop Boost.

This functionality has nothing to do with testing Boost itself.

> Your mock_object requires inheriting from it and implementing all the
> functions using a default implementation, which increases the amount of code
> and reduces maintainability for the test code. It does work in the case of
> multiple inheritance and is easier to port to other compilers and platforms.
>
> My solution requires setting up expectations beforehand and it checks them
> with the fidelity that you choose. It requires you to put in your test what
> lower-level functions you expect it to call and what ordering relations
> between them you expect. It makes the actual creation of mock object classes
> implicit - nobody ever creates a full class that a compiler sees. This
> significantly reduces the chance of typos. It does require you to specify
> all functions that will be tested because otherwise they'll have no
> implementation.
>
> There are a few drawbacks, mainly in the category of multiple inheritance
> (currently MI doesn't work) and awkward compilers (at least the EDG-based
> GreenHills compiler has an anomaly in its virtual function tables that makes
> a single test case not work).

I looked into your docs/code and I must say I disagree with most of the
above points (from the perspective of what is an advantage and what is a
disadvantage).

Originally, when I started to work on interaction-based testing support
in Boost.Test, I looked around and found two predominant approaches:

1. Expectations are explicitly specified along with the function being
tested. This is essentially what your library does.
2. Expectations are first recorded in some way (again either with explicit
function calls or by executing the code under test a second time) and
later tested against.

From my experience the first approach is unacceptable in most use cases.
Interaction-based testing is by its nature borderline "implementation
testing". Thus it leads to expectations being changed comparatively
frequently (in comparison to interfaces and other instances of state-based
testing). Accordingly, you end up changing these expectations very often
inside your test modules. It becomes very tiresome manual work if you have
it in many test cases and, what is worse, it is frequently difficult to
see immediately what actually changed. For example, there may be a new
call somewhere in the middle and you end up with 10 reported errors of
mismatched calls.

The second approach has its downsides too - we do not want the test code
to always look like a duplicate. What I ended up doing is log-based
expectations.

In this approach your code looks like you just create some mocks and
execute the test function. The test case can be run in two "modes": log
mode and test mode. In the first mode it stores expectations in a log
file. In the second mode it tests against it. If there are differences
reported, you can generate a new log file and compare it with the old one
using a regular diff, and thus easily find what changed. In the majority
of cases you find that the changes are expected; you replace the log file
and that's it. No changes to the test code are required.
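
A bare-bones sketch of the mechanism (hypothetical names, not the actual
Boost.Test interface) would look roughly like this:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// In "log" mode every interaction is written to a file; in "test" mode
// the same interactions are compared against the previously recorded file.
class interaction_log {
public:
    interaction_log(bool record, std::string const& file)
        : record_(record), file_(file)
    {
        if (!record_) {
            std::ifstream in(file_.c_str());
            std::stringstream ss;
            ss << in.rdbuf();
            expected_ = ss.str();
        }
    }

    void call(std::string const& what) { actual_ += what + "\n"; }

    ~interaction_log()
    {
        if (record_) {
            std::ofstream out(file_.c_str());
            out << actual_;                 // store the new expectations
        } else if (actual_ != expected_) {
            std::cout << "interaction mismatch; regenerate the log and diff it\n";
        }
    }

private:
    bool record_;
    std::string file_, expected_, actual_;
};

// The mock just forwards what it sees to the log - it encodes no expectations.
struct foo_mock {
    interaction_log& log;
    explicit foo_mock(interaction_log& l) : log(l) {}
    int getLength(std::string const& s) { log.call("getLength(" + s + ")"); return 42; }
};

int main()
{
    // Run once with record=true to create foo_test.log, then keep it false.
    interaction_log log(/*record=*/false, "foo_test.log");
    foo_mock foo(log);
    foo.getLength("Hello World");           // the code under test would call this
    return 0;
}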

As for your statement that my approach requires implementing mocks and
thus decreases maintainability, I believe it's actually quite the
opposite. Instead of having to say in 50 different test cases that now we
expect this call, that call and a third call, I implement the mock *once*
and do not need to encode expectations anywhere anymore.

From what I can tell, my approach covers everything that you can do
(while being portable) and some things that you can't. For example, your
library can't be used for exception safety testing, while the Boost.Test
solution includes support for "decision points" inside mocks that enables it.
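
For illustration only (this is not the Boost.Test implementation), a
decision point boils down to letting the mock fail at a chosen call so
that every failure path can be replayed and the invariants checked after
each one:

#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// The mock throws at a chosen call number, so the test can replay the
// scenario once per decision point.
struct throwing_mock {
    int call_count;
    int throw_at;                       // which call should fail

    explicit throwing_mock(int at) : call_count(0), throw_at(at) {}

    void store(std::string const& s, std::vector<std::string>& target)
    {
        if (++call_count == throw_at)
            throw std::runtime_error("injected failure");
        target.push_back(s);
    }
};

// Code under test: must leave 'target' unchanged if any store() throws.
void copy_all(throwing_mock& m, std::vector<std::string> const& src,
              std::vector<std::string>& target)
{
    std::vector<std::string> tmp(target);
    for (std::size_t i = 0; i < src.size(); ++i)
        m.store(src[i], tmp);
    target.swap(tmp);                   // commit only on full success
}

int main()
{
    std::vector<std::string> src(3, "x");
    for (int dp = 1; dp <= 3; ++dp) {   // one run per decision point
        throwing_mock m(dp);
        std::vector<std::string> target;
        try { copy_all(m, src, target); }
        catch (std::runtime_error const&) {}
        assert(target.empty());         // strong guarantee held
    }
    return 0;
}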

Boost.Test's interaction-based testing support is not 100% production
quality for my taste yet (and is obviously lacking docs), but I still
prefer it to what you present. It might make sense to combine these
approaches in one comprehensive library. If one for whatever reason
prefers explicit expectation specification, one should be able to do so,
I guess. Also, logging might need to be made a bit more powerful.

> 4. Will your solution support interaction-based testing facilities inside
>> the Boost.Test (exception safety testing, tests for logged interaction
>> expectations)?
>
>
> Testing a log that has been recorded is mostly identical to telling the log
> beforehand and testing based on that. The minor advantage is slightly less
> setup work, which likely translates in not having to think about the
> interaction of a test. Adjusting the code to include a default
> implementation (such as the one for recording a log) is very little work, in
> the order of minutes. The main downside is that functions that do not have a
> default implementation or return value, but by default throw an exception.
> Before they throw an exception, any code can be included but it does not
> know which function it is that is being called.

In "record" mode it should not throw; it should log what it sees. Also,
the framework should report all diffs, not just the first one.

>> For Boost the main changes would be superficial in the naming and in using
>>> more default libraries instead of a new implementation. For Hippo Mocks
>>> I've
>>> decided to make no assumptions in anything over C++98 so there's a full
>>> Tuple implementation and so on.
>>>
>> Why not use boost?
>
>
> So far, to include mocking functionality, no paths have to be set up, no
> software installed, no configuration done, no prerequisites installed.
> Requiring boost would be a major step up from that.
>
> If it were to be integrated into boost, this of course falls flat. That
> would make for a lot more generic code in the implementation.

I am not sure about general community opinion, but a library completely
designed and built on non-standard implementation details of compilers is
not something I'd like to see in Boost (obviously not an issue if that's
not the case).

Regards,

Gennadiy

