Boost :
From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-01-21 12:54:49
----- Original Message -----
From: "Peter Dimov" <pdimov_at_[hidden]>
> From: "David Abrahams" <david.abrahams_at_[hidden]>
> > This really seems like a specious argument. The same problem holds for
> > any generic component, doesn't it?
>
> No, not exactly.
>
> A generic component, such as std::vector<T, A>, only needs to be tested
> with suitable T and A that verify it's working properly w.r.t. the
> requirements it imposes on T and A, because std::vector's behavior is,
> for the most part, independent of T and A.
It depends on exactly the operations that are in the concept requirements
for T and A. I don't think there's anything mysterious about policies in
this regard: if you're rigorous about writing the concept requirements, it's
pretty easy to say what should happen. You don't need to test a policy
inside the smart pointer framework - isn't "unit testing" about testing
components in isolation, after all?
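For instance (a minimal sketch with a made-up reference-counting policy and
made-up concept operations clone/release/swap; none of these names come from
any actual proposal), a policy can be exercised directly against its concept
requirements, with no smart pointer anywhere in sight:

#include <cassert>

// Hypothetical ownership policy under test: shared reference counting.
template <class P>
class ref_counted
{
    unsigned* count_;
public:
    ref_counted() : count_(new unsigned(1)) {}
    // Copying a policy object shares the count; clone() registers a new owner.
    P clone(P p) { ++*count_; return p; }
    // release() reports true only when the last owner lets go.
    bool release(P)
    {
        if (--*count_ == 0) { delete count_; return true; }
        return false;
    }
    void swap(ref_counted& rhs)
    {
        unsigned* tmp = count_; count_ = rhs.count_; rhs.count_ = tmp;
    }
};

int main()
{
    int* raw = new int(42);
    ref_counted<int*> owner1;           // one owner, count == 1
    ref_counted<int*> owner2(owner1);   // shares the same count
    owner2.clone(raw);                  // second owner registered, count == 2
    assert(!owner1.release(raw));       // count == 1: not the last owner
    assert(owner2.release(raw));        // count == 0: last owner may delete
    delete raw;
    return 0;
}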
Furthermore, a policy-based component is often just an "interface shell", so
testing that it behaves as advertised isn't all that hard: you just plug in
some simple testing policies that report exactly what the shell used, and how.
When you start delivering preconfigured combinations built from the component
(e.g. boost::shared_ptr<T>), you need to test those as well.
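Something like this (again, every name here is invented for the sketch and is
not any proposed interface): an instrumented policy records how the shell uses
it, and the test checks that the shell forwards as advertised:

#include <cassert>

// Instrumented "testing policy": counts the calls the shell makes to it.
struct counting_ownership
{
    static int clones, releases;
    template <class P> P clone(P p) { ++clones; return p; }
    template <class P> bool release(P) { ++releases; return false; } // never deletes
    void swap(counting_ownership&) {}
};
int counting_ownership::clones = 0;
int counting_ownership::releases = 0;

// Hypothetical interface shell: all it does is forward to its ownership policy.
template <class T, class Ownership>
class smart_ptr : private Ownership
{
    T* p_;
public:
    explicit smart_ptr(T* p) : p_(p) {}
    smart_ptr(smart_ptr const& rhs)
        : Ownership(rhs), p_(Ownership::clone(rhs.p_)) {}
    ~smart_ptr() { if (Ownership::release(p_)) delete p_; }
    T& operator*() const { return *p_; }
};

int main()
{
    int object = 7;
    {
        smart_ptr<int, counting_ownership> p(&object);
        smart_ptr<int, counting_ownership> q(p);  // copy must call clone()
        assert(*q == 7);
    }                                             // each destructor must call release()
    assert(counting_ownership::clones == 1);
    assert(counting_ownership::releases == 2);
    return 0;
}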
Really, I don't think this is terribly mysterious. It's just like any other
library testing problem: you can't envision all the ways in which it might
be used ahead of time. Fortunately, for a unit test, you don't have to try
to test the library in context. All you have to do is verify the components.
I think things get much trickier with generative programming, where all of
the configurations tend to be determined by the library, but there may be
thousands of them. The Boost Graph Library's adjacency_list is more like a
generative design than a policy-based one, and I think that approach places
a much higher testing burden on the library writer. This is mostly because
generative designs tend toward a single monolithic component (or component
generator) that manages all the complexity behind its facade.
A policy-based approach breaks the design into smaller pieces which can be
verified separately.
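To make the contrast concrete (the selectors below are real adjacency_list
parameters, but the snippet is only illustrative, not a test plan): each
combination of selectors names a distinct generated structure inside the same
component, so the configurations needing coverage multiply quickly, whereas a
policy can be checked on its own, as in the sketches above.

#include <boost/graph/adjacency_list.hpp>

int main()
{
    using namespace boost;
    // Each selector combination generates a different container structure
    // inside the same component; three of the many possibilities:
    adjacency_list<vecS,  vecS,  directedS>      g1;
    adjacency_list<listS, vecS,  undirectedS>    g2;
    adjacency_list<setS,  listS, bidirectionalS> g3;
    add_vertex(g1);
    add_vertex(g2);
    add_vertex(g3);
    return 0;
}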
-Dave