
Subject: Re: [boost] What would make tool authors happier..
From: Robert Ramey (ramey_at_[hidden])
Date: 2015-06-03 18:46:30


On 6/3/15 2:21 PM, Rene Rivera wrote:

> No matter how bright anyone is an automated approach, as above, can't
> account for human nature. In particular your approach..
>
> First misses the following test directories (ones that are currently listed
> for testing): libs/concept_check,

Doesn't have a test directory

> libs/container/example,

Hmmm - I wouldn't expect that to get tested. I have that directory in
the serialization library and it never gets tested even though it
includes a Jamfile.v2
> libs/core/test/swap,

Hmm - core includes a Jamfile.v2; I'm not sure why that wouldn't get tested.

> libs/disjoint_sets, libs/dynamic_bitset,
> libs/hash/test/extra, libs/interprocess/example, libs/move/example,
> libs/regex/example, libs/static_assert, libs/unordered/test/unordered,
> libs/unordered/test/exception, libs/wave/test/build.
>
> Second it adds the following not to be tested (maybe.. as we don't really
> know the intent of the authors) directories: libs/chrono/stopwatches/test,
> libs/compute/test (this looks like a true missing tested lib but I can't be
> sure without asking the compute author), libs/config/test/link/test,
> libs/filesystem/example/test, libs/functional/hash/test (I'm shacking my
> fist towards the functional authors!), libs/gil/io/test,
> libs/gil/numeric/test, libs/gil/toolbox/test.
>
> And oh how painful it was to visually have to compare two large lists of
> test dirs to discover manually that information!

Well, maybe I'm just buying your argument that library authors must
adhere to some reasonable conventions if they want to get their stuff
tested. I think that's your original point. You pointed out as an
example the multi-level directory in numeric. I guess that misled me.

Sooooo - I'm going to support your contention that it's totally
reasonable, and indeed necessary, that library authors adhere to some
reasonable conventions if they want their libraries to build and test in
boost. All we have to do is agree on these conventions. Here's my
starting point:

Libraries should have the following directories, each with a Jamfile in it:

build
test
doc

Libraries can be nested if they adhere to the above convention.

... add your own list here
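The convention above is mechanically checkable. Here's a minimal sketch of such a check, assuming the three required subdirectories and the usual Jamfile spellings (the accepted Jamfile names are my assumption, not an agreed list):

```python
# Sketch of a conformance check for the proposed layout convention:
# each library should contain build/, test/, and doc/ directories,
# each holding a Jamfile. The accepted Jamfile spellings below are
# assumptions, not a settled convention.
import os

REQUIRED_DIRS = ("build", "test", "doc")
JAMFILE_NAMES = ("Jamfile", "Jamfile.v2", "Jamfile.jam")

def check_library(lib_root):
    """Return a list of convention violations for one library tree."""
    problems = []
    for sub in REQUIRED_DIRS:
        path = os.path.join(lib_root, sub)
        if not os.path.isdir(path):
            problems.append("missing directory: %s" % sub)
        elif not any(os.path.isfile(os.path.join(path, name))
                     for name in JAMFILE_NAMES):
            problems.append("no Jamfile in: %s" % sub)
    return problems
```

A tool like this, run over libs/, would have produced Rene's two lists automatically instead of by eyeball.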

So we can agree on that. (I think).

Now I've got another issue. Why can't we just run the local testing
setup and upload the results somewhere? Right now I'd like users to be
able to run b2 in the directories which interest them and upload
summary test results to a server which would display them. This would
mean that testing would be much more widespread.

The current system requires that one download a python script which ...
does what? It looks like it downloads a large number of other python
scripts which then do a whole bunch of other stuff. My view is that
this is, and always has been, misguided:

a) It's too complex.
b) It's too hard to understand and maintain.
c) It isn't usable by the casual user.
d) It requires a large amount of resources from the tester,
e) which the tester can't estimate in advance,
f) and it does a lot of unknown stuff.

Wouldn't it be much easier to do something like the following:

a) pick a library
b) run b2
c) run process_jam_log
d) run X which walks the tree and produces a pair of files - like
library_status does.
e) ftp the files to "someplace"
f) the reporting system would consolidate the results and display them.
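Steps b) through e) could be glued together with very little code. The sketch below is speculative: the result-file extensions, the summary format, and the server name are all hypothetical placeholders, not the actual formats produced by process_jam_log or library_status.

```python
# Rough sketch of the proposed per-library flow: run b2, walk the
# build tree for result markers, produce a summary, upload it.
# The ".test"/".fail" extensions, the summary dict, and the host
# name are assumptions for illustration only.
import os
import subprocess
from ftplib import FTP

def run_b2(lib_dir):
    # Step b: build and run the library's tests in place.
    subprocess.run(["b2"], cwd=lib_dir, check=False)

def summarize(bin_dir):
    # Step d: walk the tree and count pass/fail markers, assuming
    # (hypothetically) one *.test file per pass, *.fail per failure.
    passed, failed = 0, 0
    for dirpath, _, files in os.walk(bin_dir):
        for name in files:
            if name.endswith(".test"):
                passed += 1
            elif name.endswith(".fail"):
                failed += 1
    return {"passed": passed, "failed": failed}

def upload(summary_path, host="results.example.org"):
    # Step e: push the summary file to the collecting server
    # (host is a placeholder for "someplace").
    with FTP(host) as ftp:
        ftp.login()
        with open(summary_path, "rb") as f:
            ftp.storbinary("STOR " + os.path.basename(summary_path), f)
```

The point is that each step is small, visible, and independently replaceable, unlike a monolithic script that fetches more scripts.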

This would be much more flexible and easier to use, and it would be
much easier to maintain as well.

Of course this is somewhat speculative, as it's not clear to me how the
python scripts work and it's not clear how to find out without actually
running the tests myself.

I've been very unhappy with this whole system for many years.

Robert Ramey


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk