Subject: Re: [Boost-build] Using bjam testing rules, and boost.test with CI systems
From: Johan Nilsson (r.johan.nilsson_at_[hidden])
Date: 2010-07-07 03:58:27
Anthony Foglia wrote:
> Johan Nilsson wrote:
>> Anthony Foglia wrote:
>>> Sorry to wait so long to respond, but work interfered.
>>
>> I assume this is not work then :)
>
> Wrong. This is work. It's just that management hasn't cared that our CI
> system has been down for a year, so I've had to squirrel away time to set
> up a simpler one.
Well, I wrote that tongue-in-cheek.
>
>
>>> Johan Nilsson wrote:
>>>> Anthony Foglia wrote:
>>>>> I feel that somehow I should be able to tell boost build's test
>>>>> functions that in the .output files I want just the output, not
>>>>> the output+status.
>>>>
>>>> You could always add an additional Hudson build step after running
>>>> the actual build + tests, that removes the offending status lines
>>>> from the logs using e.g. sed or grep (or whatever).
>>>
>>> That was my goal, but things are not that simple. We have different
>>> libraries, each with its own tests directory. I could make
>>> aliases for them in the Jamroot, but then I have no easy way to
>>> know, in my Hudson build step, which files were generated to run sed
>>> on, other than using a find command.[*]
>>
>> That should be enough, shouldn't it? Or why not run a Ruby/Python
>> script to find all the files and postprocess them into another set of
>> (well-formed) output files, perhaps putting them in a special
>> directory structure.
>
> Yes, it should. It just feels very kludgy to me, especially when I'm
> so close except for a single line.
I'm not really saying you should leave the kludge in place. I'm saying go
for the kludge, get the CI running to provide feedback and value to you and
your team, _then_ improve upon its implementation.
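FWIW, here's roughly what I had in mind for the postprocessing step - a
minimal Python sketch, not tested against your setup. I'm assuming the
status line Boost.Build appends looks like "EXIT STATUS: 0" and that the
build output lives under "bin"; check your actual .output files and adjust:

import os
import re
import sys

# The exact status-line format is an assumption - look at your own
# .output files; Boost.Build appends something like "EXIT STATUS: 0".
STATUS_RE = re.compile(r"^EXIT STATUS: (\d+)\s*$")

def split_output(path):
    """Split one .output file into a .pure-output and an .exitcode file."""
    with open(path) as f:
        lines = f.readlines()
    status = None
    pure = []
    for line in lines:
        m = STATUS_RE.match(line)
        if m:
            status = m.group(1)
        else:
            pure.append(line)
    base = os.path.splitext(path)[0]
    with open(base + ".pure-output", "w") as f:
        f.writelines(pure)
    if status is not None:
        with open(base + ".exitcode", "w") as f:
            f.write(status + "\n")

def main(root):
    # Walk the build tree and postprocess every .output file found.
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".output"):
                split_output(os.path.join(dirpath, name))

if __name__ == "__main__":
    # "bin" as the default root is just a guess at your build directory.
    main(sys.argv[1] if len(sys.argv) > 1 else "bin")

You'd run something like this as an extra Hudson build step after bjam,
before the xUnit plugin picks up the reports.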
> Plus, Hudson does issue a
> warning if the unit test output isn't new, so unless I do dependency
> analysis, I won't see those warnings if a test is accidentally
> deleted.
IMHO that warning is a non-feature (at least in later versions of the
plugin it is possible to disable it, which I do).
The xUnit plugin does not understand that, with incremental builds, not all
unit test output files have to be updated on each run - this caused builds
to fail for me because Boost.Build did proper dependency analysis, only
updating the test targets (and thus their outputs) whose underlying code
had actually changed.
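(If you can't disable the check in your plugin version, one workaround -
not something I do myself - would be to bump the timestamps on all report
files after the build, so the plugin always sees them as fresh. A small
Python sketch; the root directory and the ".xml" suffix are just
assumptions about your layout:

import os
import sys

def touch_reports(root, suffix=".xml"):
    """Set atime/mtime to 'now' on every report file under root."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(suffix):
                # Passing None makes os.utime use the current time.
                os.utime(os.path.join(dirpath, name), None)

if __name__ == "__main__":
    touch_reports(sys.argv[1] if len(sys.argv) > 1 else "bin")

)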
> (It's simple dependency analysis, but generating files only
> if input files have been updated is exactly what a build system is
> designed for. Writing one to avoid a build system's quirks feels
> uncomfortably bad.)
Feels bad, yes. I still think you should improve in small steps (going from
no CI to CI is of course no small step in terms of value to developers).
> I feel part of the reason boost-build isn't widely used is that
> it's unfriendly and de facto designed solely for the Boost libraries.
I'd have to disagree on that (both parts). Extending it might be unfriendly,
but using the parts that are already there is anything but unfriendly.
> If it were easier to use, more general and less customized for
> Boost's quirks, then perhaps more people would pick it up.
Or perhaps if it simply wasn't called _Boost_ Build.
>
>>> Even if it did work, the rule would basically be: when the output
>>> file (e.g. test-executor.output) changes, create a new (saner) file
>>> from that (e.g. test-executor.pure-output). Isn't that what a build
>>> system is designed for? So shouldn't I be using boost-build for it?
>>>
>>> Do the boost library developers use boost-build and Boost.Test for
>>> testing? How do they tie it into a continuous integration system,
>>> or do they not?
>>
>> As for individual boost library development, I'd guess it's pretty
>> much up to everyone to do as they please. There's nothing stopping
>> them from running CI when developing their libraries (even if I
>> suspect that most of them don't).
>>
>> The official testing performed for the Boost C++ libraries (as in
>> all of them "together") is the Boost Regression Tests, which I
>> guess you know about. AFAIK this isn't run CI-style, but rather on a
>> fixed-interval basis (likely due to the huge turnaround times). IIRC
>> there have been discussions on running tests (incrementally?) on a
>> check-in basis, but I don't know what's happening on that front.
>
> The part I'm trying to understand is the automated test running and
> parsing of the output, which wouldn't necessarily differ between
> compilations triggered by a check-in and those triggered by a clock.
Of course not. But it's still not CI.
>
>> You could check out the Boost Regression Tests scripts to find out how
>> the XML is generated/handled, see:
>>
>> http://www.boost.org/development/running_regression_tests.html
>
> Thanks for the pointer. I've downloaded the code and started looking
> in the regression.py file.
>
> It looks like Boost gets the results in email format, in which case
> you'd want to append the exit code. Still, I don't see any major
> obstacle to having their regression test handler concatenate the
> output and an exit code stored in two separate files.
>
> I've put in a bug report on this, though now that I see how Boost is
> using it, I don't expect any progress to be made on it.
>
>> Perhaps Ryppl (http://ryppl.org) will help out here in the future.
>> Looks promising, even though I'd personally hoped for Mercurial
>> rather than Git, and perhaps Boost.Build rather than CMake ...
>
> Very interesting proposal. This implies Boost will switch to CMake as
> the preferred build tool, though I thought there was a thread on the
> Boost-users list two months ago implying that wasn't going to be the
> case. (I can't seem to find it at the moment.)
Yes, staying with Boost.Build seems to be the case.
I guess though that if Ryppl turns out to be a great tool, and it does not
support Boost.Build, this might change (unfortunately, IMO).
>
>>> Can I write my own rule to take the .output files and generate
>>> .pure-output and .exitcode files from them? I've never written my
>>> own rule, plus, it looks like I would want a generator that works
>>> with RUN_OUTPUT types, but because that target is hidden by the run
>>> and run-fail rules, I don't see how to get the name of that target,
>>> or how to use it without skipping the run and run-fail rules.
>>
>> Check out the Boost.Build extender manual if you have not already. You
>> should be able to do this using a custom generator (if you've never
>> written even a rule yourself, prepare for investing quite some time):
>>
>> http://www.boost.org/doc/tools/build/doc/html/bbv2/extender.html
>
> Yeah, I checked that out and it didn't really help. The extender
> example is about parsing a new source type, and there might be a few
> little comments on adding a new target type. But I would need to
> force my way into accessing an intermediate type, which I don't see. Plus,
> there's a bunch of "magic strings" in the example that aren't
> quite obvious to me. I'll try to write a more detailed email
> tomorrow.
>> [If you don't want to try out my other suggested approach, I'd still
>> personally go with the postprocessing approach until you can switch
>> to a more recent Boost.Test version]
>
> I'd like to come up with a solution, however temporary, before we try
> upgrading Boost. I'd rather have the tests supporting us first.
I didn't suggest upgrading all of Boost. I think it would be feasible to
just patch the Boost.Test parts (if you have Boost stored in a local repo),
or perhaps use bcp to extract the latest Boost.Test into its own namespace,
even though that would be a bit messy.
/ Johan