From: David Abrahams (dave_at_[hidden])
Date: 2007-09-29 18:49:37
on Wed Sep 26 2007, Jason Sankey <jason-AT-zutubi.com> wrote:
> David Abrahams wrote:
>> on Fri Sep 14 2007, Jason Sankey <jason-AT-zutubi.com> wrote:
>>> You may have noticed the discussion going on re: developer feedback
>>> systems. As part of this I have just started setting up a demo build
>>> server at:
>>>
>>> http://pulse.zutubi.com/
>>>
>>> At this stage it is an early demonstration of the server (Pulse)
>>> building the boost trunk. There is only one machine, but for
>>> illustration purposes I am running both the master and one agent on it.
>>> This shows the ability to build on multiple agents in parallel (to
>>> test the different OS/compiler combos). In this case I am testing two
>>> different gcc versions (the easiest to set up for the demo).
>>
>> Very cool to see it working. Sorry it's taken me so long to respond.
>
> OK, I thought for a bit that enthusiasm had been lost. There were a
> couple of quick and positive responses, though, and I'm glad you got a
> chance to take a look too.
Yeah, sorry -- things have been a little crazy over here.
>>> You might also notice that Pulse kicks off a build when it detects
>>> any change, and shows the change information (also linked to Trac for
>>> diff views etc). This should keep the machine busy, since a build takes
>>> over 2 hours (partly because two builds are running in parallel, but
>>> mostly just because the build takes that long). Perhaps there is a
>>> shorter build/test cycle that should be run on every change for faster
>>> feedback.
>>
>> I don't know how you're invoking the build, but if you're using
>> regression.py, there is an --incremental flag you can pass that avoids
>> rebuilding things whose dependencies haven't changed.
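>>
>> For example, something along these lines (the runner name is just a
>> placeholder and the rest of the option set is from memory, so check the
>> script's help output):
>>
>>   python regression.py --runner=my-runner --toolsets=gcc --incremental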
>
> I am actually invoking things directly using Pulse. Pulse checks out
> the source from svn and I use Pulse commands to run the build, in a
> similar way to how other testing scripts appear to work:
>
> http://pulse.zutubi.com/viewBuildFile.action?id=1015903
>
> I had some trouble figuring out the latest and best way to run tests,
> but this seems to work.
Seems OK.
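For anyone following along, the manual steps those scripts automate are
roughly as follows (from memory, and assuming the freshly built bjam ends
up on your PATH):

  # build bjam itself first
  cd $BOOST_ROOT/tools/jam/src && ./build.sh
  # then run the regression tests from the status directory
  cd $BOOST_ROOT/status && bjam toolset=gcc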
> The default Pulse behaviour is to do a full clean checkout and build.
> However, there is an option to switch to incremental builds, where the
> same working copy is used for every build after an svn update to the
> desired revision. The reason I steered clear is that I noticed a
> mention somewhere in the regression testing documentation that
> incremental builds were not 100% reliable.
It has the same unreliability that most projects' builds do: the
#include dependency checkers can be fooled by directives of the form

  #include MACRO_INVOCATION()

It's still very useful to do incremental builds, but it makes sense to
build from scratch once a day.
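For instance (a contrived sketch; the macro name is made up):

  // platform.hpp
  #define PLATFORM_CONFIG() "config/linux.hpp"

  // some_lib.cpp
  #include "platform.hpp"
  #include PLATFORM_CONFIG() // expands to #include "config/linux.hpp", but a
                             // scanner that only matches literal "file" or
                             // <file> forms never records that dependency

A checker that actually runs the preprocessor catches this; the fast
regex-based scanners most build tools use do not.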
> As suggested elsewhere, breaking things down library by library would
> also help. I have noticed a bit of discussion going around about this
> lately, and have to say that I think it would be very helpful for
> integration with Pulse.
That's good to know.
> Apart from faster builds, it would also make it easier to see the
> status of each library if it were a top-level Pulse project, and
> developers could then subscribe to email/jabber/RSS notifications
> for just the libraries they are interested in.
Interesting. So what, exactly, does Pulse need in order to achieve
these benefits? Reorganization of SVN? Separate build commands for
each library?
>>> 2) What important features you think are currently missing.
>>
>> Integration with the XML failure markup is the most crucial thing.
>
> OK. I need to understand these a bit better before I continue. I am
> not sure at what stage in the process these normally take effect.
IIUC they are processed by the code in tools/regression/xsl_reports/,
which currently runs on the servers that display our test results.
> I guess a lot of the failures I am getting now are actually known
> and included in this markup?
You can check by looking at status/explicit-failures-markup.xml in
whatever SVN subtree you're testing.
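The markup looks roughly like this (structure from memory, so treat it as
a sketch and check the real file; the names here are made up):

  <explicit-failures-markup>
    <library name="some_library">
      <test name="some_test">
        <mark-failure>
          <toolset name="gcc-3.4*"/>
          <note author="A. Developer">
            Known compiler bug; see the library docs.
          </note>
        </mark-failure>
      </test>
    </library>
  </explicit-failures-markup>

A failure matching a library/test/toolset entry there gets reported as
expected rather than as a regression.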
> I need to find some time to dig into this.
>
>>> 3) How some of the errors/failing tests can be resolved.
>>
>> Not connected to the 'net as I write this; I might be able to look
>> later.
>
> OK, thanks. Getting to a state where a normal build is green will make
> things a whole lot more useful.
If you're testing trunk, you may never get there because IIUC it isn't
very stable. I suggest you run your tests on the 1.34.1 release tag
at least until you see all green.
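I.e. something like (assuming the usual tag layout on our svn server):

  svn co http://svn.boost.org/svn/boost/tags/release/Boost_1_34_1 boost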
--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com