
From: Jason Sankey (jason_at_[hidden])
Date: 2007-09-29 23:35:29


David Abrahams wrote:
> on Wed Sep 26 2007, Jason Sankey <jason-AT-zutubi.com> wrote:
>
>> David Abrahams wrote:
>>> on Fri Sep 14 2007, Jason Sankey <jason-AT-zutubi.com> wrote:
>>>> You may have noticed the discussion going on re: developer feedback
>>>> systems. As part of this I have just started setting up a demo build
>>>> server at:
>>>>
>>>> http://pulse.zutubi.com/
>>>>
>>>> At this stage it is an early demonstration of the server (Pulse)
>>>> building the boost trunk. There is only one machine, but for
>>>> illustration purposes I am running both the master and one agent on it.
>>>> This shows the ability to build on multiple agents in parallel (to
>>>> test the different OS/compiler combos). In this case I am testing two
>>>> different gcc versions (the easiest to set up for the demo).
>>> Very cool to see it working. Sorry it's taken me so long to respond.
>> OK, I thought for a bit that enthusiasm had been lost. There were a
>> couple of quick and positive responses, though, and I'm glad you got a
>> chance to take a look too.
>
> Yeah, sorry -- things have been a little crazy over here.

No problem! We all know what it's like.

<snip>

>> The default Pulse behaviour is to do a full clean checkout and build.
>> However, there is an option to switch to incremental builds, where the
>> same working copy is used for every build after an svn update to the
>> desired revision. The reason I steered clear is that I noticed a
>> mention somewhere in the regression testing documentation that
>> incremental builds were not 100% reliable.
>
> It has the same unreliability that most projects' builds do: the
> #include dependency checkers can be fooled by directives of the form
>
> #include MACRO_INVOCATION()
>
> It's still very useful to do incremental builds, but it makes sense to
> build from scratch once a day.

I see. I guess the concern is that if incremental builds are known to
have issues, the attitude towards them will be different: if people are
used to builds failing due to incremental problems, they may begin to
ignore failures. This is especially true of people who have previously
wasted time tracking down a failure that turned out to be caused by an
incremental issue. If the problems are extremely rare this might not
matter, and I can definitely set it up to see what happens. The
potential benefits certainly make incremental builds worth a try.
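
To make the failure mode concrete, here is a minimal sketch of the kind
of directive being described (the macro and header below are invented
purely for illustration; real code would typically select a platform-
or compiler-specific header):

    // Legal C++, but a dependency scanner that only recognises literal
    // "#include <header>" lines never records the real dependency, so
    // an incremental build can miss a needed recompile.
    #define PLATFORM_CONFIG() <cstddef>  // imagine this picks a platform header

    #include PLATFORM_CONFIG()           // expands to: #include <cstddef>

    int main() { return 0; }

IIUC Boost's config headers use macro-selected includes in a similar
way, which would explain how a scanner can miss dependencies there.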

>> As suggested elsewhere, breaking things down library by library would
>> also help. I have noticed a bit of discussion going around about this
>> lately, and have to say that I think it would be very helpful for
>> integration with Pulse.
>
> That's good to know.
>
>> Apart from faster builds, it would also make it easier to see the
>> status of each library if it were a top-level Pulse project, and
>> developers could then subscribe to email/jabber/RSS notifications
>> for just the libraries they are interested in.
>
> Interesting. So what, exactly, does Pulse need in order to achieve
> these benefits? Reorganization of SVN? Separate build commands for
> each library?

The most important thing would be the ability to build and test a
single library. In the simplest case this could involve checking out
all of Boost and having all dependent libraries built on demand when
building the library of interest. Then the tests for the library of
interest could be executed and the results output in some readable
format (like the current test_log.xml files). This wouldn't necessarily
require any reorganisation of Boost: I guess building a library
independently is already possible; I'm just not sure about running the
tests.

Further down the track, more optimisations could be made. Reorganising
Subversion to allow just the library of interest to be checked out
could help a little (although this won't save much real time). More
important would be allowing pre-built versions of the library's
dependencies to be picked up, so that the build time is reduced.

>>>> 2) What important features you think are currently missing.
>>> Integration with the XML failure markup is the most crucial thing.
>> OK. I need to understand these a bit better before I continue. I am
>> not sure at what stage in the process these normally take effect.
>
> IIUC they are processed by the code in tools/regression/xsl_reports/,
> which currently runs on the servers that display our test results.
>
>> I guess a lot of the failures I am getting now are actually known
>> and included in this markup?
>
> You can check by looking at status/explicit-failures-markup.xml in
> whatever SVN subtree you're testing.

OK, thanks for the pointers. Hopefully I will have a chance this week
to take a look.

>> I need to find some time to dig into this.
>>
>>>> 3) How some of the errors/failing tests can be resolved.
>>> Not connected to the 'net as I write this; I might be able to look
>>> later.
>> OK, thanks. Getting to a state where a normal build is green will make
>> things a whole lot more useful.
>
> If you're testing trunk, you may never get there because IIUC it isn't
> very stable. I suggest you run your tests on the 1.34.1 release tag
> at least until you see all green.

OK. This is interesting, because in my experience this will greatly
reduce the value of automated builds of the trunk. The problem is
basically broken windows syndrome: if it is normal for the build to be
broken, people care less about breaking it even further. Perhaps it is
expected that the trunk is unstable and that other branches are used to
stabilise releases. Even then, though, if people are not taking care
the trunk can drift further and further from stability, making it a
real pain to bring everything up to scratch for a release. For this
reason my personal preference is to have the trunk (as the main
development branch) be stable and green at all times and for any
unstable work to happen on isolated branches. Of course all of this is
just my opinion so feel free to ignore me :).

This may also be another argument for splitting things up by library.
At least that way libraries that do keep a green trunk can get the
benefits without the noise of failures in other libraries.

Cheers,
Jason

