
From: Gennadiy Rozental (gennadiy.rozental_at_[hidden])
Date: 2007-06-05 01:43:38

"Stefan Seefeld" <seefeld_at_[hidden]> wrote in message
>> Why don't we fix'em in 1.35?
> because that is mere terminology. The question is not what we call the next
> release, but what to focus on now. Should the scope be regression / bug
> fixes, or new features? (It is evident that to us developers new features
> are always more appealing. But that's not how we can build a reputation for
> high-quality software.)

I don't really want to go into what is more important and how we need to
build something. In my proposal there is not much room for "we" at all.
Every individual developer has to make a decision independently. We (as a
community) can't wait for someone to fix a bug if that someone has opted to
work on something else first. And we (as a community of volunteers) do not
have much leverage to enforce it.

So my position is: STOP thinking about boost as a whole. We can't fight for
world peace.

We accept a library if at the time of the review we find it worth it.
We (or rather some of us) provide facilities for the developer to maintain
the library.
If a library is a "good citizen", we present it as a part of the approved
boost libs.
If a library is not maintained, it's either frozen in some stable state or
dropped altogether.

The only thing we can enforce: you can't break some other boost lib (good
citizen, remember!). Otherwise you're on your own.

>>>>> * We don't test the build and install process.
>>>> What do you want to test? In any case it doesn't make the release
>>>> "unstable".
>>> The release (well, in fact, packaging) process was delayed because a
>>> substantial number of bugs only turned up during that very last phase,
>>> simply because that wasn't tested at all. Had packaging (etc.) been part
>>> of the regular testing procedure, those bugs wouldn't have been present
>>> at that time in the release process.
>> I guess it would be nice. Do you know how to implement these tests in
>> practice (I mean without human involvement)?
> As Rene and Doug point out, the first thing to do is to run the same set
> of tests that are already there, not against a full boost source tree,
> but against an installed boost version. That lets us detect errors
> introduced during installation.
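The check proposed in the quote above amounts to verifying that what the
install step produced still matches the source tree. A minimal sketch of one
such check follows; all directory and header names here are made-up
placeholders for illustration, not Boost's actual layout or tooling:

```shell
# Placeholder trees standing in for a source checkout and an install prefix.
mkdir -p src/boost install/boost
touch src/boost/version.hpp src/boost/shared_ptr.hpp
# Simulate an install step that "loses" one header.
touch install/boost/version.hpp

# Report every header present in the source tree but absent after install.
(cd src && find boost -name '*.hpp') | while read -r h; do
    [ -f "install/$h" ] || echo "missing after install: $h"
done
```

Running the existing regression suite against the installed headers and
libraries (instead of the checkout) generalizes this idea: any header or
built artifact the packaging step drops or mangles shows up as a failure.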

I don't believe inventing new tests we have to run is our priority here. I
don't argue that it can be useful, but it's definitely a "later stage".
And another point: IMO we can't practically test everything that is useful.
Let's say we start releasing subsets. Do we want to check how tests behave
if a particular subset is installed? Two intersecting subsets? Two
independent subsets? We need to draw a line somewhere between what is
useful and what is required. If the regression testing setup is made easy
enough, hopefully we'll get some volunteers to run some "custom" useful
tests.

>> That's the problem. No one seems to make an effort to read what I
>> propose. My solution assumes that no testing is done during the release,
>> none whatsoever. Only components that are already tested and individually
>> released by developers will go into the umbrella boost release.
> OK, that's an interesting point, but it only displaces the problem. Then we

What problem?

> need to discuss the release procedure of those components, and talk about
> how these are stabilized. Etc.

Yes. That is what the second part of my proposal deals with.


Boost list run by bdawes at, gregod at, cpdaniel at, john at