Boost Users :
Subject: Re: [Boost-users] [boost] Maintenance Guidelines wiki page (Revision 8)
From: Vicente Botet (vicente.botet_at_[hidden])
Date: 2008-11-26 12:39:50
----- Original Message -----
From: "David Abrahams" <dave_at_[hidden]>
Sent: Wednesday, November 26, 2008 5:42 PM
Subject: Re: [boost] Maintenance Guidelines wiki page (Revision 8)
> on Wed Nov 26 2008, "vicente.botet" <vicente.botet-AT-wanadoo.fr> wrote:
>> I have updated the Maintenance Guidelines wiki page.
>> I have taken into account the minor proof-reading modifications from
>> Steve Watanabe, which I had overridden by mistake (thanks Steve!), and
>> added a lot of things, maybe too much :)
>> Please read this mail completely before jumping to the page. This
>> page needs to be completed and reworked. Please feel free to add new
>> sections to the page, improve the current ones directly, or post your
>> comments or suggestions to this list.
> That page looks like a great start.
>> 3* Update the Developer, User and Release Manager guidelines cross references (7,8,9)
>> 4* Separate into 4 pages: Main, Documentation, Coding, Test and
> IMO you really need separate pages for the three audiences (Developer,
> User, Release Manager). There's no way users are going to read a page
> full of developer maintenance guidelines except as a point of interest.
Yes, Robert R. has already pointed out the same. I'll see what I can do.
> Frankly I think you _might_ be being a little too ambitious; I didn't
> expect anything other than guidelines for developers. If we end up with
> more than that, it's wonderful, but that's the most urgent need.
IMO, the users have an important role to play; they are the ones most interested.
>> Other points I see:
>> A* See if the Jamfile to test the headers from Steve can be adapted to
>> tests (Steve?)
> I don't know what that means.
See libs/units/test/test_header. I'll add some explanation to the guidelines.
>> 5.1.2 Include each couple of header files in both orders
> Testing all possible orderings seems impractical. I could be wrong of course.
I don't know. What seems impractical to you? If it is the time required, this can be reduced to the files of a single library. But the interesting test is the multi-library one.
>> B* See if the difference between source and binary compatibility is desirable (ALL)
> I don't know what that means.
Extract from http://apr.apache.org/versioning.html
We define "source compatible" to mean that an application will continue to build without error, and that the semantics will remain unchanged.
Applications that write against a particular version will remain source-compatible against later versions, until the major number changes. However, if an application uses an API which has become available in a particular minor version, it (obviously) will no longer build or operate against previous minor versions.
We define "binary compatible" to mean that a compiled application can be linked (possibly dynamically) against the library and continue to function properly.
Similar to source compatibility, an application that has been compiled against a particular version will continue to be linkable against later versions (unless the major number changes). It is possible that an application will not be able to successfully link against a previous minor version.
>> C* See if the functional tests can be written by a team of interested
>> users so as not to overload the developers. These tests could in
>> addition include multi-library tests and be stored in a directory
>> independent of the developers' one (Interested users?)
> Not a bad idea. Spreading the work around is always good. However, a
> very thorough developer is likely to write tests that exercise much more
> than users can, and to look at the corner cases. If we end up with
> redundant tests it will simply soak up the testing resources to little benefit.
You are right. Here is how I see it: if I were to start the functional tests of a library, I would start from its current unit tests and remove the parts concerning implementation details. As for duplicated tests, i.e. unit tests that are completely functional, the developer could remove them from his unit test list if he wants.
Otherwise, functional tests (regression tests) could be run in alternation with unit tests.
>> D* See which configurations (deprecated or not) could be tested by the current test
>> team. (Release Manager Team?)
> What does "configuration" mean in this context?
If we introduce deprecation, we might need to run the tests with and without the deprecated API, as we do now with the release and debug variants.
>> E* See how the following points can be automated and where this
>> information can be included in the documentation. A separate page
>> would make it possible for this task to be done by people other than
>> the developer. (Someone knowing the Trac system?)
>> 3.3. Include the tracked tickets in the Release notes
>> 3.4. List the test cases associated with the Trac system tickets
> Those are great ideas. The more cross-references, the better.
Is someone interested in seeing how this could be done?
>> F* See how the dependencies between libraries can be automated
> I don't know what an "automated dependency" might be. Can you be more
> explicit about what you mean to do automatically?
I've seen that the CMake team has done something to get the dependencies between the Boost libraries. If we add the Boost library versions, we will have automatically what we are looking for.
>> In order to know if I'm going in the right direction, I would like
>> interested people to reply to this post stating, for each section,
>> whether it is OK [O], must be completed [C]/improved [I], must be
>> updated [U], or must be removed [R]; and of course any idea is welcome.
> Hum, that will take me a little while. Will try to get to it in the
> next few days, but in the US we're beginning the Thanksgiving holiday.
> That will cut down substantially on peoples' availability (definitely my
> availability) in the near term.
Take the time you need.
Thanks for your comments and have a good Thanksgiving dinner,
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net