From: David Abrahams (dave_at_[hidden])
Date: 2003-11-01 16:08:53
Rob Stewart <stewart_at_[hidden]> writes:
> OTOH, maintaining an issues list for each library, accessible
> from that library's main page at boost.org, would avoid the need
> to search the list archives or duplicate discussions of problems
> already visited. Thus, an issues list may make things nicer for
> the list and library users, but I'm not sure that makes them
I rather like that idea for Boost.Python, actually. I have been
collecting that sort of information in disparate places but an issues
list could be a big help, especially for the many people who would
like to make a contribution to the libraries. Unfortunately, we don't
have any Boost working group or meetings, so I wouldn't have the
benefit of others' experience in making decisions about the issues.
The process for dealing with the issues doesn't map 1-1.
>> No, such a procedure is not currently available in any shape or form. There
>> is no established way to invest a significant amount of your time (by
>> writing a short paper, for example) in order to _ensure_ that your
>> suggestion will receive any attention.
> You misunderstood me here. I was saying that the maintainers
> have their own To Do list of things that have been requested,
> bugs to fix, etc. An individual may apply great formality to
> that process unbeknownst to Boost.
An individual maintainer or user?
> I wasn't trying to imply that there is such an issues list
> managed by Boost, but I was acknowledging that one could be
>> The std-centric focus has always been quite upfront, actually.
Yes, since day one. It has always been, in part, Boost's raison
d'être.
>> I agree that
>> the reviews seem progressively harsher, but I don't think that "stdism" is
>> the only reason.
> My impression has been that the talk of standardization and the
> number of folks thinking in terms of eventual standardization of
> code have increased.
Possibly because some of the libraries have been accepted for TR1?
> A library can be made worthy of standardization in isolation, but
> much more is learned by its exposure to others. That's part of
> what the reviews do now. However, raising the bar too high makes
> the cost of entry too high for many.
That might be a good thing (?) Do we want libraries that can't make
it over the bar?
> I'm advocating an initial review to ensure reasonable quality
> such that the library can enter into a period of widespread use
> and critique.
I think that's what the "preliminary submission" and "refinement"
phases of the process are supposed to do?
I don't think you can bet on or hope for widespread use before the
library is actually accepted, though.
> A subsequent review for final acceptance can be demanding, but the
> initial review need not be so. Concerns wrt standardization can be
> softened for the initial review, for example.
IMO that is already accommodated in the submission process.
>> (pardon the pun). And finally, experimental libraries with no
>> significant history and user base will naturally be viewed with
>> more suspicion.
> Isn't that just the type of library that should be accepted into
Only if it passes scrutiny and sufficient people think the design is
both good and useful.
> That gives such libraries wide exposure so folks can gain experience
> with the ideas and design of them.
Boost is not just a place to get exposure; there are plenty of other
ways to do that. People make outlandish claims for their libraries
("2500% speedup over std::string") on clc++m and get a lot of
attention ;->. Boost's peer-review process is a fundamental part of
why we have a reputation for high quality, and I would never want to
sacrifice that so we could turn it into a way for experimental library
authors to get attention.
> Andrei garnered great interest in Mojo without Boost's aid, but how
> many are there in the C++ world with the same name recognition?
> Boost shouldn't accept just any library that comes along, but is it
> necessary to be fully critical from the outset?
No, it is not. The Boost submission process is designed to allow a
gradual ramping up of scrutiny. If that's not happening it might be
because some people are too busy to look at incomplete submissions in
detail. I know that once something reaches the formal review stage,
if the documentation isn't readable, understandable, and
reasonably complete, I simply don't have time to pursue the library
much further. Trying to find bugs in code without a specification is
kind of crazy.
> Maybe there should be two kinds of acceptance: experimental and
> full. (Pick better names if you like.) The former means a
> library is too new or controversial to gain the full faith and
> confidence of Boost, but it looks promising, is well designed and
> documented, and deserves exposure.
To get exposure, put it in the sandbox and make a list announcement
describing the library.
> The latter means a library has garnered wide use and has a proven
> design and implementation, and is thus worthy of the full faith and
> confidence of Boost.
That sounds like a lot more work for the Boost membership and
> Neither the sandbox CVS nor Yahoo serve the "experimental"
> purpose. The Yahoo files area is a place where lots of code
Probably a bad thing.
> and there is no visibility on the web site for the
> individual files/libraries in either the files area or the
> sandbox CVS.
Probably a good thing.
> (Not that there should be for all that's found in those places.)
>> Add to that that the Boost way is to accept once and then grant the author
>> almost free rein to modify the library without subsequent reviews, and that
>> Boost the library / the download has already grown to the point where we are
>> forced to put more emphasis on quality over quantity.
> That is something that deserves attention, too. Bug fixing,
> refactoring of implementation details, most documentation updates,
> etc. don't usually warrant reviews. Interface changes, at least
> those of any significance, and significant documentation changes,
> OTOH, do warrant reviews.
Once again that makes more work for the administrative structure and
membership. What we're doing is working, IMO. There's plenty of
discussion. A library author/maintainer's sense of ownership is an
important ingredient that makes Boost work. Why should we change it?
> The current process is to hammer the design and documentation
> really hard up front
It doesn't look that way to me, and it's not supposed to be that
way. When we review, we scrutinize, yes.
> and cross your fingers that the ideas are indelibly imprinted on the
> submitter's mind so subsequent redesigns, refactorings, and other
> updates won't deviate too far from the goal. However, with no
> verification, too many library users will just "go with the flow"
> and accept what comes unless they are "forced" into a review mindset
> or they just wake up one day and realize things have gone awry.
What goes awry in Boost? Do you have a single example of this?
>> > Reviews of later versions should, rightly, be demanding, but that
>> > seems to apply to all reviews these days.
>> Except that there are no reviews of later versions. ;-)
> Right, but there should be
>> > There are many libraries that are *high* quality, usable, largely
>> > portable, and reasonably well documented that are not LWG ready.
>> > Should that preclude them from submission to Boost?
>> There is no single correct answer to that question.
> There are only two answers to the question: yes or no. Individual
> libraries can still be judged on merit and subjective quality, but
> that doesn't preclude answering the question.
> If there are to be criteria for inclusion, they must be clearly
> stated. The current criteria given in Boost Library Requirements
> and Guidelines leave a lot of room for misinterpretation (one
> person's "quality programming practices" are not necessarily
> another's, for example).
Yep. Boost runs on a process which allows for human beings and their
subjective interpretations. Personally, I like being honest about
that, because it's realistic. You can't expect to find a list of
objective criteria which will actually work.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk