Subject: Re: [boost] [OT] Open Source Forking and Boost (was Re: [SQL-Connectivity] Is Boost interested in CppDB?)
From: Dean Michael Berris (mikhailberis_at_[hidden])
Date: 2010-12-19 04:50:29
On Sun, Dec 19, 2010 at 12:26 AM, Jim Bell <Jim_at_[hidden]> wrote:
> Thanks for your thoughtful reply.
You're welcome, I do my best. :)
> On 1:59 PM, Dean Michael Berris wrote:
>> WARNING: This is a long post. The tl;dr version is: 1) follow the
>> Linux Kernel development model 2) Employ the web of trust system using
>> GPG keys 3) we need a Boost Foundation like the Linux Foundation 4) we
>> need to lower the barrier to entry for would-be contributors 5) use
>> Git/Mercurial for distributed development.
>> On Fri, Dec 17, 2010 at 11:23 PM, Jim Bell <Jim_at_[hidden]> wrote:
>>> The crux of Boost.Guild's debate. And so many topics touch on this.
>>> So how would you measure, or design a test, to determine how badly
>>> things would get screwed up under various scenarios?
>> [snip excellent but time-consuming ideas...]
>> A Boost foundation that has stakeholders funding it to ensure that
>> Boost keeps going as a project and compensate those people who would
>> do this full-time but otherwise can't because of their employment (or
>> lack thereof) would be a Good Thing (TM) IMHO.
> Compensating people would definitely bring a solution. So is the
> solution to pursue that aggressively? How many FTE's are on staff with
> Linux? How many would we ask?
I'm not sure of the exact numbers for the Linux Foundation, but as far
as I know this information is public. Just like any non-profit, it
should be easy to come by, as they have to disclose their operational
records. A quick look at the website says there are a number of
fellows and staff on the payroll.
> Does the Linux kernel have metrics? How long does an average change sit
> in the queue?
Yes. Good changes that get community approval rarely sit longer than
two weeks -- last I checked. Your changes usually get upstream really
quickly, especially if they land within the active development window
and the community likes what you've done. Code review happens actively
on the mailing list, which is the central hub for changes.
Larger projects, though, typically take a lot of time, because they
get broken up into smaller chunks that are introduced into the kernel
a piece at a time over a number of releases -- this is how they deal
with eventually-breaking changes to the API or the internals; it
allows other developers to migrate code that depends on the old API to
the new API over a period of time.
>>>> Thinking out loud here... one option might be for someone to say "I'm
>>>> going to try and give library X a decent update" and solicit the
>>>> mailing list for bug/feature requests, then carry out the work in the
>>>> sandbox and ask for a mini-review - at which point you're sort of
>>>> lumbered with being the new maintainer ;-)
>>> If someone is that motivated. But could something useful happen if ten
>>> people, each 1/10th as motivated, were to apply themselves?
>> I think having to announce it on the mailing list and ask for
>> permission is the wrong way of going about it. If someone wants to
>> do it, they should be able to do it -- and if their code is good and
>> the community (or the people the community trusts) vote with their
>> pulls, then it gets folded in appropriately. As for having 10 people
>> work on it, I don't think it will change the outcome much.
> I'm much less concerned about a library's future direction than I am
> its present quality.
But improving the quality (or at least maintaining it) *is* the
future direction of a library. "Staying put" is a direction too (or a
lack thereof, depending on how you look at time. :D).
>> The current system -- the maintainers being BDFLs of the projects
>> they "maintain", not allowing anybody else to take a library in a
>> different direction, and not letting the community have a choice in
>> the matter -- is, I think, not scalable. I would much rather have 10
>> implementations of a Bloom filter, let people choose which one is
>> better, and then have that implementation pulled into the main
>> release. The same goes for all the libraries already in the
>> distribution.
> Ten implementations makes everyone spend time deciding on which to
> adopt, and multiplies the MIA maintainer problem. (Will my choice's
> maintainer go MIA? How do I know?) A peer-reviewed best helps me more as
> a user.
No, actually: the maintainers still choose which ones get baked into
the official release. This just means that the choice of which one is
a community-wide decision, which the maintainers then ratify.
Let me try to explain.
For example, someone else wants to work on a competing cpp-netlib
implementation that wants to make it into Boost. The current process
means that that person would have to do all the work outside of Boost,
submit it for review, have that review scheduled, and wait until the
review finishes and the review manager announces whether the library
is accepted or rejected.
Now in the process I'm proposing (which mirrors the Linux Kernel
development effort), anybody can simply clone the Boost libraries they
depend on, keep working on their implementation, and involve the whole
Boost community in the effort if there's enough interest. Which
version of the network library gets accepted is then just a matter of
people voting by expressing their interest in the implementation they
want pulled into Boost -- this can happen anytime, and can even be a
prolonged process instead of just a week of reviews.
This on-going review process allows people to collaboratively develop
a library that eventually gets into Boost when it's mature enough to
be taken into the main Boost distribution.
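Concretely, that clone-and-merge flow could look something like the
sketch below -- plain git, with a local directory standing in for the
official Boost tree (all repository and branch names here are
hypothetical, purely for illustration):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the official Boost repository.
git init -q upstream
(cd upstream \
  && git config user.email boost@example.org && git config user.name Boost \
  && echo 'netlib v1' > netlib.txt \
  && git add netlib.txt && git commit -qm 'official netlib')

# A contributor clones it and works independently -- no permission needed.
git clone -q upstream contributor
(cd contributor \
  && git config user.email dev@example.org && git config user.name Dev \
  && git checkout -qb competing-netlib \
  && echo 'netlib v2 (competing implementation)' > netlib.txt \
  && git commit -qam 'rework the network library')

# If the community "votes with their pulls", a maintainer merges the branch.
cd upstream
git fetch -q ../contributor competing-netlib
git merge -q --ff-only FETCH_HEAD
cat netlib.txt
```

The point of the sketch is that nothing gates the contributor's work
until the merge at the end; review happens continuously on the clone,
and the maintainer's act is just the final pull.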
This way, it's still peer-reviewed, and the review is an on-going
process. It's a lot more fluid, and the barrier to entry is
significantly lower. This also means that who the maintainers are
would be a matter of election and expression of trust -- which means,
for example, I definitely would trust someone like Marshall Clow or
Steven Watanabe to be a maintainer of something like a Netlib
implementation in Boost. I, as the main developer of cpp-netlib, along
with the other developers already credited in the implementation,
could then continue as developers pushing the boundaries instead of
being bogged down as mere maintainers of the library.
This turns the process upside-down -- anybody can write a library and
get it up to Boost-level standards, and people who want to contribute
can do so in a purely meritocratic environment. If your code is good,
your code gets in -- and the more code you have in the project, the
more trust you gain from other contributors. Then, when an
implementation/design reaches Boost-level quality, the community can
vote it into the main Boost distribution.
This also then allows people to have roles such as support engineers
who patch officially released code. Some people -- like me -- like
living on the bleeding edge and pushing the boundaries of what's
possible in a library. Others like to keep things stable and
contribute to fixing issues that appear in releases. This means Boost
can keep a stable major release line -- people keep contributing to
the version 2 line and keep its interfaces stable -- while everybody
who likes to push the boundaries works on the next version. Some
people might even want to keep supporting older major Boost releases
for a longer period of time -- this way there are more opportunities
for people to contribute.
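Mechanically, that stable-line/bleeding-edge split is just a release
branch plus cherry-picked backports. A minimal sketch (branch names
and file contents are made up for illustration):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q boost && cd boost
git config user.email rel@example.org && git config user.name Rel

# Release 2.0: the stable line that support engineers maintain.
echo 'v2 interface' > api.txt
git add api.txt && git commit -qm 'release 2.0'
git branch release-2

# Mainline moves on: boundary-pushing work for the next version.
echo 'v3 experimental interface' > api.txt
git commit -qam 'start v3 work'

# A bug fix lands on mainline that the stable line also needs.
echo 'fix applied' > bugfix.txt
git add bugfix.txt && git commit -qm 'fix crash on teardown'
fix=$(git rev-parse HEAD)

# Backport only the fix; the v2 interface stays untouched.
git checkout -q release-2
git cherry-pick -x "$fix" > /dev/null
cat api.txt    # still the stable v2 interface
```

Support engineers live on `release-2` and only ever cherry-pick fixes;
the experimental interface work never reaches users of the stable line.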
I hope that makes sense, and if in case it's not clear yet, I can
elaborate on it more -- most probably when I get to my final
destination as I'm writing this in the Hong Kong International
Airport, en route to Sydney, Australia. :D
Have a good one and I hope this helps!
-- Dean Michael Berris about.me/deanberris
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk