Subject: Re: [boost] [xint] Formal Review Result
From: Barend Gehrels (barend_at_[hidden])
Date: 2011-05-03 13:37:09
Thanks for your reply.
>>> The overall picture is still that the votes are split, and I did not use specific percentages to make a decision.
>> In this case it goes (fraction fully counted) from 5/7 to 10/8, flips
>> from negative to positive. Quite a difference. Even if the decision
>> stays the same, it requires an extra motivation for rejecting the library.
> This is not something I'd agree with. 10/8 is actually 55%. That's not
> a sufficiently wide margin that the simple counting of votes can reasonably
> determine outcome.
I meant that it goes from a net -2 to a net +2. But I agree that a
negative decision is still defensible at this percentage; the votes are
still split.
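For clarity, a quick sketch of the margin arithmetic being discussed (the `split` helper below is just mine for illustration, not part of any Boost tooling):

```python
def split(votes_for, votes_against):
    """Percentage of votes in favor, out of all votes cast."""
    total = votes_for + votes_against
    return votes_for / total * 100

# Initial tally: 5 in favor, 7 against -> about 41.7% in favor, net -2
print(f"5/7:  {split(5, 7):.1f}% in favor, net {5 - 7:+d}")
# Corrected tally: 10 in favor, 8 against -> about 55.6% in favor, net +2
print(f"10/8: {split(10, 8):.1f}% in favor, net {10 - 8:+d}")
```

So the corrected count is roughly 55.6% in favor, which is indeed not a wide margin.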
What happened has happened, and apologies are accepted. We have to
prevent it from happening again.
To summarize the suggestions made on this list:
- a scoreboard is a simple means, and yes, I think it would be useful
- consulting the library author, as Joachim suggested, is even simpler
and would guarantee that no vote is missed (honesty assumed)
- marking a review as a "review" in the subject line, as Matt suggested,
is also quite simple and would also be very effective
- what happened to Joachim's suggestion of a Review Manager
Assistant? That sounded quite promising.
- and I don't know whether this has been suggested before, but I would
like to see a review team, consisting of (e.g.) three people:
1) the review manager
2) the review manager assistant
3) one of the review wizards
Number 3) would "guarantee" that the review process stays more or less
consistent, that not every review manager has to reinvent the wheel, and
(worse) that no one makes decisions on other grounds. Number 2) would
share the workload, e.g. carefully reading and classifying review
results. The three together always produce a clear outcome (either 3/0
or 2/1, for either acceptance or rejection - based on the review votes,
of course). As Joachim said (IIRC), serving as 2) would be a good
starting point, and could be made a prerequisite for serving as 1), or
even for submitting a library for review.
With such a review team, reviews cannot be missed and review reports
will not be postponed forever (as has happened in the past). I realize
it requires more people and more communication, but on the other hand it
spreads the workload of number 1) and it will lead to better review
reports.