Subject: Re: [boost] Formalising the review process into a well specified workflow (was: Re: [Boost-announce] [metaparse] Review period
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2015-06-03 11:23:04
On 3 Jun 2015 at 15:51, Andrey Semashev wrote:
> The rest are informal discussions that cannot result in a library
> inclusion but may serve development. I don't see why we would need to
> formalize such communications.
I think the problems with the existing documentation of workflow are
much more obvious if you're submitting a new library.
> > 4. There is no shortage of free web tooling which can automate the
> > ticking of those boxes and walk library authors through the
> > formalised procedure. Indeed, Boost already is on Google Apps, and
> > Google Forms is one of the best free web tooling for forms. Unlike
> > most other Boost infrastructure needs (hint - is my volunteering to
> > upgrade Trac approved? If so, a ball needs to start rolling) where
> > our infrastructure requirements simply aren't there yet, for Forms
> > and workflow programming we are ready to go.
> If I'm not mistaken, you already proposed an automated review process,
> like a checklist or something.
Ehh, sorta. It actually involves no forms or checklists at all.
What I have specced out here is a nightly cron job which spiders
Robert's Incubator and github/boostorg for git repos. It then git
fetches each and looks for a special YAML file in the meta directory.
That YAML file spells out all the details the automated scripts need, or at least as many of the script passes as the library author wishes to enable. The scripts are then run against the git repo, doing things like clang-tidy passes to check naming conventions, asking Trac and GitHub how many unresolved issues and unmerged pull requests there are, etc. The results are entered into a database.
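To make the shape of that concrete, here is a minimal sketch in Python. The flat key/value meta-file format, the pass names, and all function names below are invented for illustration; a real implementation would use a proper YAML parser and actually invoke clang-tidy and the issue-tracker APIs.

```python
import re
from dataclasses import dataclass, field

@dataclass
class LibraryMeta:
    """Details a library author opts into via the YAML file in meta/."""
    name: str = ""
    passes: list = field(default_factory=list)  # script passes the author enabled

def parse_meta(text):
    """Parse a (hypothetical) flat 'key: value' meta file.

    Only a trivial subset is handled here; a real implementation
    would use a YAML library.
    """
    meta = LibraryMeta()
    for line in text.splitlines():
        m = re.match(r"^(\w+):\s*(.*)$", line)
        if not m:
            continue
        key, value = m.groups()
        if key == "name":
            meta.name = value
        elif key == "passes":
            meta.passes = [p.strip() for p in value.split(",")]
    return meta

def run_passes(meta, results_db):
    """Run only the passes the author asked for and record the results."""
    available = {
        "naming": lambda: "clang-tidy naming-convention check would run here",
        "issues": lambda: "GitHub/Trac issue and PR counts would be fetched here",
    }
    results_db[meta.name] = {
        p: available[p]() for p in meta.passes if p in available
    }

db = {}
meta = parse_meta("name: example.lib\npasses: naming, issues")
run_passes(meta, db)
```

The key point is that the author opts in per pass: anything not listed in the meta file is simply skipped by the nightly job.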
A separate web service provides a way of displaying the database of
results according to any arbitrary database query. My idea was that
anyone wishing to include a live display of any indexed libraries
with any custom query into their website could do so. My initial
thoughts were for Robert's Incubator, and the Boost main download
page, but one could also generate Slack notifications, Atlassian
integration, or even just an RSS feed of recent updates to Boost
libraries. It's a web service reusable for any purpose people can
dream of, hopefully.
I also see no reason why the index wouldn't index any C++ library
that wants it, with whatever script passes its author chooses. Boost
libraries simply get a boost tag, that's all.
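A rough sketch of how such tag-based queries over the results database might work; the record layout and the query helper below are invented purely for illustration:

```python
# Sketch of querying the results database: Boost libraries carry a
# "boost" tag alongside any other indexed C++ libraries.
records = [
    {"name": "example.afio", "tags": {"boost", "filesystem"}, "open_issues": 3},
    {"name": "example.json", "tags": {"header-only"}, "open_issues": 0},
]

def query(records, *, tag=None, max_open_issues=None):
    """Return records matching an arbitrary combination of filters,
    much as the web service would for any custom query."""
    out = records
    if tag is not None:
        out = [r for r in out if tag in r["tags"]]
    if max_open_issues is not None:
        out = [r for r in out if r["open_issues"] <= max_open_issues]
    return out

boost_only = query(records, tag="boost")
```

A consumer (the Incubator, a download page, an RSS generator) would render the returned records however it liked; the service itself only answers queries.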
Anyway, all of that is shelved until after CppCon, now that AFIO is
up for review at the end of July.
> I'm strongly opposed to any automated
> review scheme where expert opinions are not involved or required. I
> believe human review is the cornerstone of the whole review process
> and not any formal checks like directory layout, test coverage, VCS
> and build system used and so on. I'm not opposed to automating such
> checks but only to help the review manager and the author to assess
> whether the library is ready for inclusion (whether such assessment is
> done before or after the review).
I have no problem with expert review as the final stage for the
absolute top-end libraries highly likely to enter the C++ standard.
I think relaxations of, and alternatives to, the expert review for
other, intermediate stages of review make enormous sense if Boost is
to stay relevant into the future. I see no point in there being
exactly one process, completely unchanged since 2005: it may have
made sense in 2005, but it does not in 2015, especially after the
stagnation of 2013 and the recent exodus of so many of the former
big hitters.
-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk