From: William E. Kempf (wekempf_at_[hidden])
Date: 2003-02-18 10:24:16
Douglas Gregor said:
> On Monday 17 February 2003 04:49 pm, Beman Dawes wrote:
>> At 02:00 PM 2/17/2003, Douglas Gregor wrote:
>> >They're always available here, regenerated nightly in HTML, DocBook,
>> FO, PDF, and man pages:
>> > http://www.cs.rpi.edu/~gregod/boost/doc/html/libraries.html
>> That really isn't very satisfactory. In the last hour for example,
>> pages on that web site have only been available sporadically. One
>> minute access is OK, the next minute the site or page can't be found.
>> No problems with other popular web sites.
> You probably caught me messing with the scripts (and therefore
> regenerating the documentation in-place).
Long term, this wouldn't be satisfactory. The documentation should be
generated in a separate location and then copied into place, to minimize
the amount of time in which the online distribution is impacted.
>> Having the docs locally on my own machine is just a lot more
>> satisfactory. Cheaper, too (my Internet access is metered service.)
> Well, you'll have the doc source on your machine, and can generate
> whatever format you want.
Not everyone will have the tools needed for generating docs. So I don't
think this is satisfactory either.
>> >We don't want to stick all of the generated HTML into CVS (too big).
>> If it is too big for the regular CVS, isn't it too big for the
>> distribution too? How big is big?
> The documentation isn't big (~650k, much smaller compressed). However,
> generated documentation tends to change a lot even with minor changes to
> the input, so unless someone has a good way to tell CVS "don't track
> any history for this file" then the CVS repository will get huge with
> the histories of these generated files.
A reasonable concern. But if we keep only release versions of generated
documentation in CVS, I don't think it will be too severe. Intermediate
doc changes would either have to be accessed directly from the web or
generated locally from CVS. Seems a fair compromise to this issue to me.
>> >Documentation changes will show up the next morning at the
>> >site. I'd like to add a link to this generated documentation on the
>> >main page (so it is obvious that both the current release
>> >documentation and the
>> >current CVS documentation are available on-line).
>> Seems like a step backward. We have a simple model now. Click on CVS
>> "update" (or equivalent in your favorite client) and you get the
>> latest version of all files. CVS is the only tool needed.
> Sure, but we also have documentation that's inconsistent across
> libraries, not indexable, and unavailable in any format other than
> HTML. Our current simple model is simple for simple uses, but doesn't
> extend to any more advanced cases.
But we have to meet all the needs, both simple and complex. So I think
some sort of compromise is needed here.
>> It really isn't practical for many Boost developers to download a
>> whole tarball and unpack it every time they want to be sure their
>> Boost tree is up to date. Unpacking doesn't do things like getting rid
>> of obsolete files either. Need a way to just download the changed
>> files - and that sounds like CVS to me.
> It's my hope that developers will adopt BoostBook for their own
> documentation. Then any time they want to be sure their local copy of
> the documentation is up-to-date they just regenerate the format they
> want locally. It's really not much different from rebuilding, e.g.,
> libboost_regex with Boost Jam.
Actually, today it's much different. There are no Jam files for producing
the documentation, and running the makefiles requires several tools that
not all developers will have on hand. In the future I expect we'll be able
to simplify the process, but you have to admit we're not there yet.
>> So I think we need to figure out a way for generated docs to work in
>> the context of CVS. Or am I just being too picky?
> If I can stabilize the filenames a bit, it _might_ be plausible to use
> CVS along with the "cvs admin -o" command, which can erase completely
> certain revisions of a file. It would be possible for a little grim
> reaper script to come by and erase all but the most recent version of
> each file on a nightly basis, after checking in the new version. Sounds
> tenuous to me...
That's why I think the release snapshot compromise is better. It will
still have issues with differing file names, but will have minimal impact
on the CVS repository.
>> >They will only break if the links try to link inside the
>> >documentation files,
>> >e.g., to a specific anchor. Links that go directly to the library's
>> >entry point (index.html) will find the meta-refresh index.html that
>> >redirects to the generated documentation. I've checked with inspect:
>> >nothing is broken.
>> Well, but that's because there are only three libraries being
>> generated now. Some libraries' docs do a lot more linking to other
>> Boost libraries' docs.
> It's easy to link out of the generated documentation to static
> documentation (of course), and it's much easier to link amongst
> entities in BoostBook than in HTML. For instance,
> <libraryname>Tuple</libraryname> will link to the Tuple library,
> regardless of where the HTML is (even if it isn't generated);
> <functionname>boost::ref</functionname> will link to the function
> boost::ref, regardless of where it is. Broken link detection is built
> into the BoostBook XSL, because it emits a warning whenever name lookup
> fails (and won't generate a link). What we do now is much more
> involved: find the HTML file and anchor documenting the entity we want
> to link, put in an explicit link <a href="...">, and checking the links
> will have to be run manually prior to a release.
The only issue lies in the transition period, when not all documentation
has been converted to BoostBook and some of the "static" documentation
needs to link into a library that's been converted.
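For readers who haven't seen it, the linking markup Doug describes would
look roughly like this in a BoostBook source file (the element names are
the ones he gives above; the surrounding structure is illustrative):

```xml
<!-- sketch only: cross-links by name, resolved by the BoostBook XSL -->
<para>
  See the <libraryname>Tuple</libraryname> library for heterogeneous
  containers, and <functionname>boost::ref</functionname> for building
  reference wrappers. The stylesheets resolve these names to the right
  URLs wherever the HTML lives, and warn (emitting no link) when name
  lookup fails.
</para>
```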
> Using generated documentation has some up-front costs: you'll need to
> get an XSLT processor, and maybe some stylesheets (if you don't want
> them downloaded on demand), and probably run a simple configuration
> command (now a shell script; will be in Jam eventually).
> The time savings from the generated documentation will come in little
> pieces: you won't need to keep the synopsis in sync with the detailed
> description, you won't need to keep a table of contents in sync, keep
> example code in a separate test file in sync with the HTML version in
> the documentation, or look up a link in someone else's library.
> BoostBook is meant to eliminate redundancy (except for XML closing
> tags; ha ha), and all the time we waste keeping redundant pieces in
> sync.
I think everyone is convinced that BoostBook is a good idea long term.
We just need to keep the short-term impact on the whole project as small
as we can.
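To make the up-front cost concrete, the manual pipeline Doug sketches
amounts to a couple of XSLT invocations, something along these lines
(the stylesheet paths and file names here are assumptions for
illustration, not the actual layout):

```shell
# illustrative only: BoostBook XML -> DocBook, then DocBook -> chunked HTML,
# using an XSLT processor such as xsltproc
xsltproc --xinclude -o tuple.docbook boostbook/xsl/docbook.xsl tuple.xml
xsltproc --xinclude -o html/ docbook-xsl/html/chunk.xsl tuple.docbook
```

So the one-time cost is installing an XSLT processor and the stylesheets;
after that, regenerating locally is a single command, much like rebuilding
a library with Boost.Jam.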
> There's an unfortunate Catch-22 with all this: to smooth the BoostBook
> learning curve would require further integration with the Boost CVS
> repository (not the sandbox), but we shouldn't integrate with Boost CVS
> until BoostBook has been "accepted" (whatever that means for a tool).
> But "acceptance" requires, at the very least, more developers to hop
> over the initial hump and to start seeing the benefits of BoostBook.
I think there are several of us interested who will be working on this
when time permits. But honestly, having it in the sandbox is at least a
little inconvenient... and to me it makes little sense if some released
documentation is going to depend on it.
-- William E. Kempf
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk