
From: Lubomir Bourdev (lbourdev_at_[hidden])
Date: 2006-10-14 15:52:54

Rasmus Ekman wrote:
> The worries I try to express are:
> (1) Will the design work out in the long run? - GIL is not proven yet.

Rasmus Ekman wrote:
> > we want to see GIL used extensively and make sure the fundamentals
> > are absolutely solid
> Exactly. Can I say it now? Me too!

Design, documentation, broad domain, code quality - these are all valid
criteria for a successful boost submission.
But does boost require that a library be extensively used by many people
before being accepted? How many boost libraries had been extensively
used and proven in the long run at the time of their initial acceptance?
Can we please have the same criteria that are applied to other boost
submissions applied to GIL?

That said, GIL is not a toy project. At least one feature in the next
generation of Photoshop will use GIL.
You can buy Photoshop Elements today, and its Photo Organizer mode will
let you find human faces in your photographs to help with tagging. This
feature uses a very early version of GIL. We have published a CVPR paper
on our (GIL-based) face detector and shown that it compares well against
the fastest and most accurate published face detectors.
GIL has also been ported to the camera-phone platform, where a feature
built on it automatically zooms in on the people in your picture.

We have been in contact with other companies that also use GIL.

So, having industrial-level applications using GIL is a good sign.
Of course, that doesn't mean that the GIL design is rock-solid and that
future changes won't be necessary. As with every library, we expect the
GIL code to evolve and improve over time.

> (2) Where is GIL going? - If it is accepted, this gives more
> or less blank cheque to make additions - or not.

That is true for every other library as well. We are continuing to work
on GIL and would like to see it improved, but we cannot promise any
specific improvements.

> From the download page:
> "It is in a very early stage of development; it is not
> documented and has not been optimized for performance."
> I'm sure you're partly just being modest here, but I didn't
> consider downloading it when the review was first announced,
> then didn't look back until yesterday.

Yes, when we said "not optimized for performance," what we meant is that
the code does not yet meet our internal expectations for performance.
That doesn't mean it is not fast enough for many real applications. For
example, our convolution uses compile-time recursion to avoid the
overhead of an explicit loop. You may find that some well-established
libraries don't do this.

> The ones where the comment just repeats the class or concept name.
> Eg, the full documentation for MutablePixelIteratorAdaptorConcept
> reads: "Pixel iterator adaptor that is mutable."
> Actually I'm suggesting they be removed to save us from
> skimming repetitions.

Yes, perhaps we should remove the comments for obvious classes.

> At the top levels of models and concepts there is hardly any
> documentation.
> It might be forgiven that Point, Pixel and Color have brief
> oneliners, but see Channel: "Concept for channel and mutable channel."
> Most users getting this far will know what it is, but having
> the word "color"
> somewhere on the page would not be affrontery would it?

Yes, we could add a few more sentences in the Doxygen structure to
describe the concepts channel, pixel, image, etc.
Please note though that a Doxygen structure is not even required for a
boost submission. Those concepts you mention are already described in
the design guide.

And again, let's compare with the other libraries. Granted, GIL is
larger than the average boost library, but we provide 40 pages of design
guide, 12 pages of tutorial, an hour of video presentation with 150+
slides, and a full Doxygen browsing tree.
The line count of our documentation far exceeds that of the GIL code.

That said, your comment is well taken and we will work on improving the
documentation and making it more streamlined.
Thanks for your input.


Boost list run by bdawes at, gregod at, cpdaniel at, john at