Boost :
From: Simonson, Lucanus J (lucanus.j.simonson_at_[hidden])
Date: 2008-04-30 20:18:02
In response to Phil,
>Although it's good to have the code, and no doubt some people who can
>scan C++ faster than I can will really appreciate it, what I'd love to
>see is more in the way of rationale and concept-documentation. For
>example:
I uploaded 23KLOC. I don't expect people to give up their day jobs just
to read my code, so your request seems quite sensible.
>- My recollection of the last part of the discussions the first time
>around was that they focused on the "nasty" way in which you made it
>possible to adapt a legacy struct to work with your library, and in
>particular how you added methods to the class by casting from a base
>class to a subclass. It would be great to see a write up of the
>rationale for that compared with the alternatives. Perhaps this could
>just be distilled out of the previous discussions. My feeling is that
>it may come down to this: what you've done is the most pragmatic
>solution for your environment, but it isn't something that could ever
>make it into the C++ standard library (since it used casts in a
>non-standards-compliant way). So, should Boost only accept libraries
>that could be acceptable for C++, or could Boost have a more liberal
>policy? Also, how much weight should be put on the "legacy" benefits
>of your approach? My feeling is that the standard library, and Boost,
>typically prefer to "do it right as if you could start all over again",
>rather than fitting in with legacy problems.
The rationale was basically that it satisfied everyone's requirements at
the time. Boost community input wasn't gathered, so strict compliance
with what the standard guarantees is safe wasn't a requirement. I can't
answer your question on what policy Boost should have. Compatibility
with legacy code is really the crux of the issue. Were my design goals
the right ones? For internal development, I think they were. For Boost
development, probably not, and I'm willing to change the design to
reflect the change in goals. I'm hoping for dialogue on what the new
design should be, to prevent unnecessary iterations.
>- Your library has a limited scope: 2D orthogonal and 45-degree lines.
>(And its name ought to include some indication of that.) I would like
>to see some exploration of in what way your interface (as opposed to
>your algorithms) is tied to this domain, i.e. to what extent your
>interface could be re-used for a more general or differently-focused
>library. For example, could you have a Point concept that could be
>common with Barend's library, allowing Federico's spatial indexes to be
>used with both? Or do you require (e.g. for algorithmic
>efficiency reasons) a point concept that is inherently incompatible?
For me, requiring a point concept that has x() and y() member functions
is unnecessary and restricts the usefulness of the library. I could
obviously make such a requirement, but I would prefer to have adaptor
functions such as:
coordinate_type point_interface<T>::getX(const T& point);
which allow compatibility with anyone's point concept, rather than
requiring that everyone have syntactically compatible point concepts.
Federico's spatial indexes should be compatible with both libraries
already, provided they operate on conceptually 2D points, regardless of
what API those points provide or what concept they model. Even if I took
out the inheritance/casting, I still wouldn't require a specific API on
the user type. Shouldn't generic code ideally work with any type that is
conceptually a point, rather than only with types that model the point
concept it sets forth?
>- There are plenty of application domains for computational geometry.
>Presumably you're processing chip layouts. The other case that I can
>think of for orthogonal geometry is in GUIs (until you have windows
>with rounded corners). Everything else that I can think of (GIS,
>games, mechanical CAD) needs arbitrary angles or 3D. You may be
>proposing something that no-one else here has any use for, except as a
>starting point for more geometry libraries in Boost - in which case your
>concepts and other interface choices will be given a lot more attention
>than your algorithms.
This is why I suggested synthesis with other library proposals. We have
an adaptive scheme in one of our applications that uses the rectilinear
algorithms when the input is purely rectilinear, the 45-degree
algorithms when the input contains 45-degree edges, and legacy general
polygon algorithms (not included in my submission) for general polygon
inputs. A good set of general polygon algorithms (including numerical
robustness) would complement what I am providing.
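The classification step of that adaptive scheme could be sketched as
follows. Edge, GeometryClass, and classify are hypothetical names for
illustration, not part of the submission; a driver would call the
rectilinear, 45-degree, or general algorithm based on the result:

```cpp
#include <cstdlib>
#include <vector>

// A hypothetical edge with integer endpoints.
struct Edge { int x1, y1, x2, y2; };

enum GeometryClass { RECTILINEAR, FORTY_FIVE, GENERAL };

// Classify an input edge set by the most general edge it contains:
// axis-parallel only -> RECTILINEAR; axis-parallel plus 45-degree
// diagonals -> FORTY_FIVE; anything else -> GENERAL.
GeometryClass classify(const std::vector<Edge>& edges) {
    bool saw45 = false;
    for (std::size_t i = 0; i < edges.size(); ++i) {
        int dx = std::abs(edges[i].x2 - edges[i].x1);
        int dy = std::abs(edges[i].y2 - edges[i].y1);
        if (dx == 0 || dy == 0) continue;  // horizontal or vertical edge
        if (dx == dy) saw45 = true;        // 45-degree diagonal
        else return GENERAL;               // arbitrary angle: stop early
    }
    return saw45 ? FORTY_FIVE : RECTILINEAR;
}
```

The scan is a single O(n) pass, cheap relative to the algorithms it
selects between.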
I'm not sure your assessment of restricted applicability is entirely
true. I recently interviewed a PhD student whose thesis was on
networking; he wrote a suboptimal algorithm for computing the
connectivity graph on a set of axis-parallel rectangles to model network
connectivity requirements between nodes. He managed to turn it into
seven different publications, mostly at DARPA-sponsored networking
conferences and workshops, on the strength that it was at least better
than the previously published algorithm in the field. I asked him why he
didn't use an R*Tree, and he had never heard of it. I asked him about
scanline and got a blank look. He had zero knowledge of computational
geometry. I think the applications do exist and that people are building
their own instead of using what is already out there. That's why I
think Boost is a good place for it.
Luke
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk