
Subject: Re: [boost] Preview 4 of the Generic Geometry Library (ggl)
From: Simonson, Lucanus J (lucanus.j.simonson_at_[hidden])
Date: 2009-02-18 14:12:56


Barend Gehrels wrote:
>> The issue isn't whether something is there, it is how easy it is to
>> add. If each concept added to the library requires O(n) effort
>> where n is the number of extant concepts in the library it will
>> stifle to ability of the author to extend the library and kill off
>> the growth of the API at whatever level of workload the author can
>> handle.
>>
> One reason to keep the number of geometries low. OGC defines three
> basic ones: point, linestring, polygon. We've added a few more: box
> because it is very useful in many cases, segment because it is used in
> many algorithms, ring because a polygon consists of them. The circle
> is there just because it was always there; actually it would have to be
> worked out into all kinds of curved geometries, another workload.
> Requiring 7x7 distance algorithms, intersections, overlaps, within
> determinations...

You see the problem then. I have interval, point, point3d, rectangle, prism, six kinds of polygon and three types of collection of polygonal data (axis-parallel, 45 degree and arbitrary angle). In order to provide all the expected functions between rectangle, the polygons, and the collections of polygons I have to cover a 10x10 space. If I add triangle it goes to 11x11. Because my template code can generate the 10x10 instantiations of functions with ~O(10) template function definitions, I'm not that intimidated by the idea of adding a triangle concept at some future point. I simply make it a refinement of polygon and it suddenly works with literally hundreds of different functions I've already defined. There are a huge number of algorithms specific to triangles that rightfully belong in a generic geometry library and may even be of use in geo-spatial applications, such as Voronoi diagrams, which are the dual graph of the Delaunay triangulation. The design of the API should not be an undue impediment to extending the library to include functionality that is clearly within its scope.
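
For illustration only, one common way to express that kind of refinement is through tag inheritance (the same effect my own code achieves with metafunctions and SFINAE, as described further down). The names here are invented, not anyone's actual library interface:

// Illustrative tags only -- not a real library's names.
struct polygon_tag {};
struct triangle_tag : polygon_tag {};   // triangle refines polygon

template <typename Geometry>
struct geometry_tag;                    // specialized per user type

// One implementation written against the base tag...
template <typename Geometry>
double area_impl(Geometry const& g, polygon_tag)
{
    /* generic polygon area computation on g */
    return 0.0;
}

// ...and a single forwarding function; every refinement of polygon,
// including a future triangle concept, resolves to it automatically.
template <typename Geometry>
double area(Geometry const& g)
{
    return area_impl(g, typename geometry_tag<Geometry>::type());
}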

> This is really interesting. We went from SFINAE to tag dispatching in
> January and were very very happy with it. Its explanation is a bit long
> to go over all again. However, the main advantage with the new approach
> is that it offers us much more flexibility, such as partial
> specializations for certain cases, or deriving from implementation
> structs. Using SFINAE we had to overload all combinations in separate,
> hardly distinguishable functions. I know we used it in a (slightly?)
> different way than you do, having the BCCL concept checks there as well.
> But it was complicated, often gave compilation problems, and when we
> converted it all this was much clearer, more logical and could be
> extended much more easily.
> If it is adding extra one-line dispatch structs I'll take that for the
> advantages it gives. And maybe we can solve that in another way; we
> haven't worked that out yet.
> It is however interesting that you feel it exactly the other way round,
> while we're working on the same kind of library :-)

I guess we're just solving the same problems in a different order. O(n^2) SFINAE guarded functions would be horribly worse than O(n^2) dispatch functions, no question.

SFINAE needs to be combined with other generic programming techniques in order to solve the O(n^2) function definition problem. One such technique is tag dispatching with concept refinement, as Dave A suggested for the assign() API, which combined a static assert on both arguments with tag dispatching on only one to achieve O(n) definitions. Static assert and SFINAE are interchangeable for this purpose.
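
Roughly, that pattern looks like this; the names below are made up for illustration, not the library's actual interface:

#include <boost/static_assert.hpp>

// Hypothetical tags and traits for illustration.
struct point_tag {};
struct box_tag {};

template <typename Geometry> struct geometry_tag;   // maps a type to its tag
template <typename Geometry> struct is_geometry
{ static const bool value = false; };                // specialized per type

// One dispatch struct per target concept: O(n) definitions, not O(n^2).
template <typename Tag> struct dispatch_assign;

template <> struct dispatch_assign<point_tag>
{
    template <typename Point, typename Source>
    static void apply(Point& p, Source const& s) { /* copy coordinates */ }
};

template <> struct dispatch_assign<box_tag>
{
    template <typename Box, typename Source>
    static void apply(Box& b, Source const& s) { /* copy min/max corners */ }
};

// Single entry point: static assert on both arguments,
// tag dispatch on only one of them.
template <typename Target, typename Source>
void assign(Target& target, Source const& source)
{
    BOOST_STATIC_ASSERT(is_geometry<Target>::value);
    BOOST_STATIC_ASSERT(is_geometry<Source>::value);
    dispatch_assign<typename geometry_tag<Target>::type>::apply(target, source);
}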

My approach to cutting down the order of effort to implement the API uses metaprogramming to anonymize the conceptual type. I can view a single polygon as a collection of polygons containing only that one polygon, for instance, by passing its address and address + 1 into an API that expects a concept that can provide an iterator range over polygons. I look up whether to do that or to call begin() and end() based on a metafunction that tells me which functions to call. I can then pass polygons into APIs that expect collections of polygons using the same default traits definition for both. I guard such APIs with an is_polygon_set_type<T> check. This metafunction lookup with SFINAE is equivalent to making the polygon concept a refinement of collection of polygons through inheritance of tags.
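
In rough outline it amounts to something like the following; the names are invented for illustration and are not my actual traits:

#include <boost/utility/enable_if.hpp>

// Hypothetical concept tags and traits, for illustration only.
struct polygon_concept {};
struct polygon_set_concept {};

template <typename T> struct geometry_concept;      // specialized per user type

// Default traits: a single polygon is viewed as a range of one polygon.
template <typename T, typename C = typename geometry_concept<T>::type>
struct polygon_set_traits
{
    typedef T const* iterator_type;
    static iterator_type begin(T const& polygon) { return &polygon; }
    static iterator_type end(T const& polygon)   { return &polygon + 1; }
};

// A real collection of polygons exposes its own iterators.
template <typename T>
struct polygon_set_traits<T, polygon_set_concept>
{
    typedef typename T::const_iterator iterator_type;
    static iterator_type begin(T const& set) { return set.begin(); }
    static iterator_type end(T const& set)   { return set.end(); }
};

// Guard metafunction; specializations (omitted) inspect geometry_concept<T>.
template <typename T, typename Enable = void>
struct is_polygon_set_type { static const bool value = false; };

// An algorithm written once against the iterator range accepts both a
// single polygon and a collection of polygons, guarded by SFINAE.
template <typename T>
typename boost::enable_if<is_polygon_set_type<T>, double>::type
area(T const& polygons)
{
    double sum = 0.0;
    typename polygon_set_traits<T>::iterator_type
        it  = polygon_set_traits<T>::begin(polygons),
        ite = polygon_set_traits<T>::end(polygons);
    for (; it != ite; ++it) { /* sum += area of *it */ }
    return sum;
}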

>> If you have cartesian and spherical coordinate system polygons with
>> and without holes you have four polygon flavors.
> It does not feel to me that that is the problem. For example area. A
> holey polygon just subtracts its inner rings' areas from the area of its
> outer ring. There is only one area implementation strategy for
> cartesian. Then there is indeed one other strategy for spherical
> rings. However, for the (holey) polygon there is only one
> implementation: no double code, not even two dispatch structs, not a
> single line. The coordinate systems are handled at another level,
> distinguished by strategies. (OK, that is done by tag dispatching as
> well, at another level, but it does not relate to the holes anymore.)
> So also for simplify, or length, you'll find no code targeted to a
> coordinate system.
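
If I follow the design you describe, it amounts to something like this (the names are invented purely to illustrate the structure, not your actual code):

// The polygon-level area is written once; the coordinate system only
// enters through the ring-level strategy (cartesian or spherical).
template <typename Polygon, typename Strategy>
double polygon_area(Polygon const& polygon, Strategy const& strategy)
{
    double a = strategy.apply(polygon.outer());            // outer ring area
    typedef typename Polygon::ring_collection rings;
    for (typename rings::const_iterator it = polygon.inners().begin();
         it != polygon.inners().end(); ++it)
    {
        a -= strategy.apply(*it);                          // subtract each hole
    }
    return a;
}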
 
Ah, yes, that is different from the 90, 45 and general polygons. Your cartesian and spherical polygons can each be converted to the other, putting them together in an equivalence class, whereas conversion from 45 to 90 is not generally allowable, making them equivalent in some contexts but not in others.

Regards,
Luke

