Subject: Re: [boost] [SoC] BGL project idea
From: Eric Wolf (eric_at_[hidden])
Date: 2010-07-19 15:54:58
Dear Jeremiah,
Jeremiah Willcock <jewillco_at_[hidden]> writes:
>>>>> Is there a reason you're not using the current transitive_closure
>>>>> algorithm in BGL?
>>
>> An already computed transitive closure can't be used; the algorithm has
>> to compute the closure itself in all cases, since it sometimes must check
>> whether an edge is already in the closure being built.
>
> I don't understand what you're saying, and it seems to be contradicted
> by your next part about using Vladimir's transitive_closure code.
I meant to say that I can't compute the reduction of a graph from an
existing transitive closure. One could say, I think, that the algorithm
computes the transitive closure and from time to time writes an edge to
the transitive reduction. But the decision whether to write an edge to
the transitive reduction isn't based on properties of the transitive
closure itself, but on properties of the state of the process that
constructs the closure.
>>> I don't remember if that was just for the condensation graph or if it
>>> was a transitive closure of the whole input graph.
>>
>> Just for the condensation graph.
>
> That's what I thought. BTW, did you see
> <boost/graph/create_condensation_graph.hpp>? Is anything in there
> useful to you?
Yes! Great. Why didn't Vladimir use that in his transitive_closure.hpp?
Maybe transitive_closure predates create_condensation_graph.hpp?
I wonder, though: there doesn't seem to be any documentation for it?
>>> You also need an explicit condensation graph as temporary storage for
>>> doing the reduction, too, or am I mistaken?
>>
>> I'm not sure about this. The transitive_reduction of the condensation
>> graph is only touched to insert edges. It is not read in the "core"
>> algorithm. It is needed, to construct the transitive_reduction of the
>> original graph I think. (If the reduction of the original graph is
>> stored as an explicit graph, or if I write vertex pairs to an output
>> iterator isn't important though.)
>
> So you don't need to read back any of the transitive reduction of the
> condensation graph as part of creating it?
I don't need to read back any of the transitive reduction of the
condensation graph "as part of creating the transitive reduction of the
condensation graph." Phrased like that, it is true.
>>> 2 (harder, but not because of template tricks). Create two
>>> semi-implicit graphs, one for the closure and one for the reduction;
>>> return both from your algorithm. In this case, your graphs would only
>>> store a map from each vertex to its component (i.e., vertex in the
>>> condensation graph), which vertex is the representative of its
>>> component (for reductions), and the condensation graph (either its
>>> closure or reduction). You would then need to do the reduction even
>>> if the user doesn't need it, but you would not need to store the
>>> reduction or closure of anything but the condensation graph. You
>>> could also just have two variants of the algorithm, one that creates
>>> the condensation graph and its transitive closure, and another that
>>> creates the condensation graph, its closure, and its reduction.
>>> That will likely save you code still, and named parameters can be used
>>> to determine in one function whether the reduction is necessary to
>>> compute (and this can probably be done with a run-time if statement on
>>> a property of a template argument, without needing any fancy
>>> metaprogramming beyond type_traits).
>>
>> And two functions to construct the reduction and the closure from
>> the cond.gr.rd. and the cond.gr.cl. and the mappings. Yeah, that's a
>> possible way.
>
> I don't mean two functions. Here's what I had in mind in more detail:
>
> Assume:
> - component == SCC
> - G is the graph
> - .+ is the transitive closure operator
> - .- is the transitive reduction operator
> - C is the condensation graph
> - comp(v) maps from a vertex in G to its corresponding component in C
> - next(v) maps from a vertex in G to another vertex in the same
> component (creating a loop in each component)
> - rep(v) is a bit flag that is true for one vertex in G in each component
>
> Now, as far as I understand, an edge (v, w) is in G+ iff (comp(v),
> comp(w)) is an edge in C+. An edge (v, w) is in G- iff either w =
> next(v) or rep(v) and rep(w) and (comp(v), comp(w)) is an edge in C-.
> You can define a graph type based on those computations. For the
> closure, you would store C+ and comp() explicitly, and then compute
> things like out_edges(v, G+) based on those using the formula above.
> For the reduction, you would store C-, comp(), next(), and rep(), then
> use the formulas to compute out_edges(v, G-). The other graph
> operations like vertices() would just forward to the original G.
> Using that kind of approach, you can create graph objects (things that
> model the graph concepts) that can be used to get the transitive
> closure and reduction of G while only storing information about the
> closure and reduction of C and a couple of other property maps on G.
Ah, yes, that would be quite cool. Sounds like work, though. What about
internal properties of a graph? I don't understand those very well, and
I could imagine that it would be better not to try to provide them
through G+ or G-, since the type of G+ or G- would need to carry all
those internal properties.
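As far as I understand them, the membership formulas could be sketched
like this (illustrative names only, not BGL API; I assume C+ stores a
self-loop for every non-trivial SCC, and that next(v) == v marks a
singleton component):

```cpp
#include <set>
#include <utility>
#include <vector>

// Illustrative data as it would come out of a strong-components pass;
// none of these names are actual BGL API.
using EdgeSet = std::set<std::pair<int, int>>;

struct CondensationData {
    std::vector<int> comp;        // comp(v): SCC index of vertex v
    std::vector<int> next_in_scc; // next(v): next vertex on a cycle through v's SCC
    std::vector<bool> rep;        // rep(v): true for one representative per SCC
    EdgeSet c_plus;               // transitive closure of the condensation graph
    EdgeSet c_minus;              // transitive reduction of the condensation graph
};

// (v, w) is an edge of G+ iff (comp(v), comp(w)) is an edge of C+.
bool in_closure(const CondensationData& d, int v, int w) {
    return d.c_plus.count({d.comp[v], d.comp[w]}) != 0;
}

// (v, w) is an edge of G- iff w == next(v) (an intra-SCC cycle edge), or
// both endpoints are representatives and (comp(v), comp(w)) is in C-.
bool in_reduction(const CondensationData& d, int v, int w) {
    if (d.next_in_scc[v] == w && v != w) return true; // skip singleton self-loops
    return d.rep[v] && d.rep[w] && d.c_minus.count({d.comp[v], d.comp[w]}) != 0;
}
```

With a tiny example graph (one SCC {0, 1} plus singletons 2 and 3),
in_reduction(0, 1) holds via the intra-SCC cycle edge, while
in_reduction(1, 2) does not, because vertex 1 is not its component's
representative.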
Should the original graph be copied into the result of the algorithm? For
example, to avoid lifetime issues, or the case of a mutable graph that is
changed after the algorithm has been applied?
Or would it be sufficient to state in the documentation that a change to
a mutable graph invalidates the decorator, and that the user has to make
sure the original graph lives longer than the decorator?
With boost/iterator/iterator_adaptor.hpp the iterators should be
manageable, but I doubt that I can write an (for example, out_edge)
iterator with an amortized constant-time operator++, since every out-edge
of a vertex has to be tested by the formulas above. So if one or more
edges don't pass the test (i.e., are not edges of the transitive
reduction), operator++ will not be constant time. Is that an issue? Do
you have an idea how to tackle that?
Thinking about it, the result of the algorithm could be a factory for
filter_iterator or something similar ...
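A minimal filter-iterator sketch (hypothetical; boost::filter_iterator
works along these lines) shows why operator++ is constant only when
amortized over a whole traversal:

```cpp
#include <vector>

// Minimal filter-iterator sketch (names are mine). operator++ advances
// past elements that fail the predicate, so a single increment may scan
// several underlying elements; only a full traversal is O(number of
// candidates) overall.
template <class It, class Pred>
struct FilterIter {
    It cur, last;
    Pred pred;
    FilterIter(It first, It end, Pred p) : cur(first), last(end), pred(p) { skip(); }
    void skip() { while (cur != last && !pred(*cur)) ++cur; }
    FilterIter& operator++() { ++cur; skip(); return *this; }
    bool at_end() const { return cur == last; }
    auto operator*() const { return *cur; }
};
```

For the transitive reduction, the predicate would be the edge-membership
test from the formulas above, applied to each candidate out-edge of the
underlying graph.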
>>> Your options:
>>>
>>>> 1) Define a Graph type which absorbs all add_vertex and add_edge
>>>> operations like /dev/null in unix, then write one implementation
>>>> function like ...
>>>
>>> I don't like this option because I don't think the reduction can be
>>> removed completely, since I believe you need an explicit graph for the
>>> reduction when you are computing (you use already filled-in parts of
>>> it to create other parts, right?).
>>
>> Nope. The reduction of the condensation graph is only written, not read
>> in the core algorithm. It gets read if one tries to build the transitive
>> reduction of the whole graph.
>>
>> But I don't like that option either. It is ugly.
>
> OK. When do you need to read it back to produce the closure of the
> full graph? Don't you just need equivalents of the comp(), next(),
> and rep() maps that I talked about above, plus a way to emit the edges
> of the reduction of the condensation graph?
Yes, that's exactly right.
>>>> 2) Use boost::enable_if like that: (In the following ... denotes an
>>>> ellipsis of zero or more parameters, not varargs.) ...
>>>
>>> This option is feasible and probably easier than you think it is;
>>> named parameters will take care of the dispatching, so you probably
>>> won't need enable_if.
>>
>> If you think it is feasible, then I would like to take that route. One
>> concern still remains, in my view: if the user doesn't want the
>> transitive reduction, an unused local variable remains in the main
>> implementation function ...
>>
>> Can that be removed somehow by fancy template stuff?
>
> Something simple like the if statement I showed above should be enough
> to deal with that issue. If not, it's pretty easy to use mpl::if_ to
> remove the reduction code more completely.
>
>> What's your favorite choice?
>
> I like the implicit graph idea because of its elegance and reduction
> in output size, but it is more work to implement (the formulas I put
> down above are fairly simple but turning them into iterators is more
> work; if you want to go this way, I can give more detailed pseudocode
> closer to what you actually need to write to model BGL concepts).
Hmm. Are there documented BGL iterator concepts? I couldn't find those.
> Otherwise, either streaming edges to an output iterator (my option 1)
> or one of your options is reasonable. Your option 1 is probably
> equivalent to your option 2, since you can special-case the code based
> on your black-hole graph type and so not actually do the graph
> insertions when it is used; remember too that named parameters can be
> used so the user doesn't actually see the black-hole graph.
I think for my option 2 there is no need for a black-hole graph type,
as the DONT_COMPUTE_TRANSITIVE_CLOSURE type is only used by the
interface functions and not by any function a user should access. So
there is no need to supply all the get, set, vertices, add_edge,
... functions.
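A sketch of that dispatch, with hypothetical tag names (not the real
interface), using only the run-time if on a property of the template
argument that you suggested:

```cpp
#include <type_traits>

// Hypothetical tag-dispatch sketch: the interface function inspects a
// tag type and skips the reduction work, so no black-hole graph
// modeling the full Graph concept is needed.
struct compute_reduction_tag {};
struct skip_reduction_tag {};

template <class Tag>
int run_algorithm(Tag) {
    int edges_emitted = 10; // stand-in for the closure computation
    // run-time if on a property of the template argument, using only
    // <type_traits>; the dead branch is trivially optimized away
    if (std::is_same<Tag, compute_reduction_tag>::value) {
        edges_emitted += 3; // stand-in for emitting the reduction too
    }
    return edges_emitted;
}
```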
My summary: your option 1 is easiest and probably best. Your option 2
makes for a nice learning experience.
I'll try to do your option 2. If you have any ideas on how to make the
operator++ of the iterators amortized constant time, let me know.
Yours,
Eric
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk