
Subject: Re: [Boost-users] [BFL] Efficient neighbourhood search in trees
From: Vincent Spruyt (v.spruyt_at_[hidden])
Date: 2010-11-10 10:49:51


> On Thu, November 04, Simonson, Lucanus J wrote:
>> On Wed, 3 Nov 2010, Jeremiah Willcock wrote:
>>
>>> On Wed, 3 Nov 2010, Jeremiah Willcock wrote:
>>>
>>>> On Wed, 3 Nov 2010, Vincent Spruyt wrote:
>>>>
>>>> [...]
>>>> Currently I naively iterate over each edge. For each edge, I find
>>>> the source and target vertex. Then, for both the source and the
>>>> target vertices, I calculate the average weight in a certain
>>>> neighbourhood using a recursive function as following simplified
>>>> pseudocode indicates:
>>>> [...]
>>>> However, this approach is rather inefficient as a lot of
>>>> calculations are redundant (takes about 10ms for 300 edges which is
>>>> way too slow for the purpose I had in mind). Any ideas for a better
>>>> solution?
>>>
>>> [...]
>> The way I suggested can also count some edge weights multiple
>> times, so the update formula may need to be <v, n+1> = sum(w in
>> neighbors(v), <w, n>) - degree(v) * <v, n-1> or something like that.
>> You might want to do some small examples by hand to make sure you get
>> the formula right.
>
> He said it was a tree, so there shouldn't be multiple paths from any
> vertex to another vertex in its neighborhood. You should be able to
> accumulate total weight in a bottom-up traversal of the topological
> order starting at the leaves. Then do the reverse topological order
> and sum the two. Don't count your own weight twice. Store total
> weight as a vector indexed by depth and shift off the last element,
> then merge the vectors of the out edges and add your own weight at
> the beginning of the vector for each node as you traverse up the
> tree. Delete the vectors on the out edges as you go up to save
> memory.
>[...]

Thank you both for your responses.
I eventually decided to use the naive approach anyway.
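For the archives, the naive approach boils down to a bounded recursion
over out-edges. A simplified sketch in BGL terms (the type and property
names here are illustrative, not my actual code):

#include <boost/graph/adjacency_list.hpp>
#include <boost/tuple/tuple.hpp>

struct EdgeProps { double weight; };

typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS,
                              boost::no_property, EdgeProps> Graph;
typedef boost::graph_traits<Graph>::vertex_descriptor Vertex;

// Sum the weights of all edges within 'depth' hops of v. 'parent' is the
// vertex we arrived from, so we never walk back up (the graph is a tree).
void neighbourhood_sum(const Graph& g, Vertex v, Vertex parent,
                       int depth, double& sum, int& count)
{
    if (depth == 0) return;
    boost::graph_traits<Graph>::out_edge_iterator ei, ei_end;
    for (boost::tie(ei, ei_end) = boost::out_edges(v, g); ei != ei_end; ++ei) {
        Vertex u = boost::target(*ei, g);
        if (u == parent) continue;
        sum += g[*ei].weight;
        ++count;
        neighbourhood_sum(g, u, v, depth - 1, sum, count);
    }
}

double average_neighbourhood_weight(const Graph& g, Vertex v, int depth)
{
    double sum = 0.0;
    int count = 0;
    neighbourhood_sum(g, v, boost::graph_traits<Graph>::null_vertex(),
                      depth, sum, count);
    return count > 0 ? sum / count : 0.0;
}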
Actually, I did implement a dynamic programming version of the
algorithm, but tracking the weights of the 'depth' neighbouring edges
and merging them when paths come together slows the method down a lot.
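To give an idea of the merging overhead: the bottom-up half of that
version looked roughly like the sketch below (again simplified, reusing
the types from the previous snippet; the top-down pass that pushes
ancestor weights back down is analogous).

#include <vector>

// acc[d] = total weight of the subtree edges whose deeper endpoint is
// d+1 hops below v, truncated at max_depth levels.
std::vector<double> collect_up(const Graph& g, Vertex v, Vertex parent,
                               int max_depth)
{
    std::vector<double> acc;
    boost::graph_traits<Graph>::out_edge_iterator ei, ei_end;
    for (boost::tie(ei, ei_end) = boost::out_edges(v, g); ei != ei_end; ++ei) {
        Vertex u = boost::target(*ei, g);
        if (u == parent) continue;
        std::vector<double> child = collect_up(g, u, v, max_depth);
        // Shift the child's vector one level down and put the connecting
        // edge's own weight at the front.
        child.insert(child.begin(), g[*ei].weight);
        if ((int)child.size() > max_depth)
            child.resize(max_depth);
        // Merge sibling vectors element-wise; this per-edge vector work
        // is what ate all the time at small depths.
        if (child.size() > acc.size())
            acc.resize(child.size(), 0.0);
        for (std::size_t d = 0; d < child.size(); ++d)
            acc[d] += child[d];
    }
    return acc;
}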

In the end, the dynamic programming based implementation of the
algorithm is faster than the naive one if a depth of more than 130 is
used, because its complexity is linear. However, in my application I
only need a depth of about 5, in which case the naive approach is way
faster.

As it turns out, the naive implementation does what I need in about
0.7ms and not the 10ms I reported earlier. The 9.3ms difference here
is only because I used a filtered_graph as a view into a larger
graph... which obviously was not a good idea :)

So just copying the tree into a new graph and then naively solving the
problem seems to be the best solution here.
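Concretely, that amounts to something like the following sketch, where
keep_edge stands in for whatever predicate the filtered_graph was using
(again reusing the types from the first snippet):

#include <boost/graph/filtered_graph.hpp>
#include <boost/graph/copy.hpp>

// Placeholder edge filter; stands in for the view's real predicate.
struct keep_edge {
    keep_edge() : g(0) {}
    explicit keep_edge(const Graph* gp) : g(gp) {}
    bool operator()(boost::graph_traits<Graph>::edge_descriptor e) const {
        return (*g)[e].weight > 0.0; // example criterion
    }
    const Graph* g;
};

Graph extract_tree(const Graph& big)
{
    boost::filtered_graph<Graph, keep_edge> view(big, keep_edge(&big));
    Graph tree;
    boost::copy_graph(view, tree); // pay the filtering cost once, up front
    return tree;                   // then run the naive search on 'tree'
}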

Kind regards,
Vincent

