From: Dave Abrahams (abrahams_at_[hidden])
Date: 1999-12-09 09:01:24
>>As far as I understand the term, a node descriptor is just what you get from
>>dereferencing a node iterator. If you can get to the property from the
>>descriptor, you can of course get there from the iterator.
> ... because you can dereference the iterator?
Or because you can get at the descriptor with a property accessor. Take your
pick of interfaces.
> Or because the property
> accessor handles both iterators and descriptors? There is a subtle
> difference between these two: The former assumes that there is indeed
> a dereference operator ('operator*()') while the latter does not.
I'm not sure I understand your latter alternative. Is it just the same as
what I was saying?
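To make the two interface options concrete, here is a sketch in Python-ish code (all names are my own, not part of any proposal): descriptors are plain ints, an iterator wraps a descriptor, and a property accessor can either be indexed by the dereferenced descriptor or accept the iterator directly.

```python
# Hypothetical sketch: node descriptors are plain ints; a node iterator
# wraps a descriptor.  A property accessor can expose either interface.

class NodeIterator:
    def __init__(self, node):
        self.node = node          # the descriptor this iterator refers to

    def deref(self):              # stands in for C++ operator*()
        return self.node

    def next(self):               # stands in for C++ operator++()
        return NodeIterator(self.node + 1)

class PropertyAccessor:
    """Maps node descriptors to property values (here, a list index)."""
    def __init__(self, values):
        self.values = values

    def __getitem__(self, key):
        # Accept either a descriptor or an iterator: the accessor, not
        # the iterator, decides how to get at the descriptor.
        if isinstance(key, NodeIterator):
            key = key.deref()
        return self.values[key]

cost = PropertyAccessor([10, 20, 30])
it = NodeIterator(1)
assert cost[it.deref()] == 20   # via dereference
assert cost[it] == 20           # via the accessor handling iterators
```

Either way the algorithm only ever names the node; where the dereference happens is an interface choice.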
> I'm still not convinced that a dereference operator for the iterators is
> really the right thing: Currently I'm using an int, e.g., for the node iterator
> in my grid graph. This works just fine...
Right. A dereference operator would force users to write a complex (?)
wrapper. Of course, a templated wrapper which always returned itself as the
proxy might be possible....
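The int-as-iterator point can be made concrete. The sketch below is my guess at the kind of grid graph meant (not Dietmar's actual code): nodes of a W x H grid are numbered 0..W*H-1, "advancing the iterator" is integer increment, and a property accessor is simply an array indexed by the int.

```python
W, H = 4, 3                       # hypothetical 4x3 grid graph

def nodes():
    return range(W * H)           # the "node iterator" is just an int

def neighbors(n):
    # Edges are implicit in the grid structure; no stored adjacency.
    x, y = n % W, n // W
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H:
            yield ny * W + nx

# A property accessor is then simply an array indexed by the int.
label = [None] * (W * H)
for n in nodes():
    label[n] = (n % W, n // W)

assert label[5] == (1, 1)
assert sorted(neighbors(0)) == [1, 4]
```

No dereference operator appears anywhere; the int already distinguishes the node within this graph.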
>>But I don't see the use of the node descriptor to the graph algorithm. I
>>thought you had the same position. Has that changed?
> Yes, this has changed in some way: I was aware of the use of node and
> edge descriptors as an optimization technique (descriptors at least
> sometimes contain less data than iterators) but in a recent project it was
> necessary to introduce this concept to cope with certain temporary
> structures (the graph's structure was modified in the nodes on the fly
> in certain situations but this was just a temporary view). Descriptors
> are useful in several contexts. For example, in priority queues it is often
> sufficient to store a descriptor. ... or the reference to the parent node in
> a spanning tree doesn't have to be an iterator, nor do the edges along a path.
You've lost me here. In what way would these differ from iterators? In the
sense that your iterators might be ints, but that you might need to get at
the actual "contents" of an edge for a path description?
I think a better name is needed for "descriptor". It is far too general.
Here's how I understand it. Please correct as necessary and fill in holes:
node iterator = something which can be incremented through the collection of
all nodes and whose value uniquely distinguishes a node within the context
of a particular graph
[do you expect to be able to compare node iterators for equality?
What about op<()?
What about searches where the space of all nodes isn't easily captured --
answer this at the bottom!]
Important: you may need the graph object to get at the "contents" of the
node (this includes the node's edge list) from the node iterator.
node descriptor = something which refers to the "contents" of the node, as
opposed to its identity.
[Can you use a descriptor to distinguish identity?
This one is really a vague concept to me... please elaborate]
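Here is one reading of the distinction, as a sketch (purely my interpretation, offered for correction): a descriptor uniquely names a node within one graph and supports equality comparison, but reaching the node's "contents" -- including its edge list -- requires the graph object as well.

```python
class Graph:
    """Adjacency-list graph; a node descriptor is just an index."""
    def __init__(self, adjacency):
        self.adjacency = adjacency      # the contents live in the graph

    def out_edges(self, node):
        # Getting at the "contents" (here, the edge list) requires
        # the graph object, not just the descriptor.
        return self.adjacency[node]

g = Graph({0: [1, 2], 1: [2], 2: []})

u, v = 0, 0
assert u == v                    # descriptors can be compared for identity
assert g.out_edges(u) == [1, 2]  # but contents come via the graph
```

On this reading a descriptor does distinguish identity (within one graph); what it lacks is any way to be advanced through the node collection.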
> However, this discussion is actually independent from the property accessor interface.
>>I hate the names "head" and "tail" for edge start and end points. They have
>>the opposite meaning for the arrows that we usually use to draw edges with
>>(the head is the pointy end of the arrow).
> I'm using them because they are short. Source and destination are fine
> with me and I don't really care much about these names.
They confuse me when I'm reading your posts. How about (start, end) or (src,
dest) for short pairs?
>>Do a best-first search over a graph labeling each node with its minimum cost
>>from a start node, while AT THE SAME TIME copying the graph into a new
>>structure. Here's some simple pseudo-code for the search (almost Python!):
>>bfs(first, visit, cost)
> "bfs" is a bad name for a best first search: Normally BFS refers to
> breadth first search, i.e., one that uses a FIFO, not a priority queue.
Touché. If I can criticize your names, you can criticize mine. But I was
only laying out the algorithm so readers could think about the issues.
>> q.push(0, first)
>> while (!q.empty()):
>>   c, n = q.pop()
>>   if not visited(n):
>>     if (visit(n)): return
>>     for e in n.outgoing():
>>       q.push(c + cost(e), e.destination)
> This does not copy the graph, does it?
'course not. It is just the canonical best-first algorithm with no copying.
>>There are a number of problems here. To begin with, you want to label nodes
>>with costs, so the visit function needs to get the cost somehow.
> 'visit' is a "callback". I wouldn't do it this way... I would rather use an
> algorithm iterator.
Yes, but you had given little detail up to that point.
>> Also, the
>>obvious place to copy the graph would be the "visit" function, but you don't
>>get an edge there so you'd have to distort the algorithm so it visits edges.
> Yes, searches normally should put edges into the queue, stack,
> whatever: Just getting the next node is normally quite useless. You
> can provide some label to the nodes but this is about it. You can't
> even create a path this way because you can't tell from which node
> the current node was reached, not to mention that there are normally
> multiple edges between nodes (at least between some of the nodes in
> a graph).
>>OK, no problem... except that you don't visit nodes which have already been
>>visited so you would only ever get a single incoming edge in the graph's
>>"copy" (it would be a tree).
I didn't notice that you had any way to address this below. The crucial
difference is whether we are marking edges or nodes as visited, I think.
>> Dietmar's "algorithm iterator" might begin to
>>address the issue, but I wonder about its applicability in algorithms that
>>don't have a single, obvious "main loop" with an obvious point at which to stop.
> It is indeed not at all easy to determine the right points where an
> algorithm should stop and there are often multiple points. The
> approach I'm taking in these cases is Duff's Device in the advance
> function (eg. operator++()): A template argument is used to select
> the break points using a bitmask. Each break point corresponds
> to a state in an underlying state machine defined by the algorithm
> (the state machines are there but normally not explicitly mentioned
> in the description of the algorithms). If the algorithm breaks in a
> certain state, this state is remembered and later used as an entry
> point into the algorithm. In this context, it is convenient to use
> Duff's Device to resume processing at the right point.
Doesn't this slow down the whole process somewhat? Since you go through the
same interface (op++()) to resume at any point, the algorithm would need to
dispatch to the correct phase at run time each time op++ was called.
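Python generators give a cheap model of the resumable-algorithm idea, since the language itself remembers the resume point (which sidesteps the dispatch question here; in C++ the Duff's Device jump table plays that role). The bitmask-of-break-points scheme below is my reading of Dietmar's description, not his code:

```python
import heapq

# Break points as a bitmask; the traversal suspends only at selected states.
EXAMINE_NODE = 1
TREE_EDGE    = 2

def search_iterator(first, outgoing, breaks):
    """Best-first traversal that suspends at the chosen break points.
    Each yield corresponds to a state of the underlying state machine."""
    seen = set()
    q = [(0, first, None)]           # (cost, node, predecessor)
    while q:
        c, n, pred = heapq.heappop(q)
        if n in seen:
            continue
        seen.add(n)
        if breaks & EXAMINE_NODE:
            yield ('examine', n, c)
        if pred is not None and breaks & TREE_EDGE:
            yield ('tree_edge', pred, n)
        for dest, w in outgoing(n):
            heapq.heappush(q, (c + w, dest, n))

graph = {0: [(1, 4), (2, 1)], 1: [], 2: [(1, 1)]}
events = list(search_iterator(0, lambda n: graph[n], EXAMINE_NODE))
assert events == [('examine', 0, 0), ('examine', 2, 1), ('examine', 1, 2)]
```

Calling next() on the generator is the analogue of op++(): processing resumes exactly where it last yielded, at whichever break points the mask selected.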
> Since this thread is about the interface for property accessors, I'm
> not giving an example here. I can give examples in a thread discussing
> details of algorithm implementations...
>>I suggest we look at the above case carefully before pinning down too much.
>>It seems a tough nut to crack.
> It is a thought worth considering but I think it does not influence
> the interface for property accessors... Actually, I think it only has
> to be considered when actually implementing the algorithm and is
> also independent from the interface used by the algorithms.
Yes, I'm completely off-thread-topic.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk