> Hi, Jeremiah,
> 
> Thank you very much for your feedback.
>
> >>What error do you get?
>
> If I compile it with both edges_equivalent and vertices_equivalent:
>
> mcgregor_common_subgraphs_maximum_unique(*pInputGraphi, *pGraphj, true, my_callback,
>                         edges_equivalent(make_property_map_equivalent(ename_mapi, ename_mapj)),
>                         vertices_equivalent(make_property_map_equivalent(vname_mapi, vname_mapj)));

There are two problems here: "true" and "my_callback" are in the wrong
order (the callback comes first), and different named parameters in
Boost.Graph are separated by periods ("."), not commas (","). See if
those modifications make the code work with both filters enabled.

Thanks. I think the order of "true" and "my_callback" is correct, and I just have to use "." rather than ",".
Now it compiles.
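
For reference, the call that compiles now looks roughly like this (same identifiers as in the quoted code above; the
only change is that the two named parameters are chained with "." into a single argument):

mcgregor_common_subgraphs_maximum_unique(*pInputGraphi, *pGraphj, true, my_callback,
                        // the two named parameters below form one chained expression, joined by "."
                        edges_equivalent(make_property_map_equivalent(ename_mapi, ename_mapj)).
                        vertices_equivalent(make_property_map_equivalent(vname_mapi, vname_mapj)));

(If the header in a given Boost version declares the callback before the bool, "my_callback" and "true" have to be
swapped accordingly.)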



> >>How unique are your edge names? The algorithm gets very slow when it is
> >>hard to distinguish different subgraphs (i.e., if there are a lot of
> >>common subgraphs).
>
> The edge names are not unique. They can be classified into different groups and are identified by a group id.
> For example:
> e 1 7 0
> means there is an edge between vertex "1" and vertex "7", and its name is "0".
> I use integers as the type of the edge names (and the same for the vertex names).
>
> I did another small test, comparing the attached graph (only 26 vertices and 28 edges) with itself, which means there are
> a lot of common subgraphs, and it enters an infinite loop.

Are you sure it is an infinite loop and not just very slow? The
_maximum_unique version of the function needs to find all possible
common subgraphs before it can output any to your callback. Are you
sure you need the maximum-sized subgraph? If so, you will probably need
to wait.
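
If a "large enough" (rather than maximum) common subgraph would do, another option is to call the plain
mcgregor_common_subgraphs and stop early from the callback. Roughly (an untested sketch; the exact callback
signature should be double-checked against the mcgregor_common_subgraphs documentation):

  // Hypothetical callback: stops the search once a common subgraph with at
  // least "threshold" vertices has been reported.
  struct stop_when_big_enough {
    explicit stop_when_big_enough(std::size_t threshold) : m_threshold(threshold) {}

    template <typename CorrespondenceMapFirstToSecond,
              typename CorrespondenceMapSecondToFirst>
    bool operator()(CorrespondenceMapFirstToSecond /*map_1_to_2*/,
                    CorrespondenceMapSecondToFirst /*map_2_to_1*/,
                    std::size_t subgraph_size)
    {
      // Record the correspondence here if needed; returning false
      // terminates the search.
      return subgraph_size < m_threshold;
    }

    std::size_t m_threshold;
  };

This would be passed where my_callback currently goes (with the _maximum_unique suffix dropped from the
function name).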


This problem (maximum common subgraph) is NP-hard, and this algorithm is a backtracking one.
I agree it is probably very slow rather than an infinite loop: from a theoretical complexity standpoint,
if the two graphs are very similar and most of the vertices/edges have the same labels, the execution time
increases rapidly with the problem size.

Here I am using around 30 vertices/edges for the first graph and 20-60 vertices/edges for the second one,
so I think the problem size is already fairly large.

> My question is whether there is a way to speed up this kind of
> comparison with the built-in algorithm.

Any kind of uniqueness that can be used to filter pairs of vertices and
edges between the two graphs so that they do not match will make the
algorithm faster. In your actual application, what graph sizes do you
plan to use (for each side)? Do you really need common subgraphs and
not subgraph isomorphism/pattern matching or whole graph isomorphism?
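
As a sketch of what such a filter looks like (this is essentially what make_property_map_equivalent already
builds for you, so the names here are only illustrative):

  // Hypothetical predicate for vertices_equivalent()/edges_equivalent():
  // two items may only correspond when their integer names/group ids match.
  template <typename NameMapFirst, typename NameMapSecond>
  struct name_equivalent {
    name_equivalent(NameMapFirst m1, NameMapSecond m2) : m_map1(m1), m_map2(m2) {}

    template <typename ItemFirst, typename ItemSecond>
    bool operator()(ItemFirst item1, ItemSecond item2) const {
      return get(m_map1, item1) == get(m_map2, item2);
    }

    NameMapFirst m_map1;
    NameMapSecond m_map2;
  };

The more distinguishing information you can encode in the vertex/edge names (for example, finer-grained group
ids), the more vertex/edge pairs are rejected up front and the smaller the search space becomes.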


Thanks for your suggestion; I am also thinking about this. :)

Have a good evening,

Bin