Subject: Re: [boost] another approach to memory management that solves the referential cycles problem
From: Achilleas Margaritis (axilmar_at_[hidden])
Date: 2009-02-25 14:35:25
> I think it would be great, if the proposed solution could be made to
> work. As you point out, there would be good reasons to use advanced
> memory management schemes that don't require weak pointers. However,
> "made to work" includes an acceptable amortized worst case bound for
> the runtime of the pointer/reference operations.
> I have a bit the impression that releasing the reference to an object
> deep down in the object graph might take O(N) operations, at least
> for the provided "prototype" implementation. I might be wrong, or it
> might simply be a bug in the "prototype" implementation that can
> easily be fixed.
You are correct. That is why the proposed solution uses 'lazy'
disposal: the object is placed in a container and checked later for
disposal. When checked like this, an object can be marked as reachable
or unreachable (i.e. the result of the scanning can be cached),
minimizing even further the number of operations required.
The lazy disposal solution has roughly the same amortized complexity as
the traditional garbage collector. In fact, I believe that backwards
scanning (i.e. my solution, scanning from objects to roots) and forward
scanning algorithms cost the same: in either case, the graph of objects
is traversed and each node is visited once. That holds, at least, when
the number of objects to be kept around is equal to or higher than the
number of objects that are unreachable.
Perhaps there is a way to minimize the cost of the disposal algorithm:
the recursive algorithm that checks if an object is collectable can also
delete the object, if it finds that there is no path to a root ptr.
Another possibility is to cache the result of the collectable()
function in a flag in the object, and then during release to use that
flag instead of calling collectable() again.
Another possible optimization: if root ptrs are added to the front of
the ptr list of each object, the path to the roots is minimized. The
list of pointers could also be sorted while it is scanned: root
pointers could be moved to the front, so that the shortest path to the
roots is always checked first.
Finally, the algorithm 'collectable()' could return the number of steps
to the root along each path, and the list of ptrs could be reordered
accordingly while it is scanned: shortest path goes first.
> I also have the impression that the provided "prototype"
> implementation can sometimes detect a cycle and conclude from this
> cycle that an object is collectable, even if the cycle still has a
> path to a root reference. But again, I might be wrong, or it might
> simply be a bug in the "prototype" implementation that can easily be
> fixed.
Can you be a little more specific? The provided example has many cycles
in it, yet all objects are collected.
The function 'collectable()' stops and returns false only if there is a
path to a root ptr. If a cycle is encountered, that branch returns
true, and the algorithm therefore continues until all paths are
exhausted.
>> The idea is not considered a panacea for all memory management
>> problems. It is only complementary to shared ptrs and manual memory
>> management.
> Can you be more explicit about the "known" problems of your approach?
Besides the obvious problem of scaling mentioned above, there might be
some other issues:
1) It takes more memory: each ptr becomes 16 bytes long, and each
object gets an extra 12 bytes.
2) It may be difficult to make it thread safe.