Subject: Re: [boost] another approach to memory management that solves the referential cycles problem
From: Achilleas Margaritis (axilmar_at_[hidden])
Date: 2009-02-25 18:03:36
> I guess that some sort of laziness is required to arrive at an
> acceptable amortized worst case bound for the runtime of the
> pointer/reference operations. So the question for me would be: "Is it
> possible to achieve an acceptable amortized worst case bound for
> runtime complexity of the pointer/reference operations". I have no
> problem if this costs additional memory and involves modification of
> this memory even for "read only" operations.
One 'truth' that can be said about this algorithm is that when an object
is found collectable and deleted, there is no need to check its
children for collectability: they will already have been checked.
> The "checked later for disposal" that is proposed above doesn't look
> "automatic enough" to me, but I haven't yet tried to understand what
> exactly you are proposing as 'lazy' disposal.
I mean that the check of whether the object is collectable or not can be
deferred to a later time, just as garbage collectors do.
> I would interpret this statement in the way that you think that it
> will be difficult to solve the scaling problem.
The scaling problem depends solely on the configuration of the object
graph. The worst case is N objects each pointing to all the others,
i.e. each object holding N-1 pointers.
I am not well versed in graph theory, so perhaps someone else can
enlighten us as to whether some sort of shortest-path algorithm exists
that can be applied to this case. Perhaps an algorithm that assigns
weights to the paths of the graph?
Another approach would be to optimize the graph while scanning it, as I
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk