
From: Jean-Louis Leroy (jl_at_[hidden])
Date: 2025-05-02 23:02:08


Thank you for the analysis, Joaquin.

I am beginning to remember... I wrote that part in 2018. My reasoning was that
there would be relatively few type_infos, and that they would be confined to the
initialized data segment. I could see that they looked consecutive, but I didn't
want to count on that. Also, in the presence of shared libraries, I would expect
multiple disjoint ranges of type_infos.

Now that I have customization points, I can revisit the idea, e.g. a facet that
uses P(X) = (X - min(Xi)) / sizeof(type_info).
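
For illustration, here is a minimal sketch of such a facet; the names
(linear_type_hash, the example classes) are mine, not the library's. It hashes
a type_info address X as (X - min(Xi)) / sizeof(type_info), which only works
well under the assumption stated above: that the registered type_infos sit in
one contiguous-ish range rather than in several disjoint ones.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <typeinfo>
#include <vector>

// Hypothetical facet: P(X) = (X - min(Xi)) / sizeof(type_info).
// Since every concrete type_info object is at least sizeof(std::type_info)
// bytes, distinct, non-overlapping objects map to distinct buckets.
struct linear_type_hash {
    std::uintptr_t base = 0;

    void initialize(const std::vector<const std::type_info*>& tis) {
        base = reinterpret_cast<std::uintptr_t>(tis.front());
        for (auto ti : tis)
            base = std::min(base, reinterpret_cast<std::uintptr_t>(ti));
    }

    std::size_t operator()(const std::type_info& ti) const {
        return (reinterpret_cast<std::uintptr_t>(&ti) - base)
            / sizeof(std::type_info);
    }
};

// A few polymorphic classes whose type_infos we can hash.
struct Animal { virtual ~Animal() = default; };
struct Dog : Animal {};
struct Cat : Animal {};
```

The smallest registered address hashes to 0, and the table size is bounded by
the span of the range divided by sizeof(type_info), which is why a few large
disjoint ranges (from shared libraries) would make the table needlessly sparse.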

One of the motivations for the customization points was my concern that, some
day, someone would come up with a set of values for which the factors could not
be found in a reasonable amount of time. In that case we could fall back on
vptr_map. Used in conjunction with virtual_ptr, the cost of the lookup can be
amortized over many calls.
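
A sketch of what that amortization looks like; this is illustrative pseudo-API,
not OpenMethod's actual interface. The map lookup happens once, when the
virtual_ptr-like wrapper is constructed, and every subsequent call through the
wrapper reuses the cached dispatch data.

```cpp
#include <cassert>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

// Stand-in for a pointer to per-class dispatch data (a method table).
using mptr = const void*;

// Hypothetical "vptr_map" fallback: an ordinary hash map keyed by type.
inline std::unordered_map<std::type_index, mptr>& vptr_map() {
    static std::unordered_map<std::type_index, mptr> map;
    return map;
}

struct Animal { virtual ~Animal() = default; };
struct Dog : Animal {};

// virtual_ptr-style wrapper: one map lookup at construction, then N calls
// through the same wrapper pay nothing further for vptr acquisition.
template<class T>
struct virtual_ptr_like {
    T* obj;
    mptr mtbl;  // cached dispatch data

    explicit virtual_ptr_like(T& o)
        : obj(&o), mtbl(vptr_map().at(std::type_index(typeid(o)))) {}
};
```

Because typeid applied to a polymorphic reference yields the dynamic type, the
wrapper caches the dispatch data of the most-derived class, as dispatch needs.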

I have an RDTSC-based benchmark that shows that the performance hit would be
small, if benchmarks can be believed. It compares the cost of a method call
with one virtual argument against that of a virtual function call, using
different vptr acquisition strategies:
https://gist.github.com/jll63/c06e33b4dba3702839c2454edc09a958
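
The gist has the real harness; the skeleton of an RDTSC measurement of virtual
calls looks roughly like this. It assumes an x86 target (for __rdtsc) and
omits the serialization fences (RDTSCP/CPUID) that a careful benchmark adds.

```cpp
#include <cstdint>
#include <x86intrin.h>  // __rdtsc; x86-specific, an assumption about the target

struct Animal {
    virtual ~Animal() = default;
    virtual int speak() const { return 1; }
};
struct Dog : Animal {
    int speak() const override { return 2; }
};

// Time n virtual calls in TSC cycles.
inline std::uint64_t time_virtual_calls(const Animal& a, int n) {
    std::uint64_t start = __rdtsc();
    int sink = 0;
    for (int i = 0; i < n; ++i)
        sink += a.speak();
    std::uint64_t stop = __rdtsc();
    // Keep the result observable so the loop is not optimized away.
    __asm__ __volatile__("" : : "r"(sink));
    return stop - start;
}
```

Dividing the result by n gives a per-call cycle count that can be compared
across strategies, e.g. a plain virtual call versus a method call that first
acquires the vptr through a hash or a map.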


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk