Hi,

I'm working on a project where I've replaced Microsoft's (terrible) heap allocator with the Doug Lea allocator. There seem to be some cases where the Windows XP allocator is 500 times slower than it should be, which made my program take minutes rather than seconds to run. To try to understand what kind of allocation patterns cause this behaviour, I wrote a small console application (see below).
After playing around with different allocation patterns for half an hour, I found a case where the pathological behaviour appeared, although I'm none the wiser about its cause. For example, moving the malloc(301) statement before the seven malloc(1000) calls makes the problem go away (see the reordered sketch just below)! Given the ease with which I rediscovered the problem, I wonder why there aren't more complaints about Microsoft's heap allocator. With some allocation patterns I have seen the allocator fall below 1 kHz on a 2 GHz Pentium IV machine!
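
For concreteness, here is the reordered version as a stand-alone fragment (a sketch only, with the timing harness stripped out; the full test program is at the end of this message):

#include <stdlib.h>

int main()
{
    const int NUM = 10000;
    static void* L[NUM * 7];
    int n1 = 0;

    for (int i = 0; i < NUM; ++i)
    {
        void* leak = malloc(301);      // moved before the 1000-byte blocks
        (void)leak;                    // deliberately leaked, as in the original test
        for (int j = 0; j < 7; ++j)
            L[n1++] = malloc(1000);
    }
    for (int k = n1 - 1; k >= 0; --k)
        free(L[k]);
    return 0;
}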
I've created a library that exports alternative versions of operator new and operator delete, using a def file as follows:
EXPORTS
??2@YAPAXI@Z
??3@YAXPAX@Z
I've had to do some hacky things to make it work with the STL (because of msvcp60.dll), but that's another story!
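
For completeness, the bodies behind those two exports look roughly like this (a minimal sketch; it assumes dlmalloc was built with USE_DL_PREFIX, so that its entry points are named dlmalloc/dlfree rather than malloc/free):

// Replacement operator new/delete, forwarding to the Doug Lea allocator.
#include <new>
#include <stddef.h>

extern "C" void* dlmalloc(size_t);
extern "C" void  dlfree(void*);

void* operator new(size_t size)          // exported as ??2@YAPAXI@Z
{
    void* p = dlmalloc(size);
    if (!p)
        throw std::bad_alloc();
    return p;
}

void operator delete(void* p) throw()    // exported as ??3@YAXPAX@Z
{
    dlfree(p);
}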
Now my question: I want to use boost, but I need to work out how to make it use a different allocator. Will I need to rebuild the boost DLLs (such as boost_python.dll) against my static library?
Regards,
David Barrett-Lennard
/////////////////////// heap allocation test ///////////////////////////
#include <iostream>
#include <stdlib.h>   // malloc, free
#include <time.h>

int main(int argc, char* argv[])
{
    for (int count = 1; count <= 5; ++count)
    {
        int n1 = 0, n2 = 0;
        clock_t start = clock();
        {
            const int NUM = 10000;
            void* L[NUM * 7];

            for (int i = 0; i < NUM; ++i)
            {
                // Seven 1000-byte blocks, recorded so they can be freed later...
                for (int j = 0; j < 7; ++j)
                    L[n1++] = malloc(1000);

                // ...followed by one 301-byte block that is deliberately leaked.
                void* leak = malloc(301);
                (void)leak;
                ++n2;
            }

            // Free the recorded blocks in reverse order of allocation.
            for (int k = n1 - 1; k >= 0; --k)
                free(L[k]);
        }
        clock_t elapsed = clock() - start;
        if (elapsed)
        {
            // On MSVC, CLOCKS_PER_SEC is 1000, so clock_t ticks are milliseconds.
            std::cout << "Time = " << elapsed << " ms"
                      << " Rate = " << 1000 * (n1 + n2) / elapsed << " Hz"
                      << std::endl;
        }
    }
    return 0;
}