Subject: Re: [boost] Review of a safer memory management approach for C++?
From: Bartlett, Roscoe A (rabartl_at_[hidden])
Date: 2010-06-04 20:55:12
David,
> -----Original Message-----
> From: David Abrahams [mailto:dave_at_[hidden]]
> Sent: Friday, June 04, 2010 4:47 PM
> To: Bartlett, Roscoe A
> Cc: boost_at_[hidden]
> Subject: Re: Review of a safer memory management approach for C++?
>
> At Fri, 4 Jun 2010 13:46:53 -0600,
> Bartlett, Roscoe A wrote:
> >
> > I am fine with the language allowing for undefined behavior.
> > Clearly if you want the highest performance, you have to turn off
> > array-bounds checking, for instance, which allows for undefined
> > behavior. What I am not okay with is people writing programs that
> > expose that undefined behavior, especially w.r.t. to usage of
> > memory. Every computational scientist has had the experience of
> > writing code that appeared to work just fine on their main
> > development platform but when they took to over to another machine
> > for a "production" run on a large (expensive) MPP to run on 1000
> > processors, it segfaulted after having run for 45 minutes and lost
> > everything. This happens all the time with current CSE software
> > written in C, C++, and even Fortran. This is bad on many levels.
>
> Absolutely. People need better tools for detecting memory usage
> errors. I would rather rely on tools like purify than build this sort
> of thing into a library dependency, but others' mileage may vary.
[Bartlett, Roscoe A]
As described in Section 3.2 in the Teuchos MM report:
http://www.cs.sandia.gov/~rabartl/TeuchosMemoryManagementSAND.pdf
tools like valgrind and purify miss too many errors to be relied on by themselves, and they will *never* catch semantic memory usage errors.
Also, as described in Section 5.11.5, tools like valgrind and purify are far too expensive to run on anything but the smallest toy problems, so no one will ever run them as part of (pre- or post-push) CI testing. If your memory problem only manifests itself on a larger problem, there is no way you can run valgrind or purify on the code. With the Teuchos MM approach, the debug-mode overhead is typically less than a factor of 10 (sometimes much less), so you can almost always afford to run a debug-mode build using the Teuchos MM classes, even on the largest problems. The Teuchos MM debug mode is so cheap that it is actually built into the Trilinos pre-push testing process that every developer runs before pushing to the main repo. Try that with valgrind and purify.
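To make the kind of checking concrete, here is a minimal sketch. The class names are the standard Teuchos ones (Teuchos::Array, Teuchos::ArrayView), but the exact debug-mode diagnostics in the comments are my reading of the report, so treat it as illustrative rather than normative:

   #include "Teuchos_Array.hpp"
   #include "Teuchos_ArrayView.hpp"

   void illustrateDebugModeChecking()
   {
     Teuchos::Array<double> a(10, 0.0);    // owning array of 10 doubles
     Teuchos::ArrayView<double> av = a();  // non-owning view into 'a'
     a.resize(20);                         // may reallocate; 'av' now dangles
     // In a debug-mode build, the next access is intended to throw a
     // descriptive exception right at the point of misuse; in an
     // optimized build it runs at raw-pointer speed with no checks.
     double x = av[0];
     (void)x;
   }

Note that valgrind or purify would only complain here if the reallocation actually moved the memory and the read landed outside a live allocation; the semantic error (using a view after the container changed) exists either way.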
> > Value types, by definition, involve deep copy which is a problem
> > with large objects. If a type uses shallow copy semantics to avoid
> > the deep copy then it is really no different than using shared_ptr
> > or RCP.
>
> The whole notion of deep-vs-shallow copy is a fallacy. If you nail
> down what it means to make a "copy"---which will force you to nail
> down what constitutes an object's value---you'll see that.
[Bartlett, Roscoe A]
It is not a fallacy. The only obvious behavior for value semantics is deep copy, such that if you write:
   A a(...);
   A b = a;
then any change to 'b' will have *no* impact on the behavior of 'a' at all, period. Anything else is not value semantics and will confuse people. Make it simple; most types should have either value semantics or reference semantics as described in Section 4.1 in the Teuchos MM report. Anything in between is just confusing and counter-intuitive.
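To make the two categories concrete, here is a small sketch using plain standard-library and Boost types (nothing Teuchos-specific; just an illustration of the distinction):

   #include <boost/shared_ptr.hpp>
   #include <vector>

   void valueVsReferenceSemantics()
   {
     // Value semantics: the copy is an independent object; mutating
     // 'b' can never affect 'a'.
     std::vector<int> a(3, 1);
     std::vector<int> b = a;
     b[0] = 99;               // a[0] is still 1

     // Reference semantics: the "copy" shares the underlying object;
     // mutating through 'q' is visible through 'p'.
     boost::shared_ptr<std::vector<int> > p(new std::vector<int>(3, 1));
     boost::shared_ptr<std::vector<int> > q = p;
     (*q)[0] = 99;            // (*p)[0] is now 99 as well
   }

A type that copies its handle but shares its guts mixes these two behaviors, which is exactly the confusion I am arguing against.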
- Ross