
Subject: [boost] FW: Boost Digest, Vol 2929, Issue 4
From: Bartlett, Roscoe A (rabartl_at_[hidden])
Date: 2010-06-03 19:29:15


David,

I have some responses below ...

> ------------------------------
>
> Message: 13
> Date: Tue, 01 Jun 2010 16:16:25 -0400
> From: David Abrahams <dave_at_[hidden]>
> To: boost_at_[hidden]
> Cc: "mathias.gaunard_at_[hidden]" <mathias.gaunard_at_[hidden]>
> Subject: Re: [boost] Review of a safer memory management approach for
> C++?
> Message-ID: <m2hblmcxuu.wl%dave_at_[hidden]>
> Content-Type: text/plain; charset=UTF-8
>
> ...
>
> > > At Thu, 27 May 2010 20:02:45 +0200,
> > > Ingo Loehken wrote:
> > > >
> > > > if I understand "hard to reason about" in the right way: like
> > > > there is no need for shared ownership at all, this also means
> > > > that there is no use for COM Programming - and of course there is.
> > >
> > > I think you don't understand it the right way. Shared ownership
> > > (at least in the presence of mutation) is hard to reason about
> > > because seemingly-local modifications can have non-local effects.
> >
> > [Bartlett, Roscoe A]
> >
> > Yes, but let's focus on the "essential complexity" of object sharing
>
> Okay.
>
> > and not get bogged down in the "accidental complexity" of memory
> > management. The approaches described in
> >
> > http://www.cs.sandia.gov/~rabartl/TeuchosMemoryManagementSAND.pdf
> >
> > make explicit the "essential complexity" of object sharing while
> > trying to eliminate the "accidental complexity" of memory management
> > (see Section 6.1 in
> > http://www.cs.sandia.gov/~rabartl/TeuchosMemoryManagementSAND.pdf).
>
> I'm all for making essential complexity obvious and eliminating
> accidental complexity, but I'm not sure I see a compelling example
> that Teuchos does that, at least not when compared with what I
> consider normal state-of-the-art C++ programming, which is mostly
> value-based, dynamic allocations are rare, and those that occur are
> immediately managed, e.g. by appropriate smart pointers.

[Bartlett, Roscoe A]

I don't want to get into an argument about the "proper" way to use C++ here (but I will a little below). In Item 1 of "Effective C++, 3rd Edition", Scott Meyers identifies four different major programming paradigms that are supposed to be supported by C++:

1) C: raw pointers, low-level built-in datatypes (int, double, etc.), statements, blocks, functions etc.

2) Object-oriented C++: Inheritance, virtual functions, dynamic allocation, RTTI, etc. (i.e. runtime polymorphism)

3) Template C++: Typical generic programming, template metaprogramming, etc. (i.e. compile-time polymorphism)

4) STL: containers, iterators, algorithms, etc., and code written in that style (not just code that uses the STL); closely related to #3, Template C++.

Of course, any well-written large C++ program is a mixture of all four of the above (and more).
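To make the contrast between #2 and #3 concrete, here is a minimal sketch (the Shape/Square names are mine, purely for illustration) of the same small operation written both ways:

    #include <iostream>

    // #2 Object-oriented C++: runtime polymorphism through a virtual interface.
    class Shape {
    public:
      virtual ~Shape() {}
      virtual double area() const = 0;
    };

    class Square : public Shape {
      double side_;
    public:
      explicit Square(double side) : side_(side) {}
      double area() const { return side_ * side_; }
    };

    void printArea(const Shape& s) {  // dispatched through the vtable at runtime
      std::cout << s.area() << "\n";
    }

    // #3 Template C++: compile-time polymorphism; any type with area() works.
    template <class AnyShape>
    void printAreaTemplated(const AnyShape& s) {  // resolved at compile time
      std::cout << s.area() << "\n";
    }

    int main() {
      Square sq(2.0);
      printArea(sq);           // #2: one function body handles many runtime types
      printAreaTemplated(sq);  // #3: one instantiation per compile-time type
      return 0;
    }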

What you seem to be arguing above is that most good C++ software will use templates as in #3 and #4 but almost none of #2, object-oriented C++. Note that C++ was first designed in the early 80s primarily to support #2, object-oriented C++. I can see the reason for the bias toward #3 and #4 given the nature of most of the Boost libraries, and you can hold that opinion if you like.

However, there are clearly types of programming problems that call for more of #2 and less of #3 and #4 in the overall design. Languages like Java and Python would typically be used to write safe object-oriented programs in a productive environment (free of the complications of dynamic memory allocation and most of the undefined behavior that is so easy to expose with current C++ approaches). I will argue below that many C++ programs are better served by using more of #2, object-oriented C++, and, to a lesser extent, the template-based methods in #3 and #4 (except where they are called for in lower-level code).

Instead of arguing about whether we should be using #2 object-oriented C++ (runtime polymorphism) or instead prefer #3 template programming (static polymorphism), let's assume that we want C++ to do a good job of supporting #2 object-oriented C++ if we decide to use it. The real question, then, is: if C++ is indeed supposed to support #2 object-oriented programming, does it support it well enough that I would choose C++ over Java or Python (assuming my programming team knows those languages)? By most reasonable measures, writing object-oriented programs in Java or Python is more productive because of garbage collection and the elimination of the undefined behavior that is so common in C++ programs (I can find lots of references for these opinions and even data from studies to support this claim). However, to achieve performance in CSE software, I have to write some code in C/C++ or another appropriate language. No one has been able to demonstrate sufficient performance for CSE codes written entirely in Java or Python. Also, many of the "safer" and "more productive" OO languages (e.g. Java and Python) are not even available on some high-end HPC platforms, and even if they were, mixed-language programming has lots of problems that discourage widespread use. That leaves C++ as the only fully portable, viable language for high-performance CSE (sorry, Fortran programmers).

I would argue that the currently accepted approaches to writing OO programs in C++ (i.e. using the #2 OO C++ features) make it too easy to create programs with undefined behavior (i.e. segfaults or worse), and as a result, paranoia about undefined behavior, memory leaks, etc. in C++ leads to lots of bad programming practices and designs (see Section 1 in http://www.cs.sandia.gov/~rabartl/TeuchosMemoryManagementSAND.pdf). What the approach described in the Teuchos MM report tries to do is create a set of classes and idioms that allow C++ to be used to develop #2 OO C++ software safely and productively, closer to the safety and productivity that you get with languages like Java and Python. However, there is still too much "accidental complexity" in C++ for it ever to fully compete with the productivity of writing in Java or Python for basic OO features (IMO). In any case, C++ offers other advantages that would have you choose it over these other languages.
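To give a flavor of the idioms the report advocates, here is a minimal sketch using only the basic Teuchos::RCP interface (the Mesh class is a hypothetical stand-in for any shared resource):

    #include "Teuchos_RCP.hpp"

    // Hypothetical shared resource; the class and its use are for illustration only.
    class Mesh {
    public:
      int numCells() const { return 100; }
    };

    int main() {
      using Teuchos::RCP;
      using Teuchos::rcp;
      // Ownership is established once, at the point of allocation; no raw
      // 'new' pointer escapes into the rest of the program.
      RCP<Mesh> mesh = rcp(new Mesh);
      // Sharing is explicit: copying the RCP bumps the reference count, and
      // clients that must not modify the object get an RCP<const Mesh>.
      RCP<const Mesh> meshView = mesh;
      // The Mesh is deleted exactly once, when the last RCP goes away; a
      // debug build also catches dangling references instead of yielding
      // undefined behavior.
      return (mesh->numCells() == meshView->numCells()) ? 0 : 1;
    }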

> > Trying to over-design a program to avoid all shared ownership is
> > what makes C++ programming so unproductive and has all the negative
> > consequences described in Section 1 in:
> >
> > http://www.cs.sandia.gov/~rabartl/TeuchosMemoryManagementSAND.pdf
> >
> > Designs with object sharing can be much less complex overall than
> > designs without sharing. You just need decent tools to detect circular
> > reference problems (and that is what the Teuchos::RCP class has).
>
> Well, I fundamentally disagree with all of the above. Overuse of
> runtime polymorphism, thus dynamic allocation, and thus shared
> ownership (it's almost an inevitable progression) is one of the things
> that has made C++ programming unproductive, and those people I know
> who most zealously avoid it tend to be more productive than everyone
> else. IMO.

[Bartlett, Roscoe A]

When runtime performance or other issues related to dynamic allocations and classic OO are not a problem, classic OO C++ programs using runtime polymorphism are typically superior to highly templated C++ programs using static polymorphism for the following (indisputable) reasons:

1) Well-written OO C++ software builds and (more importantly) rebuilds *much* faster than heavily templated code. This massively speeds up the development cycle; the difference can be orders of magnitude in recompile/relink times. I have seen cases where carefully using runtime OO resulted in recompile/relink cycles taking less than 20 seconds where all-template code would take 20 minutes. I am *not* exaggerating here at all.

2) Templated programs can result in massive object-code bloat over equivalent runtime OO programs. If you are really sloppy with templates and use them for everything, you can easily double or triple (or worse) the size of the object code and executables (see Item 44 in "Effective C++, 3rd Edition").

3) The error messages you get when you make a mistake using interfaces and virtual functions are far superior to the mess that most compilers put out when you make a mistake with implicit compile-time template interfaces. This is a huge issue. Runtime errors, as opposed to cryptic compile-time errors, can be handled by throwing exceptions carrying very good error messages whose quality programmers can 100% control (see the sketch after this list). Yes, of course you prefer compile-time errors to runtime errors, all things being equal, and I am not arguing against that. But such control is generally not available in template programming (there are some tricks you can use to improve compile-time error messages, but they are imperfect). For example, if you make a mistake with STL iterators/algorithms, you can get very cryptic error messages that dumbfound most C++ programmers, non-expert and expert alike.

4) Well-written OO C++ software allows for significant runtime flexibility. If you use dynamic libraries, for instance, you can add new implementations of objects without even having to relink customer executables, and you can fix many types of bugs in updated shared libraries without recompiling/relinking customer executables. In a large-scale software setting, this can be very helpful.
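To illustrate point 3 above, here is a sketch (the LinearOperator/Identity names and the message text are hypothetical) of how a runtime-polymorphic interface lets the developer fully control diagnostic quality:

    #include <iostream>
    #include <sstream>
    #include <stdexcept>
    #include <vector>

    // A runtime-polymorphic interface can validate its inputs and report
    // failures in terms its users understand.
    class LinearOperator {
    public:
      virtual ~LinearOperator() {}
      virtual int domainDim() const = 0;
      std::vector<double> apply(const std::vector<double>& x) const {
        if ((int)x.size() != domainDim()) {
          std::ostringstream msg;
          msg << "LinearOperator::apply(): input vector has length "
              << x.size() << " but the operator's domain dimension is "
              << domainDim() << ".";
          throw std::invalid_argument(msg.str());  // message quality is fully ours
        }
        return applyImpl(x);
      }
    private:
      virtual std::vector<double> applyImpl(const std::vector<double>& x) const = 0;
    };

    class Identity : public LinearOperator {
      int n_;
    public:
      explicit Identity(int n) : n_(n) {}
      int domainDim() const { return n_; }
    private:
      std::vector<double> applyImpl(const std::vector<double>& x) const { return x; }
    };

    int main() {
      Identity I(3);
      try {
        I.apply(std::vector<double>(5));  // wrong length: clear runtime diagnostic
      } catch (const std::invalid_argument& e) {
        std::cerr << e.what() << "\n";
      }
      return 0;
    }

The equivalent mistake against a compile-time template interface produces whatever instantiation backtrace the compiler chooses to print; no one but the compiler vendor controls its quality.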

Believe me, I see the value of template C++ code with lots of static typing and lots of local logic and control, but it is not the be-all and end-all, and it will never replace classic OO C++ for most high-level designs and software architecture. I challenge anyone to refute this. C++ compilers will need to become many orders of magnitude faster than they are today for this to change (and by then the software will just have grown that much larger, erasing the compiler speed improvements).

> In my experience, designs with object sharing tend to increase
> complexity in all kinds of ways beyond the memory management issues
> that you describe. Perhaps the most important way is, they tend to
> obscure the overall computation being performed by hiding it in
> networks of interacting objects.

[Bartlett, Roscoe A]

This is not going to magically change by templating everything. If a lot of objects collaborate to perform computations, it does not matter whether you use static or runtime polymorphism; the code will be hard to figure out, and you will need print statements and/or a debugger to see what is really happening.

> I would like to see an example of a design with shared object
> ownership that is ?much less complex? than all other functionally
> equivalent designs that eschew shared ownership. I'm not saying such
> applications don't exist, but IME they are rare, and a concrete
> example would help to bring this whole discussion back to earth.

[Bartlett, Roscoe A]

One design where shared mutable objects likely make things simpler and more natural overall is the PDE expression mechanism in the Sundance code (http://www.math.ttu.edu/~klong/Sundance/html/). If the clients all expect remote updates of shared objects, then things work great. The problem is when an object is changed by another client and you don't expect it to change (i.e. the classic problem of change propagation).
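The hazard can be stated in a few lines (the ExprNode name is hypothetical, not Sundance's API): two clients hold handles to the same mutable object, and a change made through one handle silently changes what the other sees:

    #include <iostream>
    #include "Teuchos_RCP.hpp"

    // Hypothetical shared, *mutable* node in an expression tree.
    struct ExprNode {
      double coeff;
      explicit ExprNode(double c) : coeff(c) {}
    };

    int main() {
      Teuchos::RCP<ExprNode> node = Teuchos::rcp(new ExprNode(1.0));
      Teuchos::RCP<ExprNode> clientA = node;  // client A's handle
      Teuchos::RCP<ExprNode> clientB = node;  // client B's handle

      clientA->coeff = 2.0;  // a seemingly local modification by client A ...
      // ... is a non-local effect for client B, which may not expect it:
      std::cout << "client B sees coeff = " << clientB->coeff << "\n";  // prints 2
      return 0;
    }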

As for sharing objects, it is critical for many types of software in the Computational Science and Engineering (CSE) domain: you have to share large objects in order to cut down on runtime and storage overhead. As one example, consider the Petra object model used by the Trilinos packages Epetra and Tpetra (http://trilinos.sandia.gov/). In the Petra object model, you have objects with significant storage overhead, like Maps and Graphs, that are shared between many different objects, like Vectors, MultiVectors, and Matrices. Yes, you could create deep copies of Maps and Graphs, but that would waste a large amount of storage. The bottom line is that by sharing large objects like Maps and Graphs instead of making large numbers of copies, we can fit and solve much larger simulation problems on a given computer than we could otherwise. You could template and statically allocate everything, and you would still never get around the fundamental need to share the data contained in large objects in these types of applications.

Note, however, that Maps in Epetra/Tpetra are shared as const immutable objects. That means that no one can change a Map after it is first created, so all of the problems with unexpected updates go away (as you mention above). However, the shared Maps must still go away when they are no longer needed, and that is what the Teuchos::RCP class makes easy and robust. Under the covers, Tpetra uses Teuchos::ArrayRCP for similar purposes and much more. The situation with Graphs is different, and issues of change propagation between shared objects are still present in some cases.
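Here is a sketch of that pattern, with hypothetical Map and Vector classes standing in for the (much larger) Epetra/Tpetra ones: many vectors share one const Map through reference-counted handles instead of carrying deep copies:

    #include <vector>
    #include "Teuchos_RCP.hpp"

    // Hypothetical stand-ins for the (much larger) Petra distribution objects.
    class Map {
      std::vector<int> globalIds_;  // potentially a very large index set
    public:
      explicit Map(int n) : globalIds_(n) {}
      int numIds() const { return (int)globalIds_.size(); }
    };

    class Vector {
      Teuchos::RCP<const Map> map_;  // shared and immutable: no client can change it
      std::vector<double> values_;
    public:
      explicit Vector(const Teuchos::RCP<const Map>& map)
        : map_(map), values_(map->numIds()) {}
    };

    int main() {
      Teuchos::RCP<const Map> map = Teuchos::rcp(new Map(1000000));
      // Any number of vectors share the one Map: its storage exists exactly
      // once, and it is destroyed when the last client releases its RCP.
      Vector x(map), y(map), z(map);
      return 0;
    }

The point is not the toy classes but the member declaration: storing RCP<const Map> documents both the sharing and the immutability right in the type.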

Come to think of it, most of the object sharing in the CSE software that I use and write involves const immutable objects, so the problems of change propagation mostly go away. However, there are some important use cases where not all of the clients hold RCPs to const objects, and the problem of change propagation remains. Again, to avoid significant runtime and storage overhead, we can't just create deep copies of all of these objects; templating and purely stack-based programs are not going to solve that problem.

The Teuchos MM approach, I believe, largely solves the memory management problems with dynamic memory allocation and object sharing while still yielding very high performance and mostly eliminating the undefined behavior associated with incorrect usage of memory. The most significant contribution of the Teuchos MM approach is the presence of Teuchos::Ptr and Teuchos::ArrayView and how they interact with the other classes.
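To sketch that interaction (assuming the basic interfaces of these classes as described in the report; the Solver class here is hypothetical): Teuchos::Ptr and Teuchos::ArrayView express non-owning use of persisting objects and contiguous data, so callees never see reference-counting machinery they don't need:

    #include "Teuchos_RCP.hpp"
    #include "Teuchos_Ptr.hpp"
    #include "Teuchos_ArrayRCP.hpp"
    #include "Teuchos_ArrayView.hpp"

    // Hypothetical class; only its name and maxIters() are made up here.
    class Solver {
    public:
      int maxIters() const { return 100; }
    };

    // A callee that uses (but neither owns nor shares) an object takes Ptr:
    // unlike a raw pointer it is debug-checked, and unlike RCP it states
    // "no ownership is transferred here" right in the signature.
    int queryIters(const Teuchos::Ptr<const Solver>& s) { return s->maxIters(); }

    // A callee that only reads contiguous data takes ArrayView; in a debug
    // build, element accesses are range-checked.
    double sum(const Teuchos::ArrayView<const double>& a) {
      double total = 0.0;
      for (int i = 0; i < a.size(); ++i) total += a[i];
      return total;
    }

    int main() {
      Teuchos::RCP<Solver> solver = Teuchos::rcp(new Solver);
      int iters = queryIters(solver.ptr());  // RCP::ptr(): non-owning view

      Teuchos::ArrayRCP<double> data = Teuchos::arcp<double>(10);
      for (int i = 0; i < data.size(); ++i) data[i] = 1.0;
      double total = sum(data());  // ArrayRCP::operator(): non-owning ArrayView
      return (iters == 100 && total == 10.0) ? 0 : 1;
    }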

Cheers,

- Ross

