
From: Paul C. Leopardi (paul.leopardi_at_[hidden])
Date: 2008-08-06 20:52:01


On Thu, 7 Aug 2008, Jesse Perla wrote:
> I am trying to patch together a set of C++ libraries to provide a really
> straightforward C++ development toolkit / set of libraries for economists to
> use as an alternative to Fortran/Matlab. This is for a set of tool
> proposals that a number of people might follow for separate projects.
>
> I am sure this request has been repeated a million times, but I looked
> through mailing lists, etc. and cannot seem to find definitive wisdom about
> what is the best approach to take. Can anyone help out with ideas about
> where the industry is going? As far as I can see, the basic choice is
> between boost::ublas and blitz++.

Dear Jesse,
I went through the same exercise for my GluCat library in 2001-2002. Time and
C++ have moved on since, but essentially, at the time, the choice was between
Blitz and MTL.
http://www.osl.iu.edu/research/mtl/
MTL won because Blitz was an array and tensor library rather than a matrix
library, and it was not obvious how to do what GluCat needed, which was
primarily sparse matrix algebra, including inverses. The MTL mailing list
archive has been taken down, so I'm including the evaluation at the end of
this message.

After I implemented GluCat using MTL, Joerg Walter told me about uBLAS and it
had such obvious advantages that I switched GluCat to using uBLAS.
http://sourceforge.net/forum/forum.php?forum_id=239344
See also thread http://lists.boost.org/Archives/boost/2002/05/29119.php
and http://osdir.com/ml/lib.mtl.devel/2002-12/msg00013.html

As I said though, time has moved on. There is now an alpha version of MTL 4,
which may be worth a look if it is still being actively developed:
http://www.osl.iu.edu/research/mtl/mtl4/

---------- Forwarded Message ----------

Subject: Evaluation of MTL for GluCat
Date: Sun, 2 Dec 2001
From: "Paul C. Leopardi" <leopardi_at_[hidden]>
To: mtl-devel_at_[hidden]

Hi all,
I have decided to write my critique of MTL as an evaluation of MTL in
relation to the requirements of GluCat. This may make it easier to understand
the pros and cons. I'm especially interested to know if I have gotten
anything wrong. I may also post a longer evaluation of POOMA, Blitz++ and MTL
to http://glucat.sourceforge.net
Thanks

Evaluation of MTL for GluCat, 2001-12-02
----------------------------------------

GluCat requirements for linear algebra libraries in decreasing order of
 importance.
0) Basics
It is assumed that the library is C++ and open source.
1) Linear algebra
GluCat needs to calculate inverses and solve linear systems.
2) Sparsity
GluCat needs to use sparse storage for both space and time savings, especially
 for basis generation.
3) Accuracy and stability of linear algebra
GluCat must accurately convert between the coordinate representation and the
 matrix representation.
4) Current support
GluCat will be publicly available and will have a public dependence on the
 linear algebra library.
5) Simple semantics for assignment to uninitialized matrices
GluCat needs to use structures containing matrices where the shape is not
 known in advance.
6) Simple syntax for linear operations: matrix algebra notation
GluCat algorithms view both matrix-matrix and matrix-vector multiplication as
primitive operations. Syntax should be similar to Matlab.
7) Simple semantics for linear operations
GluCat algorithms should be expressed as simply as possible, with no
preconditions, special cases or side effects. Semantics should be similar to
Matlab.

Summary of evaluation of MTL
PROS
1) Linear algebra including LU solvers
2) Support for sparsity
CONS
1) Cannot use copy() with uninitialized variables
Does not meet requirement 5
2) No matrix algebra syntax
Does not meet requirement 6
3) Need to pre-allocate storage for sparse matrix operations
Does not meet requirement 7
4) Needs more sophisticated linear algorithms for accuracy and stability
Does not meet requirement 3

Decision:
Use MTL for GluCat, with the following workarounds:

1) Cannot use copy() with uninitialized variables
Avoid copy() where possible, initialize variables where possible, and use
B = matrix(A.nrows(), A.ncols()) before copy(A,B) otherwise.
2) No matrix algebra syntax
Use MTL syntax, with matrix algebra syntax in comments.
3) Need to pre-allocate storage for sparse matrix operations
Use a modified version of MTL which has auto-resize within sparse multiply.
 The alternative would have been to create a temporary with sufficient space
 at every point where a matrix multiplication is done.
4) Needs more sophisticated linear algorithms for accuracy and stability
Add iterative refinement at the point where lu_solve is used.

Details
PROS
1) Linear algebra including LU solvers
2) Support for sparsity

CONS
1) Cannot use copy() with uninitialized variables.
MTL splits assignment functionality into a "shallow" operator= and a "deep"
 copy(). Unfortunately, copy() only copies content and not shape. It assumes
 that the matrix being copied to has already been initialized.

Related Bugs:
1.1) Documentation should describe this behaviour of copy() but does not
You either need to expect this as "normal" behaviour, or discover this
 behaviour by using the debugger and examining the source code.

2) No matrix algebra syntax
Lack of a matrix algebra syntax makes it necessary to translate matrix
 algorithms into MTL code, which resembles writing a program in assembly
 language. This is time consuming and error prone.

Example
// r = A*x - b;
Vector_t r(A.nrows(), 0.0);        // r must be sized (and zeroed) before use
mtl::mult(A, x, r);                // r = A * x
mtl::add(mtl::scaled(b, -1.), r);  // r = r - b
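
For contrast, a library with operator and expression syntax lets the same
computation be written as a single expression. Below is a minimal sketch using
Boost uBLAS (the library GluCat later adopted, per the covering message
above); the sizes and values are purely illustrative:

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

int main()
{
  ublas::matrix<double> A = ublas::identity_matrix<double>(3);
  ublas::vector<double> x = ublas::scalar_vector<double>(3, 2.0);
  ublas::vector<double> b = ublas::scalar_vector<double>(3, 1.0);

  // r = A*x - b, written directly in matrix algebra notation
  ublas::vector<double> r = ublas::prod(A, x) - b;
  return 0;
}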

3) Need to pre-allocate storage for sparse matrix operations
If you want to create sparse matrices, you need to know how much storage they
 will need before you operate on them. This is in contrast to e.g. Matlab,
 where a sparse array is extended as needed; a sketch of that Matlab-like
 behaviour follows the related bugs below.

Related bugs:
3.1) Sparse matrix multiply assigns 0 when it runs out of space
Sparse multiply outputs a cryptic message and assigns 0 if not
 enough storage has been allocated for the sparse array. Either the message
 should be more informative, or there should be an option to tell operations
 to resize automatically, such as an auto-resize policy class.
3.2) Documentation should mention this behaviour of multiply and does not
You either need to expect this as "normal" behaviour, or discover this
 behaviour by using the debugger and examining the source code.
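
For comparison, here is a small sketch, using Boost uBLAS rather than MTL and
purely to illustrate the Matlab-like behaviour wanted above, of a sparse
matrix whose storage grows as elements are assigned; the dimensions are
arbitrary:

#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <cstddef>
#include <iostream>

namespace ublas = boost::numeric::ublas;

int main()
{
  // No count of non-zero entries needs to be declared up front.
  ublas::mapped_matrix<double> A(1000, 1000);

  // Storage grows as elements are written, as in Matlab.
  for (std::size_t i = 0; i < 1000; ++i)
    A(i, i) = 2.0;
  A(0, 999) = -1.0;

  std::cout << A(0, 999) << std::endl;
  return 0;
}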

4) Needs more sophisticated linear algorithms for accuracy and stability
For example, lu_solve uses Gaussian elimination with partial pivoting, but
 does not use iterative refinement.
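
The refinement step itself is straightforward: factor A once, solve, form the
residual using a copy of the original A, solve again for a correction, and add
it. Below is a minimal one-pass sketch written with Boost uBLAS (mentioned at
the top of this message) rather than MTL's lu_solve; the function name
refined_solve is illustrative only:

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/lu.hpp>
#include <cstddef>

namespace ublas = boost::numeric::ublas;

// Solve A*x = b by LU with partial pivoting, then apply one pass of
// iterative refinement: r = b - A*x; solve A*dx = r; x += dx.
ublas::vector<double>
refined_solve(ublas::matrix<double> A, const ublas::vector<double>& b)
{
  const ublas::matrix<double> A0 = A;   // unfactored copy, kept for the residual
  ublas::permutation_matrix<std::size_t> pm(A.size1());
  ublas::lu_factorize(A, pm);           // A now holds its LU factors; a real
                                        // implementation would check the return
                                        // value for singularity
  ublas::vector<double> x = b;
  ublas::lu_substitute(A, pm, x);       // x ~ inverse(A0) * b

  ublas::vector<double> r = b - ublas::prod(A0, x);  // residual of computed solution
  ublas::lu_substitute(A, pm, r);       // correction dx, stored in r
  x += r;                               // refined solution
  return x;
}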

See also MTL TODO List:
http://www.osl.iu.edu/research/mtl/todo_list.php3

On Wed, 28 Nov 2001 09:55, Paul C. Leopardi wrote:
> Hi all,
> I have been using MTL as the linear algebra engine for the GluCat library (
> http://glucat.sourceforge.net ) as a project for my Masters by coursework
> at UNSW. I am currently writing up my project report. One section will be a
> review and critique of MTL.
>
> In the critique of MTL, I will review some of the pitfalls I encountered in
> starting from a background which is more used to Matlab and Maple and then
> trying to apply this to writing algorithms using MTL. The main point will
> be that MTL forces its users to write code at a level lower than Matlab,
> Fortran 90 or even Blitz++, and that this creates many opportunities for
> errors and confusion.
>
> I intend to publish an early draft on this mailing list, to give other MTL
> users an opportunity to set me straight.
> Best regards

---------- Forwarded Message ----------

Subject: Re: MTL critique
Date: Sun, 3 Feb 2002
From: "Paul C. Leopardi" <leopardi_at_[hidden]>
To: szummer_at_[hidden]

Hi Martin,
At this point I don't have much more than my evaluation of MTL versus the
requirements of GluCat,
http://www.osl.iu.edu/MailArchives/mtl-devel/msg00244.php
http://www.osl.iu.edu/MailArchives/mtl-devel/msg00246.php
but I will attempt to answer your questions below.
Best regards

On Sun, 3 Feb 2002 10:59, you wrote:

> Hi Paul,
>
> I read your preliminary critique of MTL on the MTL-devel mailing list,
> written 2001-12-02.
> Great job! I wish there were more work like yours.
> You mentioned you will post a more complete critique later - could you
> CC it to me, please?
>
> I would be interested in a critique containing the following (I am happy
> to wait for your final review, or if you know the answers to some of
> these questions already, I would be very interested!)

> * performance of MTL compiled with both gcc and KAI - I have only seen
> the authors' own benchmarks; how about an independent benchmark on the
> speed of MTL, compared to low-level libraries like ATLAS, or
> vendor-BLAS?
I do not have access to the KAI compiler and currently don't have time to
benchmark against LAPACK and BLAS - MTL actually sits at a level between
LAPACK and BLAS. While performance matters for GluCat, structure and
correctness were far more important for the initial build stage.

> * comparison of MTL with Blitz++, TNT, LinAl, LinAlg, and other
> competing C++ libraries.
Against Blitz++: Blitz++ does not support general sparse matrices and does
not provide linear algebra routines such as lu_solve, etc.
Against TNT: From what I saw of the TNT web site when I examined it a year
ago, TNT looked unsupported, or at least not currently maintained. I could be
wrong. TNT does not support general sparse matrices.
I am not familiar with LinAl or LinAlg.

> * practical experience of using MTL - I have heard that it can be
> difficult to decipher compiler errors from the many nested template
> expansions
Deciphering compiler errors is initially difficult but gets much easier very
quickly. Start with small pieces of code. It is not compiler errors but
runtime errors that are the real horror. Compile with debugging on (-g3 for
g++) and make sure you have a good debugger which can find its way through
all the long mangled names.

> * how does MTL foster bug-free research code: i) rate of bugs in MTL
> itself ii) how easy it is to create buggy code in MTL (due to weak
> types, no array bounds checking etc)
The low level of the library, together with the "shallow copy" semantics,
makes it easy to create buggy code. Luckily, most bugs show up as segmentation
violations.

You need to be very careful when writing code using MTL. For example, if MTL
had operators (such as *=) and value semantics, I could write:
    matrix_t result = unit(dim);
    const matrix_t* e = Some_Vector_of_Matrices;
    for (index_t k = folded_min; k <= folded_max; ++k)
      if (folded_set[k])
        result *= e[k];

instead of the following:
    result = matrix_t(dim, dim);              // give result a shape first
    unit(dim, result);                        // result = identity
    const matrix_t* e = Some_Vector_of_Matrices;
    for (index_t k = folded_min; k <= folded_max; ++k)
      if (folded_set[k])
      {
        matrix_t temp(dim, dim);              // pre-sized temporary avoids aliasing result
        mtl::mult(result, e[k], temp);        // temp = result * e[k]
        mtl::copy(temp, result);              // deep copy temp back into result
      }

With MTL, you need to use mtl::copy to prevent aliasing, but when you do, you
must make sure that the matrix has already been given a size.
Because of the ever-present likelihood of aliasing, I programmed very
defensively and probably ended up using mtl::copy more than was strictly
necessary.
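
Aliasing is not unique to MTL; libraries with operator syntax must handle it
too. In Boost uBLAS, for example, assignment from an expression is
aliasing-safe by default (a temporary is created), and noalias() is the
explicit opt-out. A minimal sketch, with sizes chosen only for illustration:

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

int main()
{
  ublas::matrix<double> A = ublas::identity_matrix<double>(3);
  ublas::vector<double> x = ublas::scalar_vector<double>(3, 1.0);

  // Safe: uBLAS assumes the target may alias the right-hand side
  // and evaluates prod(A, x) into a temporary before assigning.
  x = ublas::prod(A, x);

  // Faster, but only correct when y does not appear on the right-hand side.
  ublas::vector<double> y(3);
  ublas::noalias(y) = ublas::prod(A, x);
  return 0;
}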

> From a practical point of view: would you recommend using MTL right now?
> I am trying to decide what library to use.
It very much depends on your requirements. Mine were:
1) Must be a C++ template library
2) Must provide linear algebra operations
3) Must support general sparse matrices
MTL fit these requirements; Blitz++ and TNT did not.
POOMA and deal.II were too big to evaluate quickly.

> Thanks,
>
> -- Martin

---