## Re: [glas] Block matrices

**From:** Peter Gottschling (*pgottsch_at_[hidden]*)

**Date:** 2005-10-21 18:51:04

**Next message:** Wolfgang Bangerth: "Re: [glas] Block matrices"
**Previous message:** Karl Meerbergen: "Re: [glas] Block matrices"
**In reply to:** Karl Meerbergen: "Re: [glas] Block matrices"
**Next in thread:** David Abrahams: "Re: [glas] Block matrices"
**Reply:** David Abrahams: "Re: [glas] Block matrices"

Hi Karl and Wolfgang,

There are several different issues at play in your discussion. I'll start
with the simpler one. Diagonal matrices are normal matrices and should
behave as such: dia_matrix(i, j) therefore exists for i != j and should be
0. Of course, we don't want to store all these 0s or any sparsity
structure. Thus, diagonal matrices are _stored_ as vectors but still
behave like matrices. One of GLAS's (and MTL's) intentions is to provide
adaptors that let matrices behave like vectors so that we can call
vector operations on them (given the value_type fulfills its requirements,
as Karl pointed out).
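The idea of storing only the diagonal while keeping matrix semantics can be sketched as follows. This is a minimal illustrative class, not an actual GLAS type; the name `dia_matrix` and its interface are assumptions for the example.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: a diagonal matrix that stores only its diagonal
// as a vector, yet answers (i, j) queries like a full matrix would.
template <typename T>
class dia_matrix {
public:
    explicit dia_matrix(std::size_t n) : diag_(n, T(0)) {}

    // Matrix-like element access: off-diagonal entries are 0 but not stored.
    T operator()(std::size_t i, std::size_t j) const {
        return i == j ? diag_[i] : T(0);
    }

    T& diagonal(std::size_t i) { return diag_[i]; }
    std::size_t size() const { return diag_.size(); }

private:
    std::vector<T> diag_;  // only n values stored for an n-by-n matrix
};
```

An adaptor in the GLAS/MTL spirit would expose `diag_` as a vector for vector operations, while `operator()` preserves the matrix view.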

The topic of blocked matrices, or matrices of matrices, requires from my
perspective several distinctions.

A blocked matrix can be a matrix of matrices, e.g.

    matrix<matrix<double, fixed::dimension<2, 2> > > m1(2, 2)

         ((a b) (c d))
    m1 = ((e f) (g h))
         ((i j) (k l))
         ((m n) (o p))

m1(0, 1) would be

    (c d)
    (g h)

and it will most likely be stored as

    a, b, e, f, c, d, g, h, i, j, m, n, k, l, o, p

and the value_type would be matrix<double, fixed::dimension<2, 2> >.
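The nesting can be sketched with a small fixed-dimension matrix template whose value_type may itself be a matrix. The names `fixed_matrix`, `block`, and `nested` are illustrative, not actual GLAS types.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Hypothetical sketch: a fixed-size, row-major matrix template.
// Nesting it in itself gives an m1-style matrix of matrices.
template <typename T, std::size_t R, std::size_t C>
struct fixed_matrix {
    using value_type = T;
    std::array<T, R * C> data{};  // row-major storage

    T& operator()(std::size_t i, std::size_t j) { return data[i * C + j]; }
    const T& operator()(std::size_t i, std::size_t j) const { return data[i * C + j]; }
};

// m1-style type: a 2x2 matrix whose entries are 2x2 matrices of double,
// so blocks are stored contiguously, one after another.
using block  = fixed_matrix<double, 2, 2>;
using nested = fixed_matrix<block, 2, 2>;
```

Because each `block` is a contiguous `std::array`, the scalar layout of `nested` is exactly the block-by-block order a, b, e, f, c, d, g, h, ... described above.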

Something different is a blocked view of a matrix. Let's say we have

    matrix<double> m2(4, 4)

         (a b c d)
    m2 = (e f g h)
         (i j k l)
         (m n o p)

m2(0, 1) would of course be b. The elements are stored in
alphabetical order, and the value_type is double.

Now, we can define a blocked view on m2:

    blocked_view<matrix<double>, fixed::dimension<2, 2> > m3(m2)

m3's value_type is now matrix<double, fixed::dimension<2, 2> >, and m3(0, 1) would also be

    (c d)
    (g h)

In contrast to m1(0, 1), the elements are certainly not stored
consecutively.
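The essential property of such a view is that it owns no storage and only remaps indices into the underlying matrix. A minimal sketch, with illustrative names (`dense_matrix`, `blocked_view`, `at`) that are not actual GLAS interfaces:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical row-major dense matrix standing in for matrix<double>.
struct dense_matrix {
    std::size_t rows, cols;
    std::vector<double> data;
    dense_matrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c) {}
    double& operator()(std::size_t i, std::size_t j) { return data[i * cols + j]; }
};

// Hypothetical blocked view: element (bi, bj) of block (I, J) is just
// element (I*BR + bi, J*BC + bj) of the underlying matrix. No copy is
// made, so block elements are not consecutive in memory.
template <std::size_t BR, std::size_t BC>
struct blocked_view {
    dense_matrix& m;
    double& at(std::size_t I, std::size_t J, std::size_t bi, std::size_t bj) {
        return m(I * BR + bi, J * BC + bj);
    }
};
```

For the 4x4 example, block (0, 1) of the view refers to the entries c, d, g, h of m2 in place.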

Another alternative is a block storage scheme, e.g.

    matrix<double, blocked<2, 2> > m4(4, 4)

The value_type would be double, and m4(0, 1) would be b, just as for m2.
In contrast to m2, the elements would be stored like m1's:

    a, b, e, f, c, d, g, h, i, j, m, n, k, l, o, p

Algorithms could access them as simple matrices or, much better, with
specialized implementations benefiting from the improved locality.
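The index translation behind such a storage scheme can be written down explicitly. This is a sketch of one plausible mapping (the function name and signature are illustrative); for the 4x4 example it reproduces the order a, b, e, f, c, d, g, h, ... above.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of blocked<BR, BC> storage: translate a scalar
// index (i, j) to a linear position such that each BRxBC block is
// contiguous, blocks ordered row-major, elements row-major within a block.
template <std::size_t BR, std::size_t BC>
std::size_t blocked_index(std::size_t i, std::size_t j, std::size_t cols) {
    std::size_t I = i / BR, bi = i % BR;  // block row, row within block
    std::size_t J = j / BC, bj = j % BC;  // block column, column within block
    std::size_t block_cols = cols / BC;   // number of blocks per block row
    return ((I * block_cols + J) * BR + bi) * BC + bj;
}
```

A specialized algorithm can then walk one block at a time, touching BR*BC consecutive memory locations per block instead of striding across rows.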

While reading this discussion about value_type I had another, maybe
crazy, idea. We could think about an unnestify adaptor. Its value_type
would be the value_type of the inner matrix. (Now the types become a
little ugly.)

    unnestify<matrix<matrix<double, fixed::dimension<2, 2> > > > m5(m1);

m5's value_type is double and it behaves like m2. However, it still has
m1's storage scheme. The implementation might be a little tricky, and
the impact on performance is not yet completely clear to me, especially
for iterators and cursors. However, this way we gain more flexibility.
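At least for plain element access, the adaptor reduces to the inverse of the blocked mapping: scalar access (i, j) forwards to inner block (i/BR, j/BC) at position (i%BR, j%BC). A minimal sketch under assumed names (`fixed_matrix`, `unnestify` as written here are illustrative, not GLAS types):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Hypothetical fixed-size, row-major matrix, nestable in itself.
template <typename T, std::size_t R, std::size_t C>
struct fixed_matrix {
    using value_type = T;
    std::array<T, R * C> data{};
    T& operator()(std::size_t i, std::size_t j) { return data[i * C + j]; }
};

// Hypothetical unnestify adaptor: makes a matrix of BRxBC matrices
// behave like a flat scalar matrix, while keeping the nested storage.
template <typename Nested, std::size_t BR, std::size_t BC>
struct unnestify {
    Nested& m;
    using value_type = typename Nested::value_type::value_type;  // inner scalar

    value_type& operator()(std::size_t i, std::size_t j) {
        return m(i / BR, j / BC)(i % BR, j % BC);
    }
};
```

The tricky part Peter alludes to is not this access function but iterators and cursors, which would have to stitch block-local traversals into one flat traversal.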

Best wishes

Peter

On Oct 21, 2005, at 3:32 AM, Karl Meerbergen wrote:

> Hi,

>

> Reading the reactions I think it may be wise to allow the value_type

> to be

> anything. The algorithms for vector of matrix, e.g. depend on the

> concepts

> that value_type applies to.

>

> This implies that some more concepts are needed for the value_type's

> in order

> to deal with any user defined value_type, possibly containers.

>

> Having said that, I still think that if the goal is the creation of

> block

> matrices, it should be a class by itself with the scalar values as

> value_type

> since the blocked matrix behaves like a matrix.

>

> On the other hand, if the goal is to make a vector of matrices, it

> should

> behave as a vector of matrices.

>

> At this stage, the collections have some value_type that satisfies some

> concepts. If the value_type is a Ring, it has a * and +, and so, we can

> compute the dot() of two vectors.

>

> In order to use a vector of matrices in dot(), the matrix should also

> be a

> Ring: the matrix_type::operator*() (matrix_type) exists and returns

> some

> type. (at this stage, in glas this is

> operator_times_function<matrix_type,matrix_type>::result_type)

> The dot product is the sum of matrix products (using *).

>

> Similarly, norm_1 = sum of norm_1 of the matrices. This implies we

> have to

> provide norm_1() for scalar types: this is just abs().

>

> All clear? Agreed?

>

>

> Karl

>

> On Friday 21 October 2005 00:33, Wolfgang Bangerth wrote:

>>> I dont know if "nested matrices" is a clever way or not but this is

>>> the

>>> way we designed our LA library applied to deterministic transport

>>> codes.

>>>

>>> http://parasol.tamu.edu/asci/labfest-may05/lp-

>>> Generic_Programming_Experim

>>> ents.pdf

>>>

>>> Anyway, our point is that we have to deal with complex matrix

>>> structures. Symbolically, we have to deal with objects like

>>>

>>> DenseMatrix< DiagonalMatrix< UpperTriangularMatrix< double > > >

>>>

>>> In this respect the block view seemed a bit restrictive.

>>

>> What I meant to say is that if you do something like this, there is

>> no way

>> to access the element with _global_ coordinates (i,j), because the

>> outer

>> diagonal matrix can't just pass it through to its components (it

>> doesn't

>> know their sizes, for example). In effect, at least for element

>> access,

>> your outer matrix is more like an array than a matrix, and to write

>> into

>> the matrix requires you to step through its building blocks until you

>> get

>> to the innermost one.

>>

>> That being said, I can completely see the need for what you do (and we

>> have exactly the same things in our radiative transfer code -- what a

>> coincidence). I guess everything depends on whether you want the

>> object to

>> look more like a matrix or an array. I prefer the former view rather

>> than

>> the latter, but that's fine because I can always get it by building the

>> outer classes myself and using only numbers as template arguments for

>> the

>> proposed GLAS classes.

>>

>> Best

>> Wolfgang

>>

>> ----------------------------------------------------------------------

>> ---

>> Wolfgang Bangerth email:

>> bangerth_at_[hidden]

>> www:

>> http://www.math.tamu.edu/~bangerth/

>>

>> _______________________________________________

>> glas mailing list

>> glas_at_[hidden]

>> http://lists.boost.org/mailman/listinfo.cgi/glas

>

> --

> ==============================================


> Karl Meerbergen

> Free Field Technologies

> 16 place de l'Université

> B-1348 Louvain-la-Neuve - BELGIUM

> Company Phone: +32 10 45 12 26

> Company Fax: +32 10 45 46 26

> Mobile Phone: +32 474 26 66 59

> Home Phone: +32 2 306 38 10

> mailto:Karl.Meerbergen_at_[hidden]

> http://www.fft.be

> ==============================================

>

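Karl's requirement quoted above, that dot() only needs the value_type to model a Ring with * and +, can be sketched generically. The function below is an illustrative sketch, not the actual GLAS dot(); it works for vectors of scalars and, given a matrix type providing * and +, for vectors of matrices as well.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: a dot product that only assumes Ring operations
// on value_type, so vectors of scalars and vectors of matrices are
// handled by the same code.
template <typename Vector>
typename Vector::value_type dot(const Vector& x, const Vector& y) {
    using T = typename Vector::value_type;
    T sum = T();                      // additive identity of the Ring
    for (std::size_t i = 0; i < x.size(); ++i)
        sum = sum + x[i] * y[i];      // only * and + are required
    return sum;
}
```

In the same spirit, norm_1 of a vector of matrices would sum norm_1 of the entries, with norm_1 on scalars falling back to abs(), as Karl notes.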

------------

Peter Gottschling

Research Associate

Open Systems Laboratory

Indiana University

301i Lindley Hall

Bloomington, IN 47405

Tel.: +1 812 855-8898 Fax: +1 812 856 0853

http://www.osl.iu.edu/~pgottsch
