Re: [glas] multi-dimensional arrays?

From: Peter Gottschling (pgottsch_at_[hidden])
Date: 2007-01-12 12:50:13


Hi Laurent,

Sorry for my delay too. Thank you for the detailed explanation. Your
issues are much clearer to me now. Still, I don't yet know how to come
up with a big picture in which everything fits.

On 22.12.2006, at 12:15, plagne.laurent_at_[hidden] wrote:

> Hi Peter, Hi Karl, sorry for the delay.
>
> Since my own lib (with a new name, LEGOLAS++) handles nested matrices
> and vectors, I can report on how I had to address the issues you have
> mentioned. Up to now, I did not address the difficult and important
> matrix<->matrix (BLAS3) issue in the general case, so I do not pretend
> to hold all the answers...
>
[...snip...]

>
>
>
>> Next question, do we need special concepts? Say e.g.
>> RecursiveTransposable for a recursive function transpose
>
> To me, every matrix can be transposed... So no additional concept is
> needed.
>
I chose Transposable for the sake of simplicity. Practically, I can't
picture a matrix that is not transposable. However, it is good to know
that no extra concepts are needed.
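
For readers who prefer plain templates to the proposed concepts syntax
quoted further below, here is a minimal sketch of the recursive
transpose. The matrix type and the transpose overloads are made up for
illustration; this is neither GLAS nor LEGOLAS++ code:

#include <cstddef>
#include <vector>

template <typename T>
class matrix {
public:
    matrix() : rows_(0), cols_(0) {}
    matrix(std::size_t r, std::size_t c)
        : rows_(r), cols_(c), data_(r * c) {}
    T& operator()(std::size_t i, std::size_t j)
    { return data_[i * cols_ + j]; }
    const T& operator()(std::size_t i, std::size_t j) const
    { return data_[i * cols_ + j]; }
    std::size_t num_rows() const { return rows_; }
    std::size_t num_cols() const { return cols_; }
private:
    std::size_t rows_, cols_;
    std::vector<T> data_;
};

// Base case: transposing an arithmetic scalar is the identity
// (the role played by the concept_map for Arithmetic<T> below).
inline double transpose(double x) { return x; }

// Recursive case: transpose the block structure and every element.
template <typename T>
matrix<T> transpose(const matrix<T>& a)
{
    matrix<T> b(a.num_cols(), a.num_rows());
    for (std::size_t i = 0; i < a.num_rows(); ++i)
        for (std::size_t j = 0; j < a.num_cols(); ++j)
            b(j, i) = transpose(a(i, j));
    return b;
}

With the scalar overload as the bottom of the recursion, the same call
works for matrix<double> as well as for matrix< matrix<double> >.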

A trickier operation is the inversion of heterogeneously nested
matrices. Informally, it requires that the diagonal blocks/elements are
invertible and that all products needed for the updates are defined.
Writing this down formally with concepts will need some attention.
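
Concretely, in the 2x2 block case (using the block notation quoted
further below), with the Schur complement S = D - C*inv(A)*B:

  M = (A B)
      (C D)

  inv(M) = ( inv(A) + inv(A)*B*inv(S)*C*inv(A)   -inv(A)*B*inv(S) )
           (         -inv(S)*C*inv(A)                   inv(S)    )

so the diagonal pivots A and S must be invertible, and mixed products
such as inv(A)*B and C*inv(A) must be defined.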

>> would recursively descend and break if it reaches a scalar value that
>> has no value_type.
>
> In LEGOLAS++, every matrix element type (in particular scalars) comes
> with a companion ElementDataDriver class, which is used to ensure that
> every element models the ElementMatrix concept. This concept includes
> the definition of the transpose function, the dot product, zero, ...
>
I agree that this ElementMatrix concept, which includes scalars, is the
key to defining the recursive algorithms formally. Can you show us your
definition? To increase the genericity of the algorithms, it might be
an idea to split the concept into several concepts, one per operation,
since probably not all operations are used in each algorithm. In order
to keep the requirement lists of algorithms short, one can then group
these concepts by refinement.
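
One possible shape of such a split, written in the same proposed
concepts syntax as the code quoted below; all names here are invented
for illustration and are not the LEGOLAS++ or GLAS definitions:

concept ElementTransposable<typename T>
{
    T transpose(T);
};

concept ElementAddable<typename T>
{
    T operator+(T, T);
};

concept ElementMultipliable<typename T>
{
    T operator*(T, T);
};

// Group the fine-grained concepts by refinement, so that algorithms
// needing everything can still state a single requirement.
concept ElementMatrix<typename T>
    : ElementTransposable<T>, ElementAddable<T>, ElementMultipliable<T>
{};

// An individual algorithm then lists only what it actually uses, e.g.
template <typename Matrix>
    where ElementTransposable<Matrix::value_type>
Matrix trans(const Matrix& a);

A matrix product, in contrast, would require ElementAddable and
ElementMultipliable but not ElementTransposable.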

> Again Peter, I think that the matrix-matrix issue addressed by GLAS is
> much more difficult.
>
Did I say it's easy? I'm just searching for a perspective in which as
much as possible is covered.
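
The naive product from the message quoted below, c(i, k) += a(i, j) *
b(j, k), written out as a plain template, is only a few lines; making
it fast for nested matrices is the real work. A minimal sketch, with
the same hypothetical num_rows/num_cols/operator() interface as the
transpose sketch above (not GLAS code):

#include <cstddef>

// Syntactically correct for any element type with consistent += and *
// (including nested matrices), but far from an optimized BLAS3 kernel
// when the elements are themselves matrices.
template <typename MatrixA, typename MatrixB, typename MatrixC>
void mult_add(const MatrixA& a, const MatrixB& b, MatrixC& c)
{
    for (std::size_t i = 0; i < a.num_rows(); ++i)
        for (std::size_t j = 0; j < a.num_cols(); ++j)
            for (std::size_t k = 0; k < b.num_cols(); ++k)
                c(i, k) += a(i, j) * b(j, k);
}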

Best wishes,
Peter

>> Extending this to block-matrices,
>
> Strictly speaking, I do not deal with block-matrices but only with
> matrices whose elements model the ElementMatrix concept, a concept
> which matrices themselves also model. The same goes for vectors.
>
> Hi Karl :
>
> I do not deal with the blocked algorithms used in optimized xgemm
> implementations for dense matrices. While that issue can be seen as an
> implementation policy, the sparsity pattern of a matrix (chosen by the
> user) is NOT an implementation detail which should be abstracted away.
> On the contrary, I try to design my lib to allow a very detailed
> description of this sparsity pattern.
>
> To me, a matrix and a 2D vector are very different types. Matrices are
> mainly linear operators acting on vectors.
> The container of a dense matrix may well be a 2D array (vector), but
> the semantic level of a matrix and of the corresponding container
> should be strictly separated, since a matrix does not imply a
> container. Permutation matrices, finite difference stencils, etc. are
> matrices and usually are not stored at all...
>
> Best,
>
> Laurent
>
> Quoting Peter Gottschling <pgottsch_at_[hidden]>:
>
>> Hi Laurent,
>>
>> Concerning nested matrix and vector types, we need to ask ourselves
>> first whether we need special algorithms to handle them, or whether at
>> least in some cases the operators on the values already use the right
>> algorithm.
>>
>> Second, is the use of operators efficient enough performance-wise? In
>> a matrix multiplication, c(i, k) += a(i, j) * b(j, k) is correct for
>> any type with consistent operators, but it is not very efficient if a,
>> b, and c are matrices, unless we have really sophisticated expression
>> templates.
>>
>> Next question, do we need special concepts? Say e.g.
>> RecursiveTransposable for a recursive function transpose
>>
>> What would it look like?
>>
>> concept RecursiveTransposable<typename Matrix>
>> {
>>     where RecursiveTransposable<Matrix::value_type>;
>>     // ...
>> };
>>
>> would recursively descend and break if it reaches a scalar value that
>> has no value_type.
>>
>> Conversely, something like:
>>
>> auto // ???
>> concept RecursiveTransposable<typename Matrix>
>> {
>>     Matrix transpose(Matrix);
>> };
>>
>> Now we say this is defined for arithmetic scalars
>>
>> template<typename T>
>> where Arithmetic<T>
>> concept_map RecursiveTransposable<T> {}
>>
>> And now we enable it from the bottom up:
>>
>> template<typename T>
>> where RecursiveTransposable<T>
>> concept_map RecursiveTransposable<matrix<T> > {}
>>
>>
>> Extending this to block-matrices
>> (A B)
>> (C D)
>> where A, B, C, and D are matrices of different types, say we have a
>> type for heterogeneous blocks:
>>
>> template <typename MA, typename MB, typename MC, typename MD>
>> class hb_matrix { ... };
>>
>> We can define:
>> template <typename MA, typename MB, typename MC, typename MD>
>> where RecursiveTransposable<MA>
>> && RecursiveTransposable<MB>
>> && RecursiveTransposable<MC>
>> && RecursiveTransposable<MD>
>> concept_map RecursiveTransposable<hb_matrix<MA, MB, MC, MD> > {}
>>
>> However, these are only some preliminary thoughts. We need to look at
>> which algorithms we want to implement recursively. And it looks like
>> there can be a great difference between a syntactically correct
>> implementation and a high-performance implementation.
>>
>> Cheers,
>> Peter
>>
>> On 15.12.2006, at 05:49, plagne.laurent_at_[hidden] wrote:
>>
>>> Hi Karl,
>>>
>>> IMHO, matrix< matrix<..> > can be more or less difficult to implement
>>> depending on the constraints you accept. As an example, if you decide
>>> that the library must remain open to third-party codes and handle
>>> different kinds of matrix data storage: this is pretty easy to do
>>> with a simple matrix type (matrix<double>), because the return type
>>> of A(i,j) will be a simple ref to double (or const ref). In the 2D
>>> case, matrix< matrix<double> > A, the accessor operator A(i,j) will
>>> naturally be a ref to matrix<double>, and you lose the compatibility
>>> with third-party data elements...
>>> One can avoid this problem but it is not a trivial issue.
>>>
>>> Another point about the nested matrix issue is that it can help
>>> (again IMHO) to validate the very impressive work the GLAS team has
>>> done with vector space concepts (hi Peter). For example, the matrix
>>> counterparts of 0 and 1 have to be treated efficiently.
>>>
>>> A last point, maybe not such a general issue: in my case, I have to
>>> handle sparse matrices with a complicated but highly structured
>>> sparsity pattern. This pattern can be expressed with a reduced set of
>>> simple sparsity patterns, e.g.
>>> A = Dense<Sparse<Diagonal<..> > > + Sparse<Tridiagonal<Dense<..> > >
>>>
>>>
>>> Best
>>>
>>> Laurent
>>>
>>>
>>>> I think this is an important point. Toon and I have thought about
>>>> this for a long time. We will certainly talk about this in detail.
>>>> Providing the container matrix< matrix<T> > is not hard. I recall
>>>> that algorithms are application/interpretation dependent, and we had
>>>> a lively discussion on the list on this topic. So, I agree that
>>>> 'some' support is needed, although it is not yet clear to me what
>>>> 'some' here means.
>>>>
>>>> Best,
>>>
>>>
>>>
------------
Peter Gottschling
Research Associate
Open Systems Laboratory
Indiana University
135 Lindley Hall
Bloomington, IN 47405
Tel.: +1-812-855-3608 Fax: +1-812-856-0853
http://www.osl.iu.edu/~pgottsch