Xiss,

It seems you are having some VC compilation trouble there, and besides that the links I sent you are a bit outdated (they need only a few changes to run with a newer version of uBLAS). Anyway, this example with a vector of vectors (you are right, I meant vector of vectors) takes about 0.09 secs for a matrix size of 10000 x 10000. The same problem running on a compressed matrix takes 44 secs (a huge difference).

> I gave a try to generalized vector of coordinate and compressed matrices and they performed a little
> worse than a compressed_matrix. And by "using a generalized vector of coordinate matrices to initially
> assemble the matrix (by basically changing the type you have) and then use that to fill (by using push_back)
> a compressed matrix" you mean using a generalized vector of vectors to assemble the matrix (in the loop with +=s)
> and then push_back all elements into a compressed_matrix? I didn't try it because of course it'll be worse, since without the push_backs it's already slow.

As in the example in the links, creating a vector of vectors is much faster, and just pushing the results into a compressed matrix afterwards (assuming you need that underlying structure for a solver) will still be way faster than using plus-assign (+=) on the compressed matrix.

Since I am not using MSVC lately, is there somebody who can confirm the compilation error?

Best
Nasos


Date: Tue, 20 Apr 2010 15:18:40 -0300
From: xissburg@gmail.com
To: ublas@lists.boost.org
Subject: Re: [ublas] Element-wise operations are really slow

Hi Nasos,

Thanks for your time. That sparse-fill sample does the same thing I'm doing: a loop with +=s. I can't get that code to compile; I always get the error:
    E:\Boost\boost_1_42_0\boost/numeric/ublas/vector_of_vector.hpp(301): error C2668: 'boost::numeric::ublas::ref' : ambiguous call to overloaded function

I gave a try to generalized vector of coordinate and compressed matrices and they performed a little worse than a compressed_matrix. And by "using a generalized vector of coordinate matrices to initially assemble the matrix (by basically changing the type you have) and then use that to fill (by using push_back) a compressed matrix" you mean using a generalized vector of vectors to assemble the matrix (in the loop with +=s) and then push_back all elements into a compressed_matrix? I didn't try it because of course it'll be worse, since without the push_backs it's already slow.

I guess there must be something wrong for this operation to be so slow, so there must be a solution that won't require changing much of the current code. I'm quite desperate to find a solution to this problem...

Thanks again,


x

On Tue, Apr 20, 2010 at 8:57 AM, Nasos Iliopoulos <nasos_i@hotmail.com> wrote:
Hello Xiss,

maybe the examples on sparse fill can help you with that. Take a look at: http://www.guwi17.de/ublas/matrix_sparse_usage.html. It is probable that using a generalized vector of coordinate matrices to initially assemble the matrix (by basically changing the type you have) and then using that to fill a compressed matrix (by using push_back) will make it a lot faster. If you don't need the structure of the compressed matrix, you can skip the last step.

If you are short on memory, another idea that might work (although it is algorithmically harder and I haven't really tested it) is to define a vector of matrix blocks (each matrix being maybe 2*band x 2*band), which you use to assemble the stiffness matrix, and then progressively push them back into the compressed matrix, like:
[B1 C1  0  0 ]
[A2 B2 C2 0 ]
[0 A3 B3 C3]
[0  0  A4 B4]
neglecting the zero entries. The "B" blocks will be diagonal blocks, the "A" blocks will be at least upper triangular, and the "C" blocks will be at least lower triangular ("at least" meaning their diagonal elements may be zero). This would need some custom algorithms to navigate the blocks, though.

Best
Nasos


Date: Mon, 19 Apr 2010 00:14:36 -0300
From: xissburg@gmail.com
To: ublas@lists.boost.org
Subject: [ublas] Element-wise operations are really slow


In my algorithms I have to read/write from/to each individual element of  a matrix, and this is making my application really slow. More specifically, I'm assembling a stiffness matrix in Finite Element Method. The code is like this:

    // Scatter each tetrahedron's 12x12 element stiffness into the global matrices.
    for(std::size_t i=0; i<m_tetrahedrons.size(); ++i)
    {
        btTetrahedron* t = m_tetrahedrons[i];
        t->computeCorotatedStiffness();

        for(unsigned int j=0; j<4; ++j)
            for(unsigned int k=0; k<4; ++k)
            {
                unsigned int jj = t->getNodeIndex(j);
                unsigned int kk = t->getNodeIndex(k);

                for(unsigned int r=0; r<3; ++r)
                    for(unsigned int s=0; s<3; ++s)
                    {
                        m_RKR_1(3*jj+r, 3*kk+s) += t->getCorotatedStiffness0(3*j+r, 3*k+s);
                        m_RK(3*jj+r, 3*kk+s) += t->getCorotatedStiffness1(3*j+r, 3*k+s);
                    }
            }
    }

Where m_RKR_1 and m_RK are both compressed_matrix<float>, and t->getCorotatedStiffness0/1 just returns the (i,j) element of a 12x12 compressed_matrix<float>. If I don't compute the co-rotated matrices (basically, by commenting out the element-wise operations in that code), the simulation still works, though incorrectly (linear strain only), but very fast. Whenever I turn the co-rotational stuff on it gets terribly slow, and those element-wise operations are to blame.

What am I doing wrong? Is there any faster technique to do that? Well, there must be one...


Thanks in advance.



_______________________________________________
ublas mailing list
ublas@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/ublas
Sent to: xissburg@gmail.com

