
Subject: Re: [ublas] Matrix multiplication performance
From: palik imre (imre_palik_at_[hidden])
Date: 2016-01-28 11:05:52


This microkernel works for both float and double (though not yet for the mixed case; I will do that tomorrow):

template <typename Index, typename T>
void ugemm(Index kc, T alpha,
           const T *A, const T *B,
           T beta,
           T *C, Index incRowC, Index incColC)
{
    static const Index MR = BlockSize<T>::MR;
    static const Index NR = BlockSize<T>::NR;
    // Number of elements per 256-bit (32-byte) vector register.
    static const unsigned div = 32/sizeof(T);
    // GCC vector extension: vx is a 32-byte SIMD vector of T.
    typedef T vx __attribute__((vector_size (32)));

    // Accumulator micro-tile: MR x NR elements held as MR*NR/div vectors.
    vx P[MR*NR/div] __attribute__ ((aligned (128))) = {};
    const vx *B_ = (const vx *)B;

    // Rank-kc update of the micro-tile: A is packed row-contiguous
    // (A[l + i*kc]), B is packed in NR-wide rows of whole vectors.
    for (Index i=0; i<MR; ++i) {
      for (Index l=0; l<kc; ++l) {
        for (Index j=0; j<(NR/div); ++j) {
          P[i * NR/div + j] += A[l + i*kc]*B_[l*(NR/div)+j];
        }
      }
    }

    // Scale C by beta and add alpha times the accumulated micro-tile.
    T *P_ = (T *)P;
    for (Index j=0; j<NR; ++j) {
      for (Index i=0; i<MR; ++i) {
        C[i*incRowC+j*incColC] *= beta;
        C[i*incRowC+j*incColC] += alpha*P_[i * NR + j];
      }
    }
}
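
For reference, the kernel assumes a BlockSize<T> trait along these lines (a sketch only; the MR/NR values below are illustrative, not necessarily the ones I benchmarked with):

// Hypothetical sketch of the BlockSize<T> trait the kernel relies on.
// MR x NR is the register-block (micro-tile) size; NR must be a multiple
// of 32/sizeof(T) so each micro-tile row fills whole 32-byte vectors.
template <typename T>
struct BlockSize;

template <>
struct BlockSize<double> {
    static const unsigned MR = 4;
    static const unsigned NR = 8;    // 8 doubles = 2 AVX registers per row
};

template <>
struct BlockSize<float> {
    static const unsigned MR = 4;
    static const unsigned NR = 16;   // 16 floats = 2 AVX registers per row
};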
$ g++ -Wall -W -std=gnu++11 -Ofast -march=core-avx2 -mtune=core-avx2 -funroll-loops -g -DNDEBUG -DM_TYPE=double -DM_MAX=1000 -I gemm -o matprod4 matprod4.cc
$ ./matprod4
#   m     n     k  uBLAS:   t1       MFLOPS   Blocked:   t2      MFLOPS        Diff nrm1
  100   100   100  0.000312898      6391.86     0.000232107      8616.72     6.23429e-08
  200   200   200   0.00402665      3973.53      0.00104265      15345.5     6.51317e-07
  300   300   300    0.0140405      3846.01      0.00266711      20246.6     2.22925e-06
  400   400   400    0.0281966      4539.55       0.0057186      22383.1     5.30851e-06
  500   500   500     0.044483      5620.13       0.0103163      24233.6     1.03773e-05
  600   600   600    0.0677454      6376.81       0.0169294      25517.7     1.78643e-05
  700   700   700     0.105874       6479.4       0.0260756      26308.1     2.82859e-05
  800   800   800     0.160001      6399.96       0.0387259      26442.3     4.21266e-05
  900   900   900     0.229306      6358.32       0.0563846      25858.1      5.9881e-05
 1000  1000  1000     0.315917      6330.77       0.0762954      26213.9     8.18716e-05
$ g++ -Wall -W -std=gnu++11 -Ofast -march=core-avx2 -mtune=core-avx2 -funroll-loops -g -DNDEBUG -DM_TYPE=float -DM_MAX=1000 -I gemm -o matprod4 matprod4.cc
$ ./matprod4
#   m     n     k  uBLAS:   t1       MFLOPS   Blocked:   t2      MFLOPS        Diff nrm1
  100   100   100  0.000214128      9340.21     0.000151747      13179.8         33.6624
  200   200   200   0.00113299        14122     0.000676544      23649.6         348.119
  300   300   300   0.00716625      7535.32      0.00167322        32273         1200.01
  400   400   400    0.0151467      8450.68      0.00371294        34474         2854.09
  500   500   500    0.0259993      9615.63      0.00656574      38076.5         5558.22
  600   600   600    0.0382291      11300.3       0.0110199      39201.9         9565.28
  700   700   700    0.0521324      13158.8       0.0159092      43119.7         15144.4
  800   800   800    0.0779286      13140.2       0.0242163      42285.6         22632.3
  900   900   900     0.105746      13787.8       0.0354541      41123.6         32059.1
 1000  1000  1000     0.151049      13240.7       0.0459311      43543.5         43898.1
Cheers,
Imre

    On Wednesday, 27 January 2016, 1:39, Michael Lehn <michael.lehn_at_[hidden]> wrote:
 

 
On 23 Jan 2016, at 18:53, palik imre <imre_palik_at_[hidden]> wrote:

Hi All,
What's next?  I mean, what is the development process for uBLAS?
Now we have a C-like implementation that outperforms both the mainline and the branch version (axpy_prod).  What will we do with that?
As far as I see we have the following options:
1) Create a C++ template magic implementation out of it.  But for this, at the least we would need compile-time access to the target instruction set.  Any idea how to do that?
2) Create a compiled library implementation out of it, and choose the implementation at run time based on the CPU capabilities.  (A sketch of both detection mechanisms follows this list.)

3) Include some good defaults/defines, and hope the user will use them.
4) Don't include it, and do something completely different.
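
For illustration, a minimal sketch of what options 1) and 2) could build on with GCC (the ugemm_* names are made up for the example): the predefined -march macros give compile-time access to the target instruction set, and __builtin_cpu_supports (GCC >= 4.8) gives run-time access to the CPU capabilities:

#include <cstdio>

// Option 1: compile-time selection.  With -march=core-avx2 GCC predefines
// __AVX2__, so the preprocessor can pick a kernel at build time.
#if defined(__AVX2__)
static const char *compiled_pick = "avx2";
#elif defined(__SSE2__)
static const char *compiled_pick = "sse2";
#else
static const char *compiled_pick = "generic";
#endif

// Option 2: run-time selection.  __builtin_cpu_supports queries the
// running CPU, so a dispatcher can pick a kernel once per process.
typedef void (*kernel_t)();
static void ugemm_avx2()    { std::puts("avx2 kernel"); }
static void ugemm_generic() { std::puts("generic kernel"); }

static kernel_t select_kernel()
{
    if (__builtin_cpu_supports("avx2"))
        return ugemm_avx2;
    return ugemm_generic;
}

int main()
{
    std::printf("compile-time pick: %s\n", compiled_pick);
    select_kernel()();  // run-time pick
}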

What do you think?

At the moment I am not certain, but pretty sure, that you don't have to rewrite uBLAS to get good performance.  uBLAS is a pretty fine library and already has what you need.  So just extend it.  At least for 383 lines of code :-)
As I was pretty busy the last few days, I could not continue until today.  I made some minimal modifications to adapt the GEMM implementation to take advantage of uBLAS:
- The GEMM frame algorithm and the pack functions now work with any uBLAS matrix that supports element access through the notation A(i,j).
- Only matrix C in the operation C <- beta*C + alpha*A*B is required to be stored row-major or col-major.  The other matrices can be expressions.  Here I need some help to get rid of these ugly lines of code (a standalone demonstration follows this list):
    TC *C_ = &C(0,0);
    const size_type incRowC = &C(1,0) - &C(0,0);
    const size_type incColC = &C(0,1) - &C(0,0);
- Things like the following should work without performance penalty (and in particular without creating temporaries):  blas::axpy_prod(3*A1+A2, B1 - B2, C, matprodUpdate)
- Matrices A and B can be symmetric or whatever.  Of course, if A or B is triangular, a specialized implementation can take advantage of that.
- The storage format does not matter.  You can mix row-major and col-major, packed, …
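
As an aside, here is a minimal standalone demonstration of why the pointer-difference trick above recovers the strides for both orientations (assuming dense ublas matrices with the default storage layouts):

#include <cstddef>
#include <iostream>
#include <boost/numeric/ublas/matrix.hpp>

namespace ublas = boost::numeric::ublas;

int main()
{
    // Dense storage keeps elements at regular distances, so pointer
    // differences between neighbouring elements recover the strides.
    ublas::matrix<double> C(4, 5);                       // row-major (default)
    ublas::matrix<double, ublas::column_major> D(4, 5);  // col-major

    std::ptrdiff_t incRowC = &C(1,0) - &C(0,0);  // 5 = number of columns
    std::ptrdiff_t incColC = &C(0,1) - &C(0,0);  // 1
    std::ptrdiff_t incRowD = &D(1,0) - &D(0,0);  // 1
    std::ptrdiff_t incColD = &D(0,1) - &D(0,0);  // 4 = number of rows

    std::cout << incRowC << ' ' << incColC << ' '
              << incRowD << ' ' << incColD << '\n';
}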
Here is the page for this:
 http://www.mathematik.uni-ulm.de/~lehn/test_ublas/session4/page01.html
Please note, creating the benchmarks for the symmetric matrix-matrix product really takes long, because the current uBLAS implementation seems to be much slower than for general matrices.  So I reduced the problem size for now.  It would be helpful if some people could reproduce the benchmarks and also check different cases:
- different expressions
- different element types, e.g. A with float, B with double, etc.  At the moment there is only a micro kernel for double.  The performance depends on the common type of A and B, so with complex<..> the reference implementation gets used.  But the performance should at least be constant in this case.
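
As a side note on "common type": for mixed element types the accumulation type could be derived with std::common_type, e.g. (illustrative only, not the code on the page above):

#include <type_traits>

// Illustrative only: float combined with double promotes to double, so a
// mixed float/double product could run on the double micro kernel, while
// complex element types (no micro kernel yet) fall back to the reference
// implementation.
static_assert(std::is_same<std::common_type<float, double>::type,
                           double>::value,
              "mixed float/double GEMM accumulates in double");

int main() { return 0; }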
It would in particular be nice to have benchmarks from people with an Intel Sandy Bridge or Haswell, as the micro kernels are optimized for these architectures.  If interested, I can extend the benchmarks to compare with Intel MKL.  For Linux there is a free non-commercial version available.

Cheers,
Michael