Subject: Re: [boost] [gsoc18][ublas] Proposal to add advanced matrix operations
From: Artyom Beilis (artyom.beilis_at_[hidden])
Date: 2018-01-20 19:06:26
On Fri, Jan 19, 2018 at 8:37 AM, SHIKHAR SRIVASTAVA via Boost
<boost_at_[hidden]> wrote:
> Hi everyone,
>
> I am a 4th-year undergraduate student pursuing a degree in Computer Science
> and Engineering. I have strong programming experience in C++ from
> internships, personal projects, and programming events. I wish to be a part of
> GSoC 2018 under Boost and am particularly interested in the linear algebra
> library Boost.uBLAS.
>
> The uBLAS library could be made more useful for machine learning applications
> such as recommendation systems, clustering, classification, and pattern
> recognition by adding the operations those applications require.
> I propose to add advanced matrix operations to ublas including -
>
> 1. Triangular Factorisation (LU and Cholesky)
> 2. Orthogonal Factorisation (QR and QL)
> 3. Operations to find Singular Value lists
> 4. Eigenvalue algorithms
> 5. Singular Value Decomposition (SVD)
> 6. Jordan Decomposition
> 7. Schur Decomposition
> 8. Hessenberg Decomposition
>
>
Hello,

I'm sorry to disappoint you, but uBLAS is nowhere near being a useful library
for real-world machine learning applications, because it is exceptionally slow
in comparison to the "real" BLAS libraries used for such applications, like
OpenBLAS, ATLAS, or the proprietary MKL. They already give you what you are
talking about, they are very well tested, and they are exceptionally fast.

What I mean is that uBLAS is two to three orders of magnitude slower than
OpenBLAS or ATLAS, even for small matrices:

8x8 GEMM - uBLAS is 50 times slower than OpenBLAS and 30 times slower than ATLAS.
128x128 GEMM - uBLAS is 600 times slower than OpenBLAS and 50 times slower than ATLAS.
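
For reference, here is a minimal sketch of the uBLAS side of such a comparison
(assuming only a standard Boost installation; the matrix contents and size are
illustrative, and the OpenBLAS/ATLAS side would run cblas_dgemm on the same data):

#include <boost/numeric/ublas/matrix.hpp>
#include <chrono>
#include <iostream>

int main()
{
    namespace ublas = boost::numeric::ublas;
    const std::size_t n = 128;

    // Fill two dense matrices with arbitrary values.
    ublas::matrix<double> a(n, n), b(n, n), c(n, n);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            a(i, j) = 1.0 + i;
            b(i, j) = 2.0 + j;
        }

    // Time C = A * B using uBLAS's expression-template prod().
    auto start = std::chrono::steady_clock::now();
    ublas::noalias(c) = ublas::prod(a, b);
    auto stop = std::chrono::steady_clock::now();

    std::cout << "uBLAS " << n << "x" << n << " GEMM took "
              << std::chrono::duration<double, std::milli>(stop - start).count()
              << " ms\n";
    return 0;
}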
So I don't think investing in implementations of algorithms that are already
provided by LAPACK libraries, with far better performance, would actually be
helpful for real-world applications.

What you CAN do is provide a BLAS/LAPACK-based backend for uBLAS...
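
As a rough illustration of what such a backend could look like (only a sketch;
the function name ublas_blas_gemm is hypothetical and not part of uBLAS or any
existing binding): a dense row-major ublas::matrix<double> keeps its elements
in one contiguous buffer, so it can be handed directly to cblas_dgemm from
OpenBLAS, ATLAS, or MKL.

#include <boost/numeric/ublas/matrix.hpp>
#include <cblas.h>   // CBLAS interface provided by OpenBLAS / ATLAS / MKL

namespace ublas = boost::numeric::ublas;

// Hypothetical backend entry point: C = A * B computed by the optimized BLAS
// instead of ublas::prod().  Assumes the default row-major, dense
// unbounded_array storage of ublas::matrix<double>.
void ublas_blas_gemm(const ublas::matrix<double>& a,
                     const ublas::matrix<double>& b,
                     ublas::matrix<double>& c)
{
    const int m = static_cast<int>(a.size1());
    const int k = static_cast<int>(a.size2());
    const int n = static_cast<int>(b.size2());
    c.resize(m, n, false);

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k,
                1.0, &a.data()[0], k,
                     &b.data()[0], n,
                0.0, &c.data()[0], n);
}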
Regards,
Artyom