Subject: Re: [boost] [gsoc18][ublas] Proposal to add advanced matrix operations
From: SHIKHAR SRIVASTAVA (shikharsri1996_at_[hidden])
Date: 2018-01-20 21:30:31
Thank you for the insight. Given those metrics, it surely looks like
implementing those operations in ublas won't do much good.
I will look into the BLAS/LAPACK backend for ublas and look for a
proposal that can be completed in the given GSoC time frame.
Then again, there is this question: is there any mentor available for this
project who can refine some of the requirements?
On 21-Jan-2018 12:36 AM, "Artyom Beilis via Boost" <boost_at_[hidden]> wrote:
> On Fri, Jan 19, 2018 at 8:37 AM, SHIKHAR SRIVASTAVA via Boost
> <boost_at_[hidden]> wrote:
> > Hi everyone,
> > I am a 4th-year undergraduate student pursuing a degree in Computer Science
> > and Engineering. I have strong programming experience in C++ through
> > internships, personal projects, and programming events. I wish to be a part of
> > GSoC 2018 under Boost and am particularly interested in the linear algebra
> > library Boost.ublas.
> > The ublas library can be made more useful for Machine Learning applications
> > like recommendation systems, clustering and classification, and pattern
> > recognition by adding some of the operations they require.
> > I propose to add advanced matrix operations to ublas including -
> > 1. Triangular Factorisation (LU and Cholesky)
> > 2. Orthogonal Factorisation (QR and QL)
> > 3. Operations to find Singular Value lists
> > 4. Eigenvalue algorithms
> > 5. Singular Value Decomposition (SVD)
> > 6. Jordan Decomposition
> > 7. Schur Decomposition
> > 8. Hessenberg Decomposition
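To make the proposal concrete, here is a minimal sketch of the first item on the list, a Cholesky factorization (A = L * L^T) of a dense row-major symmetric positive-definite matrix. The function name, storage layout, and error handling are illustrative assumptions, not existing uBlas API:

```cpp
#include <cmath>
#include <vector>

// Minimal Cholesky factorization sketch: A = L * L^T for a dense
// row-major n x n symmetric positive-definite matrix A.
// Illustrative only; names and layout are assumptions, not ublas API.
// Returns false if A is not positive definite.
bool cholesky(const std::vector<double>& A, std::vector<double>& L,
              std::size_t n) {
    L.assign(n * n, 0.0);
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j <= i; ++j) {
            // Accumulate A(i,j) minus the dot product of rows i and j of L.
            double sum = A[i * n + j];
            for (std::size_t k = 0; k < j; ++k)
                sum -= L[i * n + k] * L[j * n + k];
            if (i == j) {
                if (sum <= 0.0) return false;  // not positive definite
                L[i * n + j] = std::sqrt(sum);
            } else {
                L[i * n + j] = sum / L[j * n + j];
            }
        }
    }
    return true;
}
```

An optimized library would of course block this loop nest and call into BLAS level-3 routines, which is exactly the performance point raised in the reply below.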
> I'm sorry to disappoint you, but uBlas is nowhere near a useful library
> for real-world machine learning applications, because it is exceptionally
> slow in comparison to "real" BLAS libraries used for such applications,
> like OpenBLAS, ATLAS, or the proprietary MKL.
> They all give you what you are talking about, they are tested
> very well, and they are exceptionally fast.
> I mean uBlas is 2-3 orders of magnitude slower than OpenBLAS or
> ATLAS, even for small matrices:
> 8x8 GEMM - uBlas is 50 times slower than OpenBLAS and 30 times slower
> than ATLAS.
> 128x128 GEMM - uBlas is 600 times slower than OpenBLAS and 50 times
> slower than ATLAS.
> So I don't think investing in the implementation of algorithms that are
> already implemented in LAPACK libraries, with way better performance,
> would actually be helpful for real-world applications.
> What you CAN do is provide a *BLAS/LAPACK-based backend* for uBlas...
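The backend idea suggested above could be sketched roughly as follows: a wrapper that dispatches to an optimized BLAS (here CBLAS's real `cblas_dgemm`) when the build provides one, and falls back to a portable naive loop otherwise. The macro name, wrapper name, and row-major n x n restriction are all hypothetical assumptions for illustration, not an existing uBlas interface:

```cpp
#include <cstddef>

#ifdef UBLAS_HAVE_CBLAS
#include <cblas.h>  // provided by OpenBLAS, ATLAS, MKL, etc.
#endif

// Hypothetical backend dispatch sketch: C = A * B for row-major
// n x n matrices. UBLAS_HAVE_CBLAS is an assumed build flag, not a
// real uBlas macro.
void gemm(const double* A, const double* B, double* C, std::size_t n) {
#ifdef UBLAS_HAVE_CBLAS
    // Delegate to the optimized BLAS implementation.
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);
#else
    // Portable fallback: naive triple loop.
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            double acc = 0.0;
            for (std::size_t k = 0; k < n; ++k)
                acc += A[i * n + k] * B[k * n + j];
            C[i * n + j] = acc;
        }
#endif
}
```

With this kind of seam, uBlas expression templates could keep their current interface while the heavy kernels (GEMM, and the LAPACK factorizations discussed earlier) run at native BLAS speed when a backend is linked in.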
> Unsubscribe & other changes: http://lists.boost.org/