Subject: Re: [boost] Reply to questions about FFT template class
From: Nathan Bliss (nathanbliss_at_[hidden])
Date: 2013-05-29 06:47:40
Just to answer the previous questions about my FFT code...
I'm afraid it's only a basic FFT implementation. It may not be up to Boost's standard, and I was always prepared for that; if so, I may submit it to a less prestigious open-source forum somewhere on the web instead.
I made a mistake earlier about the precision: mine agrees with the FFTW 2048-point results to a maximum error of 3 parts per million (the largest difference from an FFTW value is 2.53E-06). Most of the outputs show zero or infinitesimal differences. I've checked the math many times, and I can only think the remaining differences come from performance-related approximations in FFTW, as they seem too small to be algorithmic errors.
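For clarity, the error figure comes from a comparison essentially like the following (a minimal sketch; `max_diff_ppm` and the array names are illustrative, not part of my actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Maximum absolute difference between two FFT outputs, expressed in
// parts per million of the largest reference-output magnitude.
double max_diff_ppm(const std::vector<std::complex<double> >& mine,
                    const std::vector<std::complex<double> >& ref)
{
    double max_diff = 0.0, max_mag = 0.0;
    for (std::size_t i = 0; i < mine.size(); ++i) {
        max_diff = std::max(max_diff, std::abs(mine[i] - ref[i]));
        max_mag  = std::max(max_mag,  std::abs(ref[i]));
    }
    // Guard against an all-zero reference output.
    return max_mag > 0.0 ? 1e6 * max_diff / max_mag : 0.0;
}
```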
Unfortunately, the FFTW implementation is ~20 times faster than mine: FFTW runs a 2048-point FFT in 6 ms on my Ubuntu installation, whereas mine takes approx. 140 ms. On Windows, both implementations are an order of magnitude slower (approx. 5 s for mine versus 250 ms for FFTW).
The Boost multi-threading appears to make no difference to the speed, even though I've verified with GDB and with printfs that my code automatically spawns the optimal number of threads, up to the maximum given as the template parameter.
Some compilers may be able to optimise the C++ code and the multi-threading to increase performance, though I do not know whether it could ever approach FFTW's.
I have not implemented more advanced FFT features such as multi-dimensional FFTs or real-value-only optimisations, but I think the current API could let users extend it to include forward/reverse FFT, bit-reversal, multi-dimensional FFTs (by manipulating the input and output vectors), etc. I've tried to make the code well organised, structured, and commented so that users could customise it with their own optimisations for specific processor architectures.
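To make the "decimation-in-time radix-2" part concrete: stripped of the class wrapper and the threading, the core of such an FFT is the standard in-place iterative form, roughly like this (a textbook sketch, not my submitted API):

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <utility>
#include <vector>

// In-place iterative radix-2 decimation-in-time FFT.
// a.size() must be a power of two.
void fft(std::vector<std::complex<double> >& a)
{
    const std::size_t n = a.size();
    const double pi = std::acos(-1.0);

    // Bit-reversal permutation: element i swaps with its bit-reversed index.
    for (std::size_t i = 1, j = 0; i < n; ++i) {
        std::size_t bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) std::swap(a[i], a[j]);
    }

    // Butterfly stages: the span doubles each pass (2, 4, ..., n).
    for (std::size_t len = 2; len <= n; len <<= 1) {
        const std::complex<double> wlen(std::cos(-2 * pi / (double)len),
                                        std::sin(-2 * pi / (double)len));
        for (std::size_t i = 0; i < n; i += len) {
            std::complex<double> w(1.0);
            for (std::size_t k = 0; k < len / 2; ++k) {
                std::complex<double> u = a[i + k];
                std::complex<double> v = a[i + k + len / 2] * w;
                a[i + k]           = u + v;
                a[i + k + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
}
```

The bit-reversal pass is exactly the piece that could be factored out for reuse, and the inverse transform follows by conjugating the twiddle factors and scaling by 1/n.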
I'll understand if the consensus is that it is not good enough for Boost. Alternatively, I would be happy to share my code, and the credit, if anyone else wants to help.
Kind regards,
Nathan
> From: boost-request_at_[hidden]
> Subject: Boost Digest, Vol 4012, Issue 1
> To: boost_at_[hidden]
> Date: Wed, 29 May 2013 05:38:51 -0400
>
> Send Boost mailing list submissions to
> boost_at_[hidden]
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.boost.org/mailman/listinfo.cgi/boost
> or, via email, send a message with subject or body 'help' to
> boost-request_at_[hidden]
>
> You can reach the person managing the list at
> boost-owner_at_[hidden]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Boost digest..."
>
>
> The boost archives may be found at: http://lists.boost.org/Archives/boost/
>
> Today's Topics:
>
> 1. Re: SIMD implementation of uBLAS (Aditya Avinash)
> 2. Re: SIMD implementation of uBLAS (Karsten Ahnert)
> 3. Re: Request to contribute boost::FFT (Karsten Ahnert)
> 4. Re: SIMD implementation of uBLAS (Aditya Avinash)
> 5. Boost on ARM using NEON (Aditya Avinash)
> 6. Re: Boost on ARM using NEON (Antony Polukhin)
> 7. Re: Boost on ARM using NEON (Aditya Avinash)
> 8. Re: Boost on ARM using NEON (Tim Blechmann)
> 9. Re: Request to contribute boost::FFT (Paul A. Bristow)
> 10. Re: Boost on ARM using NEON (Victor Hiairrassary)
> 11. Re: Boost on ARM using NEON (Andrey Semashev)
> 12. Re: Boost on ARM using NEON (Tim Blechmann)
> 13. Re: Boost on ARM using NEON (Andrey Semashev)
> 14. Re: Boost on ARM using NEON (David Bellot)
> 15. Re: SIMD implementation of uBLAS (Rob Stewart)
> 16. Re: SIMD implementation of uBLAS (Mathias Gaunard)
> 17. Re: SIMD implementation of uBLAS (Aditya Avinash)
> 18. Re: Request to contribute boost::FFT (Mathias Gaunard)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 29 May 2013 11:03:50 +0530
> From: Aditya Avinash <adityaavinash143_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID:
> <CABocMVpiKTED=tPZ4QtXBZAVdbwHMv+26LE+YXx9u2+HyWAFag_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> @Gaetano: Thank you for the comments. I'll change the code accordingly and
> post it back. I am using T because the code needs to run with
> double-precision floats as well.
> @Joel: Boost.SIMD is generalized; designing algorithms specific to
> uBLAS increases the performance. odeint has its own SIMD backend.
>
> On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou_at_[hidden]> wrote:
>
> > On 29/05/2013 06:45, Gaetano Mendola wrote:
> >
> >> On 29/05/2013 06.13, Aditya Avinash wrote:
> >>
> >>> Hi, i have developed vector addition algorithm which exploits the
> >>> hardware
> >>> parallelism (SSE implementation).
> >>>
> >>
> >> A few comments:
> >>
> >> - That is not C++, just C in the disguise of C++ code
> >> . the SSE1 CTOR doesn't use an initialization list
> >> . SSE1 doesn't have a DTOR, so the user has to
> >> explicitly call the Free method
> >>
> >> - const-correctness is not in place
> >> - The SSE namespace should have been put in a "detail"
> >> namespace
> >> - Use memcpy instead of an explicit for loop
> >> - Why is SSE1 a template when it only works when T is a
> >> single-precision floating-point value?
> >>
> >>
> >> Also I believe a nicer interface would have been:
> >>
> >> SSE1::vector A(1024);
> >> SSE1::vector B(1024);
> >> SSE1::vector C(1024);
> >>
> >> C = A + B;
> >>
> >>
> >> Regards
> >> Gaetano Mendola
> >>
> >>
> > See our work on Boost.SIMD ...
> >
> >
> >
> > ______________________________**_________________
> > Unsubscribe & other changes: http://lists.boost.org/**
> > mailman/listinfo.cgi/boost<http://lists.boost.org/mailman/listinfo.cgi/boost>
> >
>
>
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 29 May 2013 08:27:40 +0200
> From: Karsten Ahnert <karsten.ahnert_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID: <51A59FDC.5050409_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 05/29/2013 07:33 AM, Aditya Avinash wrote:
> > @Gaetano: Thank you for the comments. I'll change accordingly and post it
> > back. I am using T because, the code need to run double precision float
> > also.
> > @Joel: The Boost.SIMD is generalized. Designing algorithms specific to
> > uBLAS increases the performance. Odeint have their own simd backend.
>
> odeint has no SIMD backend; at least I am not aware of one.
> Having one would be really great.
>
>
> > [snip quoted text]
>
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 29 May 2013 08:32:15 +0200
> From: Karsten Ahnert <karsten.ahnert_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] Request to contribute boost::FFT
> Message-ID: <51A5A0EF.9000806_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 05/29/2013 12:12 AM, Nathan Bliss wrote:
> > Dear Boost Community Members,
> > I am writing to ask if I could contribute a C++ FFT implementation to the Boost library. I noticed that there is a boost::CRC, so I thought boost::FFT would be a good addition as well. MIT has an existing open-source FFT library, FFTW, but it is GPL-licensed, which is much more restrictive than the Boost license, and I should imagine a commercial FFTW license would be very expensive.
> > I have working FFT code which I have tested with GNU gcc on Ubuntu Linux and with Visual Studio Express 2012 (MS VC++) on Windows. When run as a 2048-point FFT, my code agrees with the MIT FFTW one to a maximum error margin of 6 parts per million.
>
> I would really like to see an FFT implementation in boost. It should of
> course focus on performance. I also think that interoperability with
> different vector and storage types would be really great: it spares the
> user from having to convert their data between different formats (which
> can be really painful when using several libraries together, for example
> odeint, ublas, fftw, ...).
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 29 May 2013 12:05:24 +0530
> From: Aditya Avinash <adityaavinash143_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID:
> <CABocMVpBesKaSTznHmKACxNpPUwYtTs50vUaWeZo5qUUaK9K+Q_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> I'm sorry, my bad: it's Boost.SIMD. Why isn't it included in Boost? I have
> only heard about it recently. Is there a chance that it will be added to
> Boost in the near future?
>
> On Wed, May 29, 2013 at 11:57 AM, Karsten Ahnert <
> karsten.ahnert_at_[hidden]> wrote:
>
> > On 05/29/2013 07:33 AM, Aditya Avinash wrote:
> >
> >> @Gaetano: Thank you for the comments. I'll change accordingly and post it
> >> back. I am using T because, the code need to run double precision float
> >> also.
> >> @Joel: The Boost.SIMD is generalized. Designing algorithms specific to
> >> uBLAS increases the performance. Odeint have their own simd backend.
> >>
> >
> > odeint has no simd backend, At least i am not aware of an simd backend.
> > Having one would be really great.
> >
> > [snip quoted text and list footer]
>
>
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 29 May 2013 12:19:39 +0530
> From: Aditya Avinash <adityaavinash143_at_[hidden]>
> To: boost_at_[hidden]
> Subject: [boost] Boost on ARM using NEON
> Message-ID:
> <CABocMVpQqBGqduE6jj-KG=O5faV08GcoV=TOS_bXFLYWTkzKjA_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi, I want to develop a boost.arm or boost.neon library so that Boost is
> implemented on ARM.
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 6
> Date: Wed, 29 May 2013 11:45:12 +0400
> From: Antony Polukhin <antoshkka_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAKqmYPbt+wq92BvLLn-5GUhFfmvWiWsAoqfpXK6XOmnRK7EtwQ_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> 2013/5/29 Aditya Avinash <adityaavinash143_at_[hidden]>:
> > Hi, i want to develop boost.arm or boost.neon library so that boost is
> > implemented on ARM.
>
> Hi,
>
> Boost already works well on ARM, so nothing really needs to be developed.
> But if you are talking about SIMD for ARM, then you should take a look
> at Boost.SIMD and perhaps offer the library developers your help.
>
>
> --
> Best regards,
> Antony Polukhin
>
>
> ------------------------------
>
> Message: 7
> Date: Wed, 29 May 2013 13:20:15 +0530
> From: Aditya Avinash <adityaavinash143_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CABocMVo_DbCaxRPDOK4_1KMw8m7O54gXKgOqX+cmYRcBndNYDA_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Thank you!
> Can I develop a new kernel for uBLAS using NEON?
>
> On Wed, May 29, 2013 at 1:15 PM, Antony Polukhin <antoshkka_at_[hidden]>wrote:
>
> > 2013/5/29 Aditya Avinash <adityaavinash143_at_[hidden]>:
> > > Hi, i want to develop boost.arm or boost.neon library so that boost is
> > > implemented on ARM.
> >
> > Hi,
> >
> > Boost works well on arm, so nothing should be really developed.
> > But if you ate talking about SIMD for ARM, that you shall take a look
> > at Boost.SIMD and maybe propose library developer your help.
> >
> >
> > --
> > Best regards,
> > Antony Polukhin
> >
> > _______________________________________________
> > Unsubscribe & other changes:
> > http://lists.boost.org/mailman/listinfo.cgi/boost
> >
>
>
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 8
> Date: Wed, 29 May 2013 09:52:57 +0200
> From: Tim Blechmann <tim_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID: <51A5B3D9.2010807_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> >> Hi, i want to develop boost.arm or boost.neon library so that boost is
> >> implemented on ARM.
> >
> > Hi,
> >
> > Boost works well on arm, so nothing should be really developed.
> > But if you ate talking about SIMD for ARM, that you shall take a look
> > at Boost.SIMD and maybe propose library developer your help.
>
> from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at
> this stage whether NEON will be provided as an open-source module"
>
> tim
>
>
>
> ------------------------------
>
> Message: 9
> Date: Wed, 29 May 2013 08:55:41 +0100
> From: "Paul A. Bristow" <pbristow_at_[hidden]>
> To: <boost_at_[hidden]>
> Subject: Re: [boost] Request to contribute boost::FFT
> Message-ID: <001f01ce5c41$ea88da30$bf9a8e90$@hetp.u-net.com>
> Content-Type: text/plain; charset="us-ascii"
>
> > -----Original Message-----
> > From: Boost [mailto:boost-bounces_at_[hidden]] On Behalf Of Nathan Bliss
> > Sent: Tuesday, May 28, 2013 11:12 PM
> > To: boost_at_[hidden]
> > Subject: [boost] Request to contribute boost::FFT
> >
> > I am writing to ask if I could contribute a C++ FFT implementation to the boost library.
>
> Definitely, a good templated C++ FFT would be very welcome.
>
> Would/does it work with Boost.Multiprecision to give much higher precision ? (at a snail's pace of
> course).
>
> Paul
>
> ---
> Paul A. Bristow,
> Prizet Farmhouse, Kendal LA8 8AB UK
> +44 1539 561830 07714330204
> pbristow_at_[hidden]
>
>
>
>
>
>
>
>
>
>
>
>
> ------------------------------
>
> Message: 10
> Date: Wed, 29 May 2013 09:15:36 +0200
> From: Victor Hiairrassary <victor.hiairrassary.ml_at_[hidden]>
> To: "boost_at_[hidden]" <boost_at_[hidden]>
> Cc: "boost_at_[hidden]" <boost_at_[hidden]>
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID: <22F316D0-E311-4BE3-94FB-11DE688C9CAB_at_[hidden]>
> Content-Type: text/plain; charset=us-ascii
>
> Boost already works very well on ARM!
>
> If you want to use the Neon extension, look at Boost.SIMD (I do not know if Neon is implemented yet; feel free to do it!).
>
> https://github.com/MetaScale/nt2
>
> On 29 mai 2013, at 08:49, Aditya Avinash <adityaavinash143_at_[hidden]> wrote:
>
> > Hi, i want to develop boost.arm or boost.neon library so that boost is
> > implemented on ARM.
> >
> > --
> > ----------------
> > Atluri Aditya Avinash,
> > India.
> >
> > _______________________________________________
> > Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>
>
> ------------------------------
>
> Message: 11
> Date: Wed, 29 May 2013 12:11:31 +0400
> From: Andrey Semashev <andrey.semashev_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAEhD+6BfavRaGgok9eMbDebCP1Hdg2T3Z3be7BWZTAZ1EDpfzg_at_[hidden]>
> Content-Type: text/plain; charset=UTF-8
>
> On Wed, May 29, 2013 at 11:52 AM, Tim Blechmann <tim_at_[hidden]> wrote:
>
> > >> Hi, i want to develop boost.arm or boost.neon library so that boost is
> > >> implemented on ARM.
> > >
> > > Hi,
> > >
> > > Boost works well on arm, so nothing should be really developed.
> > > But if you ate talking about SIMD for ARM, that you shall take a look
> > > at Boost.SIMD and maybe propose library developer your help.
> >
> > from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at
> > this stage whether NEON will be provided as an open-source module"
> >
>
> Even if it's not openly provided by developers of NT2, nothing prevents you
> from implementing it yourself.
>
>
> ------------------------------
>
> Message: 12
> Date: Wed, 29 May 2013 10:37:48 +0200
> From: Tim Blechmann <tim_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID: <51A5BE5C.6000705_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> >>>> Hi, i want to develop boost.arm or boost.neon library so that boost is
> >>>> implemented on ARM.
> >>>
> >>> Hi,
> >>>
> >>> Boost works well on arm, so nothing should be really developed.
> >>> But if you ate talking about SIMD for ARM, that you shall take a look
> >>> at Boost.SIMD and maybe propose library developer your help.
> >>
> >> from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at
> >> this stage whether NEON will be provided as an open-source module"
> >
> > Even if it's not openly provided by developers of NT2, nothing prevents you
> > from implementing it yourself.
>
> yes and no ... if the nt2 devs submit boost.simd to become an official
> part of boost, the question is whether they'd merge independently
> developed arm/neon support if it conflicts with their business
> interests ... the situation is a bit unfortunate ...
>
> tim
>
>
> ------------------------------
>
> Message: 13
> Date: Wed, 29 May 2013 12:52:29 +0400
> From: Andrey Semashev <andrey.semashev_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAEhD+6D8h43AyTvg65ntZZm804R9p0fiOtARPp2G8cOJC64zog_at_[hidden]>
> Content-Type: text/plain; charset=UTF-8
>
> On Wed, May 29, 2013 at 12:37 PM, Tim Blechmann <tim_at_[hidden]> wrote:
>
> > >>>> Hi, i want to develop boost.arm or boost.neon library so that boost is
> > >>>> implemented on ARM.
> > >>>
> > >>> Hi,
> > >>>
> > >>> Boost works well on arm, so nothing should be really developed.
> > >>> But if you ate talking about SIMD for ARM, that you shall take a look
> > >>> at Boost.SIMD and maybe propose library developer your help.
> > >>
> > >> from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure
> > at
> > >> this stage whether NEON will be provided as an open-source module"
> > >
> > > Even if it's not openly provided by developers of NT2, nothing prevents
> > you
> > > from implementing it yourself.
> >
> > yes and no ... if the nt2 devs submit boost.simd to become an official
> > part of boost, it is the question if they'd merge an independently
> > developed arm/neon support, if it conflicts with their business
> > interests ... the situation is a bit unfortunate ...
> >
>
> I realize that it may be inconvenient for them to expose their
> implementation of a NEON module (if there is one) for various reasons. But
> as long as Boost.SIMD is licensed under the BSL, anyone who likes can use
> and improve this code, even if it means implementing functionality similar
> to the proprietary solution.
>
> It doesn't necessarily mean that the open solution will take the market
> share of the proprietary one.
>
>
> ------------------------------
>
> Message: 14
> Date: Wed, 29 May 2013 09:59:04 +0100
> From: David Bellot <david.bellot_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAOE6ZJEVebMZF90SHC_yHNxmBWy=uC3K1OFokcPrF2RM7=uYrQ_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> again, as I said, you develop and propose a patch to the ublas
> mailing list. We're happy to see contributions from anybody.
>
> Now I must say that we are interested in Neon instructions for ublas.
> It has been on the todo list for quite a long time too:
> http://ublas.sf.net
>
> What I want to say is that, apart from the 2 GSoC students and myself
> (general maintenance, official releases), nobody has a specific task
> assigned to them.
>
> So if you want to contribute, you just work on it and talk about it on
> the mailing list so that people can be involved and help you.
>
> If, little by little, you contribute amazing ARM Neon code, then
> people will naturally take for granted that you are the ARM Neon
> specialist for ublas. As simple as that.
>
> If someone comes with a better code than you then we will choose the
> other code. If you come with a better code than someone else, then we
> will choose your code.
>
> So please, contribute.
>
>
> Are you testing your code on a specific machine or a virtual one?
> What about things like the Raspberry Pi? I'd like to see benchmarks on
> these little things. Maybe you can start benchmarking ublas on a tiny
> machine like that and/or an Android device and see how well gcc can
> generate auto-vectorized code for it. Check the assembly
> code to see if Neon instructions have been correctly generated.
>
> Best,
> David
>
>
>
> On Wed, May 29, 2013 at 8:50 AM, Aditya Avinash
> <adityaavinash143_at_[hidden]> wrote:
> > Thank you!
> > Can i develop a new kernel for uBLAS using NEON?
> >
> > [snip quoted text and list footer]
>
>
> ------------------------------
>
> Message: 15
> Date: Wed, 29 May 2013 05:27:02 -0400
> From: Rob Stewart <robertstewart_at_[hidden]>
> To: "boost_at_[hidden]" <boost_at_[hidden]>
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID: <8ED4CF8D-139A-4232-9C18-968AD38161E7_at_[hidden]>
> Content-Type: text/plain; charset=us-ascii
>
> On May 29, 2013, at 2:35 AM, Aditya Avinash <adityaavinash143_at_[hidden]> wrote:
>
> > Am sorry. My bad. It's boost.simd. Why isn't it included in boost? I have heard about it recently. Is there a chance that it is added to boost in the near future?
> >
> > On Wed, May 29, 2013 at 11:57 AM, Karsten Ahnert <
> > karsten.ahnert_at_[hidden]> wrote:
> >
> >> On 05/29/2013 07:33 AM, Aditya Avinash wrote:
>
> [snip lots of quoted text]
>
> >>> On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou_at_[hidden]>
> >>> wrote:
> >>>
> >>> On 29/05/2013 06:45, Gaetano Mendola wrote:
> >>>>
> >>>> On 29/05/2013 06.13, Aditya Avinash wrote:
>
> [snip even more quoted text]
>
> >>>>> Regards
> >>>>> Gaetano Mendola
> >>>>>
> >>>>>
> >>>>> See our work on Boost.SIMD ...
>
> [snip multiple sigs and ML footers]
>
>
> Please read http://www.boost.org/community/policy.html#quoting before posting.
>
> ___
> Rob
>
> (Sent from my portable computation engine)
>
> ------------------------------
>
> Message: 16
> Date: Wed, 29 May 2013 11:34:14 +0200
> From: Mathias Gaunard <mathias.gaunard_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID: <51A5CB96.3040404_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 29/05/13 06:13, Aditya Avinash wrote:
> > Hi, i have developed vector addition algorithm which exploits the hardware
> > parallelism (SSE implementation).
>
> That's something trivial to do, and unfortunately even that trivial code
> is broken (it's written for a generic T but clearly does not work for
> any T besides float).
> It still has nothing to do with uBLAS.
>
> Bringing SIMD to uBLAS could be fairly difficult. Is this part of the
> GSoC projects? Who's in charge of this?
> I'd like to know what the plan is: optimize very specific operations
> with SIMD or try to provide a framework to use SIMD in expression templates?
>
> The former is better addressed by simply binding BLAS; the latter is
> certainly not as easy as it sounds.
>
>
> ------------------------------
>
> Message: 17
> Date: Wed, 29 May 2013 15:04:51 +0530
> From: Aditya Avinash <adityaavinash143_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID:
> <CABocMVr1k13N6iBK7VOR4-G8ZPavf6eZL5qd=8cEFLGxDJ_waw_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> I'm sorry, my bad: it's Boost.SIMD. Why isn't it included in Boost? I have
> only heard about it recently. Is there a chance that it will be added to
> Boost in the near future?
>
>
> ------------------------------
>
> Message: 18
> Date: Wed, 29 May 2013 11:38:45 +0200
> From: Mathias Gaunard <mathias.gaunard_at_[hidden]>
> To: boost_at_[hidden]
> Subject: Re: [boost] Request to contribute boost::FFT
> Message-ID: <51A5CCA5.80605_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 29/05/13 00:12, Nathan Bliss wrote:
> > Dear Boost Community Members,
> > I am writing to ask if I could contribute a C++ FFT implementation to the Boost library. I noticed that there is a boost::CRC, so I thought boost::FFT would be a good addition as well. MIT has an existing open-source FFT library, FFTW, but it is GPL-licensed, which is much more restrictive than the Boost license, and I should imagine a commercial FFTW license would be very expensive.
> > I have working FFT code which I have tested with GNU gcc on Ubuntu Linux and with Visual Studio Express 2012 (MS VC++) on Windows. When run as a 2048-point FFT, my code agrees with the MIT FFTW one to a maximum error margin of 6 parts per million.
> > It is implemented using templates in a .hpp file and is very easy to use:
> > ------------------------------------------------------------ [fft.hpp]
> > template<class FLOAT_TYPE, int FFT_SIZE, int NUM_THREADS> class FFT;
> > ------------------------------------------------ [Invocation example]
> > typedef double My_type;
> > FFT<My_type, 2048, 4> user_fft;
> > user_fft.initialise_FFT();
> > get_input_samples(*user_fft.input_value_array, user_fft.m_fft_size);
> > user_fft.execute_FFT();
> > print_output_csv(user_fft.output_value_array, 2048);
> > ------------------------------------------------------------------------
> > Its uniqueness compared to other FFT implementations is that it fully utilises the boost::thread library: the user can give the number of threads they want to use as a parameter to the class template (the above case uses 4 parallel threads).
> > It is structured and organised so that users could customise/optimise it to specific processor architectures.
> > I've also tried to develop it in the spirit of Boost in that all class members which users should not access are private, only making public what is necessary to the API. My code is a decimation-in-time radix-2 FFT, and users could in theory use the existing API as a basis to extend it to more complex implementations such as the Reverse/Inverse FFT, bit-reversal of inputs/outputs and multi-dimensional FFTs.
> > I look forward to your reply.
> > Kind regards,Nathan Bliss
>
> You may want to take a look at the FFT functions bundled with NT2,
> courtesy of Domagoj Saric.
>
> They also generate an FFT for a given compile-time size and use
> Boost.SIMD for vectorization (but no threads). Unfortunately the code is
> not generic enough to work with arbitrary vector sizes, so it
> limits portability somewhat.
>
> <https://github.com/MetaScale/nt2/blob/master/modules/core/signal/include/nt2/signal/static_fft.hpp>
>
> <https://github.com/MetaScale/nt2/blob/master/modules/core/signal/include/nt2/signal/static_fft.hpp>
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>
> ------------------------------
>
> End of Boost Digest, Vol 4012, Issue 1
> **************************************
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk