From: Anup Mandvariya (anupmandvariya_at_[hidden])
Date: 2008-04-14 05:03:56
On 4/14/08, John Maddock <john_at_[hidden]> wrote:
> Anup Mandvariya wrote:
> > Hi All,
> > I have some queries related to the Boost.Math library, as stated below:
> > a) Is it possible to write many of the functions in the boost.math
> > library (like beta and gamma functions) so that they can be
> > parallelized
> > irrespective of the "way of parallelization"(such as using OpenMP,
> > MPI, etc...) and the "environment" (like multicore or a cluster)?
> I don't know; you would need to devise a parallelisation API that's
> independent of the underlying mechanism. Given that OpenMP uses #pragmas
> while MPI uses explicit library calls, I'm not sure that's really feasible.
> The alternative would be lots of #if..#else logic, I guess :-(
> > b) What is the possibility of extending the Boost.Math libraries
> > (particularly the beta and gamma functions) into generic libraries
> > using generic programming?
> I'm not sure I understand what you mean - they are intended to be generic
> already, and work with any type that satisfies the requirements here.
> So for example I already use them with NTL::RR and also have experimental
> versions which work with Boost.Interval and/or mpfr_class.
> Is this what you meant?
> HTH, John.
My second query was: is it possible to generalize these libraries so that they
can be parallelized both for shared-memory as well as distributed-memory
environments?
-- Regards, Anup Mandvariya +919985330660 "Truth Must Have No Compromise"
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk