
From: Anup Mandvariya (anupmandvariya_at_[hidden])
Date: 2008-04-14 05:03:56

On 4/14/08, John Maddock <john_at_[hidden]> wrote:
> Anup Mandvariya wrote:
> > Hi All,
> > I have some queries related to the boost.math library, as stated below:
> >
> > a) Is it possible to write many of the functions in the boost.math
> > library (like beta and gamma functions) so that they can be
> > parallelized
> > irrespective of the "way of parallelization"(such as using OpenMP,
> > MPI, etc...) and the "environment" (like multicore or a cluster)?
> I don't know, you would need to devise a parallelisation API that's
> independent of the underlying mechanism; given that OpenMP uses #pragmas
> and MPI uses explicit message-passing calls, I'm not sure that's really
> feasible. The alternative would be lots of #if..#else logic I guess :-(
> > b) What is the possibility of extending boost.math libraries
> > (particularly beta and gamma functions) to generic libraries using
> > generic programming process?
> I'm not sure I understand what you mean - they are intended to be generic
> already - and work with any type that satisfies the requirements here
> So for example I already use them with NTL::RR and also have experimental
> versions which work with Boost.Interval and/or mpfr_class.
> Is this what you meant?
> HTH, John.

Thanks John,
My second query was:
is it possible to generalize these libraries so that they can be
parallelized for both shared-memory and distributed-memory environments?

Anup Mandvariya
"Truth Must Have No Compromise"

Boost list run by bdawes at, gregod at, cpdaniel at, john at