
Boost Users:

From: Roland Richter (roland.richter_at_[hidden])
Date: 2021-01-18 17:13:39


Hi,

I am currently using the odeint solver to integrate a non-linear equation. To speed up the calculation of the right-hand side, I decided to test an MPI parallelization. Since I would prefer to keep the code backwards compatible, I did not want to replace state_type with mpi_state; instead, I split up the initial vector by hand before calling odeint, without telling odeint that I am operating within an MPI context. I take care of all communication involving the vectors myself, and therefore assumed that this approach should work (especially since it worked correctly in small test programs).
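
To make the setup concrete, here is a stripped-down sketch of the structure I mean; the names and the toy right-hand side are invented for illustration, my real code differs:

#include <boost/numeric/odeint.hpp>
#include <mpi.h>
#include <vector>

using state_type = std::vector<double>;
namespace ode = boost::numeric::odeint;

// Each rank evaluates only its chunk of the right-hand side and
// fetches the boundary values from its neighbours by hand.
struct local_rhs {
    void operator()(const state_type &x, state_type &dxdt, double) const {
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        const int lnb = (rank > 0) ? rank - 1 : MPI_PROC_NULL;
        const int rnb = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
        double left = 0.0, right = 0.0;
        // "I take care of all communications": manual halo exchange
        MPI_Sendrecv(&x.front(), 1, MPI_DOUBLE, lnb, 0,
                     &right, 1, MPI_DOUBLE, rnb, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&x.back(), 1, MPI_DOUBLE, rnb, 1,
                     &left, 1, MPI_DOUBLE, lnb, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        // toy nearest-neighbour coupling, only to keep the sketch complete
        const std::size_t n = x.size();
        for (std::size_t i = 0; i < n; ++i) {
            const double xm = (i == 0) ? left : x[i - 1];
            const double xp = (i == n - 1) ? right : x[i + 1];
            dxdt[i] = xm - 2.0 * x[i] + xp;
        }
    }
};

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    state_type x_local(1024, 1.0); // this rank's share of the global vector

    // every rank runs its own, completely independent controlled stepper
    auto stepper = ode::make_controlled(1e-8, 1e-8,
        ode::runge_kutta_cash_karp54<state_type>());
    ode::integrate_adaptive(stepper, local_rhs(), x_local, 0.0, 1.0, 0.00025);

    MPI_Finalize();
}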

Nevertheless, I encountered the issue that after a certain number of steps, the step size on the different ranks starts to differ by a small amount, even though they are all in the same iteration. For example:

step number   rank   step size
n             1      0.00025
n             0      0.00025       // ok
n + 1         1      0.00051152
n + 1         0      0.000511523   // not ok

I checked my program with valgrind for possible memory leaks or corruption that could overwrite something, but nothing relevant came up. The problem is reproducible.

Therefore: is the general idea correct at all, or am I doing something wrong here by neglecting some communication between the different ranks that odeint itself would need?
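
If odeint does need such an agreement between the ranks, my best guess is that I would have to drive the controlled stepper by hand and force the ranks into lockstep, roughly like this (purely hypothetical, I have not tried it; integrate_lockstep is a made-up name):

#include <boost/numeric/odeint.hpp>
#include <mpi.h>
#include <algorithm>
#include <vector>

using state_type = std::vector<double>;
namespace ode = boost::numeric::odeint;

// Drive the controlled stepper manually: every rank attempts the same
// dt, and a step only counts if it was accepted on every rank, since
// each rank's error norm is computed from its local chunk only.
template <class Stepper, class System>
void integrate_lockstep(Stepper stepper, System sys, state_type &x,
                        double t, double t_end, double dt) {
    while (t < t_end) {
        dt = std::min(dt, t_end - t);
        // agree on the step size to attempt
        MPI_Allreduce(MPI_IN_PLACE, &dt, 1, MPI_DOUBLE, MPI_MIN,
                      MPI_COMM_WORLD);
        const state_type x_save = x; // keep copies for a possible rollback
        const double t_save = t;

        int ok = (stepper.try_step(sys, x, t, dt) == ode::success);

        // the step is only accepted if it succeeded everywhere
        MPI_Allreduce(MPI_IN_PLACE, &ok, 1, MPI_INT, MPI_MIN,
                      MPI_COMM_WORLD);
        // agree on the next proposed step size as well
        MPI_Allreduce(MPI_IN_PLACE, &dt, 1, MPI_DOUBLE, MPI_MIN,
                      MPI_COMM_WORLD);
        if (!ok) { x = x_save; t = t_save; } // roll back and retry
    }
}

Or would the cleaner solution be to let odeint compute the error norm globally across the ranks, as the mpi_state machinery presumably does?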

Thanks!

Regards,

Roland

