

From: Peter Dimov (pdimov_at_[hidden])
Date: 2006-08-27 10:27:20


Paul Davis wrote:
> Howdy,
>
> I've come across an odd segfault that originates in some of the boost
> code. (By "originate" I mean that's where the stack trace points; I'm
> not sure if it's me or boost that's wrong.)
>
> Anyway, the weird part is that it's only on my 64-bit machine.
>
> I took a look at where it's segfaulting in
> boost::detail::atomic_exchange_and_add(). It's scary inline assembly
> stuff. Well, mostly I just don't know assembly, so I haven't the
> slightest idea whether it's right or wrong. And obviously, it's a
> platform-specific header and whatnot, so I imagine the problem is
> limited to this area. I have had other weird segfaults that come and
> go from the atomic_* set of methods. I can't pin down exactly what's
> causing them; they mostly seem to come from storing shared_ptr's in
> STL containers. I've never had any problems with this before, so I'm
> assuming it's just a relatively untested section of code.

It's being tested quite extensively, but some problems are triggered only in
very rare circumstances depending on the optimization level and the specific
compiler backend. This will be pretty hard to pin down.

We can start by sanity-checking whether int is 32 bits or 64 bits on this
platform. You should also try different optimization levels and see whether
that makes a difference. It would help a lot if you could trim the failing
example down to a small snippet like the one below; we can then examine the
generated assembly (g++ -S) and see the atomic_* portions in context (they
are usually marked with #APP in the .s file).
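Something along these lines should do as a starting point (the file name and
the exact container usage are only illustrative, adapt them to your failing
case); build it with "g++ -O2 -S size_check.cpp" and look at size_check.s:

    // size_check.cpp - minimal sanity check; names are illustrative
    #include <cstdio>
    #include <vector>
    #include <boost/shared_ptr.hpp>

    int main()
    {
        // confirm the basic type sizes on this box
        std::printf( "sizeof(int)=%u sizeof(long)=%u sizeof(void*)=%u\n",
            (unsigned) sizeof( int ), (unsigned) sizeof( long ),
            (unsigned) sizeof( void* ) );

        // exercise the atomic_* reference count path the way the failing
        // code does: shared_ptr copies inside an STL container
        std::vector< boost::shared_ptr<int> > v;
        v.push_back( boost::shared_ptr<int>( new int( 42 ) ) );
        v.push_back( v.front() );

        return *v.back() == 42 ? 0 : 1;
    }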

One shot in the dark could be to change

    __asm__ __volatile__
    (
        "lock\n\t"
        "xadd %1, %0":
        "=m"( *pw ), "=r"( r ): // outputs (%0, %1)
        "m"( *pw ), "1"( dv ): // inputs (%2, %3 == %1)
        "memory", "cc" // clobbers
    );

to

    __asm__ __volatile__
    (
        "lock\n\t"
        "xadd %1, %0":
        "+m"( *pw ), "=r"( r ): // outputs (%0, %1)
        "1"( dv ): // inputs (%2 == %1)
        "memory", "cc" // clobbers
    );

and similarly

    __asm__
    (
        "lock\n\t"
        "incl %0":
        "=m"( *pw ): // output (%0)
        "m"( *pw ): // input (%1)
        "cc" // clobbers
    );

to

    __asm__
    (
        "lock\n\t"
        "incl %0":
        "+m"( *pw ): // output (%0)
        : // inputs
        "cc" // clobbers
    );

(maybe starting from the latter.)
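If you'd like to check the changed constraints in isolation before patching
the header (most likely boost/detail/sp_counted_base_gcc_x86.hpp, judging by
your stack trace), a small standalone test along these lines ought to still
return the old value and leave the target incremented; the function names
here are made up for the test and are not the Boost ones:

    // xadd_test.cpp - hypothetical standalone check of the modified asm,
    // not part of Boost; compile with g++ on x86 / x86-64 and run it
    #include <cassert>
    #include <cstdio>

    inline int exchange_and_add( int * pw, int dv ) // returns old *pw
    {
        int r;

        __asm__ __volatile__
        (
            "lock\n\t"
            "xadd %1, %0":
            "+m"( *pw ), "=r"( r ): // outputs (%0, %1)
            "1"( dv ): // inputs (%2 == %1)
            "memory", "cc" // clobbers
        );

        return r;
    }

    inline void increment( int * pw ) // ++*pw
    {
        __asm__
        (
            "lock\n\t"
            "incl %0":
            "+m"( *pw ): // output (%0)
            : // inputs
            "cc" // clobbers
        );
    }

    int main()
    {
        int n = 5;

        int old = exchange_and_add( &n, 3 );
        assert( old == 5 && n == 8 );

        increment( &n );
        assert( n == 9 );

        std::printf( "ok: n=%d\n", n );
        return 0;
    }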

