Subject: Re: [boost] [atomic] comments
From: Helge Bahmann (hcb_at_[hidden])
Date: 2011-11-01 08:37:54
On Tuesday 01 November 2011 12:59:09 Tim Blechmann wrote:
> > > > > Lots of reasons. I may not have access to all platforms, to
> > > > > begin with. I
> > > > > may not have enough knowledge about hardware capabilities of all
> > > > > of the platforms. Manual porting to multitude platforms may be
> > > > > expensive.
> > > >
> > > > This is ridiculous. May I invite you to have a look at socket
> > > > communication via boost.asio?
> > >
> > > socket communication and shared memory have quite different performance
> > > characteristics.
> >
> > is there some semantic ambiguity to the word "fallback" that escapes me?
> > or do you expect the "fallback" to have the same performance
> > characteristic as the "optimized" implementation, always? then please
> > explain to me how a fallback for atomic variables using locks is going to
> > preserve your expected performance characteristics
>
> imo, `fallback' would mean that i can still compile the program, without
> the need to provide a different implementation for the case that atomics
> are not lockfree/interprocess safe.
implementing the fallback for something as simple as a "queue" via some
socket-based IPC is an entry-level question in a programmer job interview, so
the effort required is rather trivial
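to make the point concrete, here is a minimal sketch of such a fallback, assuming a POSIX system (socketpair, fork); the names and the fixed-size item format are purely illustrative, not a proposed API:

    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>

    // a "queue" of fixed-size items between two processes, backed by a
    // datagram socket pair: push == send(), pop == recv()
    int main() {
        int fd[2];
        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fd) != 0) return 1;

        if (fork() == 0) {                        // consumer process: "pop"
            std::uint64_t item;
            for (int i = 0; i < 3; ++i) {
                recv(fd[1], &item, sizeof item, 0);
                std::printf("popped %llu\n", (unsigned long long)item);
            }
            _exit(0);
        }

        for (std::uint64_t i = 1; i <= 3; ++i)    // producer process: "push"
            send(fd[0], &i, sizeof i, 0);

        wait(0);                                  // reap the consumer
        return 0;
    }

a real fallback would of course wrap this behind the same queue interface as the lock-free version, but the transport itself is no more than this.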
> > > e.g. i would not trust on accessing sockets from a
> > > real-time thread.
> >
> > what makes you believe that message channels in real-time systems were
> > designed so dumb as to make them unusable for real-time purposes?
>
> life would be so much easier for me, if the users of my software would not
> run on off-the-shelf operating systems ;)
and what makes you believe that the performance characteristics of sockets in
off-the-shelf operating systems are unsuitable for real-time, while the
process scheduling characteristics of those same off-the-shelf operating
systems are suitable for real-time?
(yes I understand you want to maximise quality of service while accepting that
it is only probabilistically real-time; I am just pointing out that a) you
should back up such statements with measurements and b) you should forget any
hope that your system is going to meet your requirements on a random platform
you do not have available and have not tested on).
> > right, but the standard implementation for gcc does not use a spinlock
> > per object (see __atomic_flag_for_address) which turns all of this moot -
> > there is NO guarantee for std::atomic to be safe interprocess, period
>
> well, i'd say this is a problem of gcc's implementation of std::atomic.
> this doesn't justify that boost.atomic does not follow the suggestion of
> the standard.
the standard says "should" not "must" -- the gcc guys have not made this
decision without good reasons, and I agree with these reasons
note that there is also trouble lurking with run-time selection of whether
something like atomic<uint64_t> is atomic via cmpxchg8b: do you really want
to make it at minimum 12 bytes in size (effectively occupying 16 due to
alignment) just to reserve room for the rarely-if-ever used per-object
spinlock? The same goes for 128-bit atomics and cmpxchg16b.
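to put numbers on it, a hypothetical layout (not the actual boost::atomic internals) with a per-object lock word next to the value looks like this on a typical platform with 8-byte alignment:

    #include <cstdint>
    #include <iostream>

    struct atomic64_with_per_object_lock {
        std::uint64_t value;      // 8 bytes of payload (the cmpxchg8b target)
        std::uint32_t spinlock;   // 4-byte lock word, rarely if ever used
        // 4 bytes of padding keep the struct 8-byte aligned -> 16 bytes total
    };

    int main() {
        std::cout << sizeof(std::uint64_t) << '\n';                  // prints 8
        std::cout << sizeof(atomic64_with_per_object_lock) << '\n';  // typically 16
    }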
There is a similar problem on sparc -- sparcv8 must emulate everything via a
spinlock, while on sparcv9 everything is lock-free, so ideally this should be
decided at run time.
And BTW who says that a spinlock is "good enough"? Are you certain that a
priority-inheriting mutex would not be required in some use cases?
I would like to repeat the questions that nobody has answered so far:
1. What ratio of inter-thread/inter-process use of atomic variables do you
expect?
2. What is wrong with implementing boost::interprocess::atomic<T> that
specializes to boost::atomic<T> when possible and uses an interprocess lock
otherwise? (see the sketch after question 3)
3. What is wrong with implementing "my_maybe_lockfree_queue" that specializes
to an implementation using boost::atomic<T> when possible and uses classical
IPC otherwise? Since the mechanisms required to establish shared memory
segments between processes are already platform-specific, why can you not make
the distinction between different IPC mechanisms there?
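to make question 2 concrete, here is a minimal sketch of the kind of compile-time dispatch I mean, using std:: names and the C++17 is_always_lock_free trait as stand-ins for whatever Boost would actually provide; a real implementation would forward to boost::atomic<T> on the fast path and use a process-shared (possibly priority-inheriting) mutex on the slow path:

    #include <atomic>
    #include <mutex>

    template <class T, bool LockFree = std::atomic<T>::is_always_lock_free>
    class interprocess_atomic;

    // lock-free case: forward to the plain atomic, which on typical hardware
    // is address-free and therefore usable in shared memory
    template <class T>
    class interprocess_atomic<T, true> {
        std::atomic<T> value_;
    public:
        T    load() const { return value_.load(); }
        void store(T v)   { value_.store(v); }
    };

    // fallback case: guard a plain T with a lock; a real implementation would
    // use e.g. boost::interprocess::interprocess_mutex, not std::mutex
    template <class T>
    class interprocess_atomic<T, false> {
        mutable std::mutex lock_;
        T value_;
    public:
        interprocess_atomic() : value_() {}
        T    load() const { std::lock_guard<std::mutex> g(lock_); return value_; }
        void store(T v)   { std::lock_guard<std::mutex> g(lock_); value_ = v; }
    };

question 3 is the same pattern one level up: specialize the queue on whether its element type ends up on the lock-free or on the locking path.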
I am very much in favor of covering as many use cases as possible, and if the
extension comes at no cost I will implement it right away -- but I do not see
any good reason to penalize "well-designed" software with additional overhead
to cater for the requirements of what I regard as a high-level design
problem.
Best regards
Helge