From: Alexander Terekhov (TEREKHOV_at_[hidden])
Date: 2002-08-16 10:55:45
Peter Dimov wrote:
> > > > Even if some throw() operation would suddenly throw?
> > >
> > > What language are you talking about here?
> > Here, I'm talking about boostified(*) ``C++ language'' and boostified
> > libraries full of "operation() // throw()", "operation() // never
> > throws",
> These operations cannot "suddenly throw". If you assume that they can
> "suddenly throw", you must assume that the rest of the documentation is
> wrong, too. In other words, you must assume that the functions don't
> work.
Yep, I DO assume that sometimes "functions don't work": due to bugs,
corrupted program state, or whatever, they end up throwing somewhat funny
exceptions along the lines of std::logic_error, std::invalid_argument,
etc. as a product of some unexpected internal failures. Yeah, I know,
folks who publish code like this
    class scoped_lock
    {
        pthread_mutex_t & m_;
        scoped_lock(scoped_lock const &);
        scoped_lock & operator=(scoped_lock const &);
    public:
        scoped_lock(lightweight_mutex & m): m_(m.m_) { pthread_mutex_lock(&m_); }
        ~scoped_lock() { pthread_mutex_unlock(&m_); }
    };
[NOT coding any "old fashioned" checks for errors], will probably
disagree and simply say to me: THAT'S UNDEFINED BEHAVIOR, STUPID.
That's OK. That's totally OK.
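[For illustration only, and NOT from the original post: here is a minimal
sketch of what such "old fashioned" checks might look like. The class name,
the use of a plain pthread_mutex_t, and the choice of exception type are all
my assumptions; the point is merely that the pthread return codes (EDEADLK,
EINVAL, etc.) get reported instead of being silently dropped.]

```cpp
#include <pthread.h>
#include <stdexcept>

// Hypothetical "checked" variant of the scoped lock above: the pthread
// return codes are inspected and surfaced as exceptions instead of being
// ignored (which, on misuse, is undefined behavior).
class checked_scoped_lock
{
    pthread_mutex_t & m_;
    checked_scoped_lock(checked_scoped_lock const &);
    checked_scoped_lock & operator=(checked_scoped_lock const &);
public:
    explicit checked_scoped_lock(pthread_mutex_t & m): m_(m)
    {
        int rc = pthread_mutex_lock(&m_); // may fail with EDEADLK, EINVAL, ...
        if (rc != 0)
            throw std::invalid_argument("pthread_mutex_lock failed");
    }
    ~checked_scoped_lock()
    {
        pthread_mutex_unlock(&m_); // destructor must not throw; error dropped
    }
};
```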
> > and etc. "throw()" functions WITHOUT proper exception specifications.
> Define "proper" exception specifications. I've never seen one.
Uhmm. Not sure what you mean. Well, to me, "proper" exception
specifications would work like what we have currently, but would
PROHIBIT unwinding on violations [i.e., get rid of the silly catch(...)-
in-function-try-block-"handler" semantics], with unexpected()
invoked at the throw point. That would require TWO PHASE processing
["modern" stuff] and would add to the overhead associated with
archaic setjmp/longjmp implementations, but who cares? ;-)
-- "However... I still think it's a mistake to continue "business as usual" in the process after any thread dies with an unhandled exception. Something is seriously wrong, and nobody knows what it is. (Or it'd have been handled.) This is not a recipe for reliable operation... or even for useful/safe cleanup. Kill the process with a core file and sort it out later. If an application really can't afford a shutdown, then the whole thing should be running in a captive subprocess in the first place. The parent, safely isolated from the suspect address space, (and other resources), can fork/exec a new copy when one child dies unexpectedly. Sure, that's "inconvenient"; but reliability (like performance) often is. Pretending to handle cleanup in an unknown environment isn't reliable. You've just painted over the cracks so you can stand back and sigh with satisfaction over a job well done... until you actually try to walk into the room, and fall through." < From: David Butenhof <David.Butenhof_at_[hidden]> Subject: Re: High level thread design question Newsgroups: comp.programming.threads >
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk