From: Andreas Huber (ahd6974-spamgroupstrap_at_[hidden])
Date: 2005-03-21 19:15:57
Jonathan Turkanis wrote:
>>> It's very tempting to try to design an FSM library with better
>>> performance characteristics, but I don't have anywhere near enough
>>> time, esp. because I'd have to start out by becoming an FSM expert.
>>> I don't think I should have to do this before writing a review.
>> Certainly not. However, I do expect that people questioning the
>> performance (and along with it the implementation) are able to ask
>> concrete questions why some things are the way they are. You
>> partially did that in your previous posts but only partially.
>> I think it is
>> difficult to respond to statements like:
>> Possibly the joints between parts of machines defined in separate TUs
>> must be weaker (i.e., less known at compile time) than the
>> connections between collections of states implemented in a single TU.
> Why should it be easy to respond?
I'm not saying responding should be easy. My point was that such
statements are not likely to elicit useful answers.
> What I found shocking in your documentation was the statement that
> hand-written machines can be much faster than framework-generated ones.
One could probably produce an equally shocking case for just about any
non-trivial library out there. My "mistake" was to measure it and
publish the numbers.
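For concreteness, the kind of hand-written machine such comparisons usually refer to can be as small as a switch over an enum. A minimal illustrative sketch (the state and event names here are invented for illustration, not taken from the library under review):

```cpp
#include <cassert>

// Minimal hand-written two-state machine (illustrative names only).
// Because the whole transition function is visible to the compiler,
// it can become a jump table or be inlined outright -- overhead that
// a general-purpose framework typically cannot avoid once it pays
// for virtual dispatch, allocation, or run-time transition lookup.
enum class State { Off, On };
enum class Event { Toggle };

State transition(State s, Event e) {
    switch (s) {
        case State::Off: return e == Event::Toggle ? State::On : s;
        case State::On:  return e == Event::Toggle ? State::Off : s;
    }
    return s; // unreachable; silences compiler warnings
}
```

A framework that supports hierarchical states, entry/exit actions, and machines spread across translation units cannot, in general, collapse dispatch down to a single switch like this, which is the gap the published numbers reflect.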
> To accept a library with such poor performance (even if it
> is adequate in many cases) requires a very convincing justification,
> which I believe is obviously absent from the documentation.
That became sufficiently apparent during the review. My hope was that
people would understand that the provided functionality must have a
significant overhead. I was obviously wrong; the documentation will be
improved in this regard.
> As a
> reviewer, I could have done my job simply by pointing to the absence
> of justification, and left it at that.
Fortunately, there are many other things that matter in a review.
> (This is similar to a motion
> for summary judgment by the defense in an American trial: if the
> plaintiff hasn't presented evidence, you just point out that fact and
> you're done.)
This seems to imply that in a review the library author is the plaintiff
and the reviewers are the defendants. I don't think the roles in a
review regarding evidence are usually as clear-cut as they are in a
trial.
> All my other vague suggestions were attempts to suggest either (i)
> how the library could have been redesigned to yield better
> performance or (ii) how you might have documented the impossibility
> of achieving better performance given your requirements.
This is exactly something that I don't think I will ever be able to
prove. I can only provide evidence that hints in this direction. New C++
techniques and idioms are discovered almost on a monthly basis and
there's always the possibility that such a technique enables a much
better implementation (e.g. I wouldn't have imagined that it is possible
to enumerate overloads like Alexander does, nor could I have
conceived the technique that makes the FOREACH macro possible). I think
it is close to impossible for an individual to be proficient in all the
currently known techniques, not to mention the ones that have yet to be
discovered.
> I'm aware
> they were not fully fleshed-out proposals. I was hoping you would try
> to meet me half-way, by trying to figure out what I was getting at
> and explaining why it wouldn't work.
... which I did in the cases where I could imagine candidate solutions.
In the cases where I couldn't see one, I said so. Since the latter usually
didn't lead to a follow-up on your part, I assumed that you didn't have a
solution either. There's no meeting half-way in such a situation.
> At the very least, you could have
> asked me what I meant.
AFAICT, that was usually not the problem. I *think* I mostly understood
what you meant, see above.
>> As I said before, I think it is close to impossible to prove that a
>> certain design is the best performance-wise. Also, I don't claim that
>> there isn't any room in the current implementation for improvements,
>> rather I think it has reached a stage where substantial (>= 2x
>> faster) optimizations are non-trivial.
>> I can only describe why common optimizations don't work or why they
>> would violate one or more of the requirements.
> All I wanted was a succinct description of the various known approaches
> to implementing FSMs -- including existing frameworks and stuff you
> tried in the early stages of developing your framework -- together
> with an explanation of the performance tradeoffs.
I'll add a few links with a discussion of the performance tradeoffs.
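One classic alternative that such a comparison would likely cover is the table-driven approach, where transitions are data rather than code. Again a hypothetical sketch, not the library's actual implementation:

```cpp
#include <cassert>

// Table-driven FSM (illustrative): each cell holds a function pointer.
// Dispatch is an array index plus an indirect call -- flexible, and the
// table can even be rebuilt at run time, but the indirection blocks
// inlining, which is one source of the overhead discussed above.
enum { Off, On, NumStates };
enum { Toggle, NumEvents };

typedef int (*Action)(int currentState);

int toOn(int)  { return On;  }
int toOff(int) { return Off; }

Action transitionTable[NumStates][NumEvents] = {
    /* Off */ { toOn  },
    /* On  */ { toOff },
};

int dispatch(int state, int event) {
    return transitionTable[state][event](state);
}
```

The tradeoff runs in both directions: the switch-based machine is fastest but monolithic, while table- or object-based dispatch is what makes features like run-time composition and separate compilation possible in the first place.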
>> Things that I
>> *consciously* avoided or unsuccessfully tried are described in the
> If all the things you unsuccessfully tried are described in the
> rationale, then you didn't try very much. I was giving you the
> benefit of the doubt by assuming you had omitted describing your
> alternate designs.
What else should I have tried?
>> Moreover, during any programmer career a bag of knowledge
>> is acquired that is unconsciously applied to every programming job.
>> Naturally, this bag of knowledge is not the same for everyone
> This is obviously true.
>> which is
>> probably the main reason why the rationale seems to be sufficient for
>> some people while others find it unsatisfying.
> This is just funny. People who find your rationale satisfying could
> be people who don't care about performance as long as it's good
> enough for their application or people who failed to consider the
> same design alternatives you failed to consider.
Which alternatives did I fail to consider?
>> Right. Here's a description of the relevant parts of the
> <snip detailed description>
> This sort of stuff should be in the docs.
It will be.
-- Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk