
From: Dave Gomboc (dave_at_[hidden])
Date: 2005-03-10 11:41:58

> > Let us consider that we have multiple fsms and we would like to
> > apply operations to these fsms, e.g. direct concatenation (the
> > terminal state of one automaton becomes the start state of another)
> > or choice (depending on an action of, or an accept/reject by, fsm a,
> > proceed to execute fsm b or fsm c). This is apparently
> > straightforward when working at the level of states; however, the
> > library draws a distinction between a state_machine and a state.
> > This acts as a barrier to the direct composition of state machines
> > that we would like to use as components within large state machines.
> A much better way to achieve that is with templated states,
> as you seem to
> have observed yourself.
> > The distinction made between state and state_machine, which I
> > consider to be unfortunate, was not justified in the rationale.
> I didn't justify it because it feels natural to me that a
> state machine has
> only one event queue, and one place where history is stored (both
> containers inside state_machine). Admittedly, because the
> outermost state
> knows at compile time that it is the outermost state, the
> state_machine class
> template could be "abstracted away". You'd then only have
> simple_state and
> state. However, this seems at best obscure to me.

class state_machine: public state, private state_control { ... };

is a possibility. However, leaving the control manager separate
from the state data (structure) might be better yet. Why does this
separation of concerns seem obscure? Perhaps the control manager
can iterate over the state data structure? operator++ advances to
the next state; operator() performs the reaction, etc. Okay, I've
entered pure brainstorming territory :-), but really, things might
be improved by not commingling data and control.
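To make the brainstorm concrete, here is a minimal sketch of keeping the state structure separate from the control manager that drives it. The names state_data and state_control are hypothetical, not part of the reviewed library:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical sketch only: the state *data* (structure) lives in plain
// value types, while the control manager owns the current-state cursor
// and performs reactions.
struct state_data {
    // event name -> name of the successor state
    std::map<std::string, std::string> transitions;
};

class state_control {
public:
    state_control(std::map<std::string, state_data> states, std::string start)
        : states_(std::move(states)), current_(std::move(start)) {}

    // operator() performs the reaction to one event; returns false if the
    // current state defines no reaction for it (the event is ignored).
    bool operator()(const std::string& event) {
        const state_data& s = states_.at(current_);
        auto it = s.transitions.find(event);
        if (it == s.transitions.end()) return false;
        current_ = it->second;  // execute the transition
        return true;
    }

    const std::string& current() const { return current_; }

private:
    std::map<std::string, state_data> states_;  // data: owned but not mixed in
    std::string current_;                       // control
};
```

Because the control manager only refers to the data structure, the same state_data network could in principle be shared by several independent controllers.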

> [snip]
> > There is a brief note in the tutorial re: submachines that fails to
> > do this topic justice. If the template approach recommended there
> > is to be the canonical form for a reusable component, that style
> > should be propagated through all but the simplest examples in the
> > tutorial. Each submachine should clearly be labelled as part of its
> > own .hpp file.
> While I might agree with your reasoning in theory, I don't
> think that it would
> work in practice. Even now I sometimes get complaints that
> the examples are
> too complex (too many templates) and changing all examples to
> work with
> templated states would make them even more so. Plus, often
> there is no need to
> use a state in more than one place, so why bother with
> templated states then?

I think problems with examples are alleviated partially when people
can take the source files comprising the example, compile them, and
see them run -- an immediate success. For instance,
Boost.Serialization does this with at least some of its examples
(demo_xml.hpp and demo_xml.cpp, if I recall correctly).

Is it not presumptuous for a coder to assume that their machine will
never be needed in another context? If templating improves
reusability considerably, while not costing much, then it should be
done. However, if there are other obstacles to component reuse, then
rather than saying it's not worth templating these states, say
instead that those other obstacles must be cleared.

Without this, one cannot build a library of fsm components that may
be connected together to form more complex fsms, and I continue to
think that this would be very useful.
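A sketch of the templated-state idea under discussion (hand-rolled, not the library's actual API): a component state parameterized on its host machine can be dropped into any machine that provides the expected interface. All names here are illustrative:

```cpp
#include <cassert>

// Illustrative only: a reusable component state templated on its host
// machine. Any Machine offering finish() can embed it, which is the
// kind of reuse the templated-state approach buys.
template <typename Machine>
struct CountingState {
    int count = 0;
    void react(Machine& m, char event) {
        if (event == '+') ++count;          // local reaction
        else if (event == 'q') m.finish();  // delegate to the host machine
    }
};

// Two unrelated host machines reusing the same component.
struct MachineA {
    CountingState<MachineA> state;
    bool done = false;
    void finish() { done = true; }
    void process(char e) { state.react(*this, e); }
};

struct MachineB {
    CountingState<MachineB> state;
    bool done = false;
    void finish() { done = true; }
    void process(char e) { state.react(*this, e); }
};
```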

> [snip]
> > In the speed vs. scalability trade-off section of the rationale
> > page, it is claimed that using an fsm for a parsing task is an
> > abuse
> In its generality this remark is of course false and I should
> have replaced it
> with better reasoning long ago. What I want to say here is
> that it is a bad
> idea to use this library for parsing, as it is so general
> that performance
> tradeoffs are inevitable. For parsing, one is much better off
> with a parser
> framework, which is optimized for the task. The IMO important
> observation is
> the following: Parsers like Spirit only need to return when
> they have finished
> processing all "events" (characters).
> An FSM however must
> return after
> processing each event. Parsers can exploit this fact and
> store state with
> recursive calls, which is of course much faster (some stuff
> can even be
> inlined) than any super-clever double-dispatch scheme.
> There is often a hard limit for stack space which - I think -
> is the reason
> why FSMs do make sense in parsers nevertheless. However, all
> the FSM-employing
> parsers I know are of the dynamic variant (e.g. regex) and
> these FSMs are not
> directly designed by humans but computed by an algorithm.

For reading an identifier character-by-character, sure, one would not
want to use an fsm to process each character individually. However,
one can design the fsm to read an entire identifier at once. In
situations where a parser might require extensive backtracking (and
Spirit-1 doesn't actually backtrack after accepted symbols, which is
something I could employ an fsm to work around!) the use of a
non-deterministic FA may be well-justified.

Also, if one is merely testing for the presence of a regular
expression match, I agree that just signalling once when the match
is complete is sufficient. However, often one wants the substrings
within a match that correspond to certain portions of it, in which
case actions at transition points may be a fine solution to the
problem, both reasonably efficient and easily understood by the
reader.

Furthermore, the possibilities for error comprehension and recovery
in a parsing process may be improved by the use of an fsm framework.
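As an illustration of actions at transition points capturing submatch substrings, here is a hand-written sketch (not the reviewed library) recognizing one-or-more digits followed by one-or-more letters, with the digit/letter boundary recorded as an action when the transition fires:

```cpp
#include <cassert>
#include <cctype>
#include <string>

struct Match {
    bool ok = false;
    std::string digits, letters;
};

// Hand-rolled fsm: states D0 (need first digit), D (in digits), L (in
// letters). The D->L transition carries the submatch-capturing action.
Match match_digits_then_letters(const std::string& in) {
    enum State { D0, D, L } s = D0;
    Match m;
    std::size_t boundary = 0;
    for (std::size_t i = 0; i < in.size(); ++i) {
        const char c = in[i];
        switch (s) {
        case D0:
            if (!std::isdigit(static_cast<unsigned char>(c))) return m;
            s = D;
            break;
        case D:
            if (std::isdigit(static_cast<unsigned char>(c))) break;
            if (std::isalpha(static_cast<unsigned char>(c))) {
                boundary = i;  // transition action: record the boundary
                s = L;
                break;
            }
            return m;
        case L:
            if (!std::isalpha(static_cast<unsigned char>(c))) return m;
            break;
        }
    }
    if (s != L) return m;  // must end in the letters state
    m.ok = true;
    m.digits = in.substr(0, boundary);
    m.letters = in.substr(boundary);
    return m;
}
```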

It is certainly true that I have uses for dynamically-arranged fsms
-- and not only for parsing. It was a disappointment to me that I
would not be able to do some tasks with this library.

> Last but not least I
> think it would be tedious for a human to implement a parser
> on the state-
> level, which is probably the reason why I dared to make the
> admittedly dumb
> remark in the first place.
> I think Darryl puts this rather nicely:
> <quote>
> Fast flat fsms to process simple events and having little
> additional context
> are best implemented using other methods. However such
> implementations are not
> difficult, and there are plenty of domain specific tools
> already (eg various
> language recognisers).
> </quote>

This comment does not address the myriad uses that I would have for
an fsm library that met my needs. Don't forget that with the
addition of storage, an "fsm" is actually fully Turing-complete.

> > However, finite-state automata are inextricably linked with the
> > acceptance of regular languages: see, for instance, section 12.2 of
> > Keller (again,
> >; it is not
> > much to ask that an fsm library be able to serve as the
> > underpinning of a reasonably efficient regex library.
> See above. A regex library needs a dynamic FSM.

Indeed, I had been hoping that the fsm library would accommodate
such needs. I do, however, understand that one can't implement
everything at once.

> [snip]
> > Other reviewers have also already commented upon the lack of a
> > table-lookup dispatch method. From the rationale:
> > "To warrant fast table lookup, states and events must be
> > represented with an integer. To keep the table as small as
> > possible, the numbering should be continuous, e.g. if there are 10
> > states, it's best to use the ids 0-9. To ensure continuity of ids,
> > all states are best defined in the same header file. The same
> > applies to events. Again, this does not scale."
> >
> > However, I think that for table lookup there is no requirement that
> > ids be consecutive -- a compile-time write-once read-many hash
> > table could do the trick.
> Unless you use a perfect hashing algorithm (which is
> infeasible because you
> need to know all the ids beforehand), this would essentially
> wreck the ability
> to use this library in real-time systems. Hashed containers
> are a bad idea for
> real-time as there is no upper limit for a search operation.

However, one would know the maximum dispatch time guarantee by the
end of the compilation process, and it likely would be less
time-consuming (where sufficient memory exists) than the linear
search currently employed.
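For contrast, here is what constant-time table dispatch looks like when state and event ids are small consecutive integers (the scheme the rationale describes): the worst-case dispatch cost is a single array access, so a real-time bound is trivial to state. The enum names are of course illustrative:

```cpp
#include <cassert>

enum State { Idle, Running, Stopped, NumStates };
enum Event { Start, Stop, Reset, NumEvents };

// Transition table indexed by (state, event); -1 means "no reaction".
constexpr int table[NumStates][NumEvents] = {
    /* Idle    */ { Running, -1,      -1   },
    /* Running */ { -1,      Stopped, -1   },
    /* Stopped */ { -1,      -1,      Idle },
};

// O(1) dispatch: one bounded array lookup, regardless of machine size.
State dispatch(State s, Event e) {
    const int next = table[s][e];
    return next < 0 ? s : static_cast<State>(next);  // ignore unmapped events
}
```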

> > While perhaps each state (compilation module) must know of its
> > successor states, no central table of "current state x action x
> > next state" encompassing the entire machine need be explicitly
> > represented.
> Which is exactly what this library is doing. There is a
> lookup "table" within
> each state and if that fails lookup considers the state's
> outer state. However,
> this inevitably means that you cannot have constant time
> lookup. The larger
> your FSM becomes the longer lookup takes. BTW, as I've
> written in another post
> I don't see why reaction search is so important for people,
> because in typical
> FSMs I'd expect transition execution to be much more of a
> performance hog.

Perhaps it is because people would like the freedom to employ fsms
in contexts that you did not target your solution for. As I'm sure
you know, an fsm can be a very handy thing!

> > Under the "Limitations" section, reference is made to "lost events":
> > "It is currently not possible to specially handle events that
> > have not triggered a reaction (and would thus be silently
> > discarded). However, a facility allowing this will probably be
> > added in the not-too-distant future."
> > If the event is irrelevant at that point in the state machine, it is
> > most properly ignored. If it is not irrelevant, there ought to be
> > a transition (perhaps a self-transition). Therefore, I don't
> > understand the comment.
> There are applications of state-machines where an ignored
> event constitutes an
> error. Since the library currently does not allow one to define
> transitions that
> are triggered by any event there is no easy way to handle
> such "ignored"
> events.

In that case, it seems quite reasonable to add one as you planned.

> > I have an uneasy feeling about how differently asynchronous and
> > synchronous machines are operated, but did not consider this
> > in detail.
> Why? They are rather different things aren't they?

No, not really. Certainly, I would expect to be able to plug
together asynchronous and synchronous submachines into a larger,
single fsm (which would need to be modelled asynchronously).

> > Allowing the tracking of history is intriguing; the prohibition
> > against deep history of states with orthogonal regions is
> > suspicious.
> Why?

Why not?

> > The explanation given is that it would be more
> > complicated (but still possible) to implement.
> It would be much more complicated to be precise.
> > In my experience,
> > that sort of reasoning just doesn't fly at Boost.
> Why? Must I really offer all the functionality in the first
> version of the
> library? Very few people use deep history and orthogonal states in
> conjunction. For those that do I describe a work-around. I
> have not received a
> single request to implement full deep history.

Well, you've received at least one, but your response was 'why?'.

> > The discussion of fork and join bars, combined with the event
> > dispatch to orthogonal regions discussion immediately below
> > suggests that the implementation is not truly designed to handle
> > concurrent states.
> Why?

Otherwise, these items would be addressed, that's why. But see

> > I think, however, if one is to claim that UML
> > statecharts are supported that the semantics of execution ought to
> > be the same! I can well imagine that this "would complicate the
> > implementation quite a bit", but Boost is also about interface, not
> > only implementation, and if the wrong interface is standardized
> > (whether de jure or de facto) it's worse for everybody in the long
> > run.
> The interface change for this is rather easy: Simply specify
> multiple rather
> than just one target state. I don't see the problem.
> BTW, this is connected to deep history not working for
> orthogonal sub-states.

I suspected so. I believe it was caused by starting with an fsm
design whose operations applied to one state at a time, then later
adding orthogonality support without a full conversion to
supporting state-supersets in all places in the code. However, this
is sheer speculation on my part and may well be wrong. :-)

> > There is no explicit discussion of non-deterministic fsms. Of
> > course, every such fsm can be re-expressed deterministically.
> I don't see why I should discuss that.

Because people may want the ability to use epsilon-transitions,
which NFAs support and which make finite-automata modelling
considerably simpler in many cases.
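A sketch of what epsilon-transition support involves: computing the epsilon-closure of a state set, the core extra step an NFA-capable framework would perform before ordinary dispatch. Types and names here are illustrative, not from any library:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <vector>

using StateId = int;
// Hypothetical representation holding only the epsilon edges of an NFA:
// source state -> states reachable without consuming an event.
using Nfa = std::map<StateId, std::vector<StateId>>;

// Returns every state reachable from `states` via epsilon edges alone,
// using a simple worklist traversal.
std::set<StateId> epsilon_closure(const Nfa& eps, std::set<StateId> states) {
    std::vector<StateId> work(states.begin(), states.end());
    while (!work.empty()) {
        const StateId s = work.back();
        work.pop_back();
        const auto it = eps.find(s);
        if (it == eps.end()) continue;
        for (StateId t : it->second)
            if (states.insert(t).second)  // newly discovered state
                work.push_back(t);
    }
    return states;
}
```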

> > Similarly, a given fsm may be non-minimal, but such minimization
> > procedures exist. Both of these procedures necessarily require a
> > global view of the fsm. As this is contrary to the scalability
> > requirement listed in the rationale, I conclude that neither are
> > present.
> This has never been a goal and I don't see why I should provide such
> functionality. Also I suspect that state-local storage could
> prevent such a
> minimization.
> > However, I would like to ask if (excepting possible
> > compiler limitations) there is anything about the library that
> > precludes extending it such that NDFAs could be specified in code
> > and converted into the minimized DFA during compilation?
> Yes, the scalability requirements. As I explain above (and
> you seem to have
> found that yourself) there is no way to have a global view of an fsm
> implemented with this library.

I am not suggesting that you provide this functionality yourself.
What I am asking is whether, if someone else needs this
functionality, they can add code to your library to do it without
too much trouble, or whether there are implementation considerations
that make it impossible. It should be clear that anyone intending to
perform NFA->DFA conversion or DFA minimization recognizes that they
are forfeiting scalability by requiring a whole-machine view.

> > We had tuple and mpl separately before fusion; it seems
> > for boost to have both compile- and run-time fsm libraries, if
> > ultimately they may be similarly fused.
> I don't think that is feasible unless you limit yourself to
> FSMs specified in
> single TUs so that a global view is possible.

Hmm. :-/

> > I like this library, but I'm not satisfied with it. What level of
> > library is good enough for boost? The library is not ready for
> > standardization; however, this doesn't appear to be the standard
> > that most people use when deciding upon their vote. This library is
> > certainly good enough for people to make successful use of.
> > The "default" black disc in the diagrams is too large! Maybe make
> > the radius about 2/3 of what it is now.
> I think the size is in line with the current UML standard,
> but I'll check again.

Sorry, I didn't realize that UML had picked a standard size. In
Harel's paper, the default black discs were even smaller than what I
had suggested.


Boost list run by bdawes at, gregod at, cpdaniel at, john at