From: williamkempf_at_[hidden]
Date: 2001-03-05 10:36:23
--- In boost_at_y..., Daryle Walker <darylew_at_m...> wrote:
> I looked at the recently-uploaded command-line parsing class, and at the
> CmdLine and Options command-line parsing classes others mentioned on the
> list. All of them use a quasi-iterator interface to look at the arguments'
> structure. Why? This interface involves giving the parsing object the
> arguments and then manually walking the parsed list. Why do that when you
> could do it more automatically with a callback interface (kind of like SAX
> works with XML)? Maybe I'll write a demo class with this philosophy.
Brad Appleton's CmdLine does not require iteration, though it does
require an iterator ;). You pass the iterator to CmdLine::parse and
it does the rest, iterating over all the arguments and setting the
values of your CmdArgs. From a usage standpoint this is much easier
to use, but it restricts you somewhat in how parsing occurs. (BTW,
this is very close to SAX-style parsing if you look more closely at
how things are implemented and how you extend the parser.)
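To make that usage pattern concrete, here is a minimal sketch of that
kind of interface. The names (arg, cmdline, add, parse) are invented
for illustration and are not Brad Appleton's actual CmdLine API:

#include <iostream>
#include <map>
#include <string>

// The caller declares argument objects, registers them, and then hands the
// whole argument list to parse(), which walks it and fills the objects in.
struct arg
{
    std::string name;
    std::string value;
    bool seen;
    explicit arg(const std::string& n) : name(n), seen(false) {}
};

class cmdline
{
public:
    void add(arg& a) { args_["--" + a.name] = &a; }

    // One call does all the iteration; the caller never touches the tokens.
    void parse(int argc, char* argv[])
    {
        for (int i = 1; i < argc; ++i)
        {
            std::map<std::string, arg*>::iterator it = args_.find(argv[i]);
            if (it != args_.end() && i + 1 < argc)
            {
                it->second->value = argv[++i];
                it->second->seen = true;
            }
        }
    }

private:
    std::map<std::string, arg*> args_;
};

int main(int argc, char* argv[])
{
    arg input("input"), output("output");
    cmdline cmd;
    cmd.add(input);
    cmd.add(output);
    cmd.parse(argc, argv);   // e.g. prog --input in.txt --output out.txt
    if (input.seen)
        std::cout << "input = " << input.value << '\n';
    return 0;
}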
As for the code I posted here... I specifically chose to use a "pull"
approach instead of a "push" approach for several reasons:
1) In general, pull architectures are easier to implement.
2) Pull architectures are relatively easy to use as-is, while push
architectures often require a lot of extra harness code to be written.
3) Pull architectures are generally more flexible than push
architectures in how they are used.
4) Pull architectures are trivial to use to implement various push
architectures, while the reverse often is not true.
3 & 4 are the crux of my design decision. A push architecture like
Brad Appleton's CmdLine can be trivially built from my pull
architecture, while full flexibility in parsing is retained for other
approaches.
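To illustrate point 4, here is a minimal sketch, again with invented
names, of how a SAX-like push driver can be layered on top of a pull
interface. The pull interface below only stands in for the code I
posted; it does not reproduce that code's interface:

#include <iostream>
#include <string>

// Hypothetical "pull" interface: the caller asks for one parsed argument
// at a time.
class arg_parser
{
public:
    arg_parser(int argc, char* argv[]) : argc_(argc), argv_(argv), pos_(1) {}

    // Returns false when the arguments are exhausted.
    bool next(std::string& name, std::string& value)
    {
        if (pos_ >= argc_)
            return false;
        name = argv_[pos_++];
        value = (pos_ < argc_) ? argv_[pos_++] : "";
        return true;
    }

private:
    int argc_;
    char** argv_;
    int pos_;
};

// SAX-like "push" handler: the driver calls back into it for each argument.
struct arg_handler
{
    virtual ~arg_handler() {}
    virtual void on_argument(const std::string& name,
                             const std::string& value) = 0;
};

// The push driver is nothing more than a loop over the pull interface --
// this is the "trivially built" layering described above.
void parse_push(arg_parser& parser, arg_handler& handler)
{
    std::string name, value;
    while (parser.next(name, value))
        handler.on_argument(name, value);
}

struct print_handler : arg_handler
{
    void on_argument(const std::string& name, const std::string& value)
    {
        std::cout << name << " = " << value << '\n';
    }
};

int main(int argc, char* argv[])
{
    arg_parser parser(argc, argv);
    print_handler handler;
    parse_push(parser, handler);   // push front end built on the pull parser
    return 0;
}

Going the other way -- recovering a pull interface from a callback-driven
parser -- generally requires buffering or restructuring the caller, which
is why the reverse often is not true.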
Bill Kempf