From: Jeff Flinn (TriumphSprint2000_at_[hidden])
Date: 2004-10-16 10:00:49
"Dirk Gregorius" <dirk_at_[hidden]> wrote in message
> I'd like to break a file into tokens for processing. The file contains
> comments which are introduced by "//", "#" and ";". Can I set up the
> tokenizer directly such that the comments are skipped? If not, what would
> you suggest to erase the comments from my string before processing?
Since no one else has suggested these:
IMO, this sounds more like an application for Spirit or Regex. In Spirit you
would do something along these lines:
// note this is untested but gives an idea of the
// facilities available. The skip parser handles whitespace
// and all three comment styles from the question.
spirit::rule<> rSkip
    =   space_p
    |   comment_p("//")
    |   comment_p("#")
    |   comment_p(";")
    ;
// a token is a run of non-whitespace characters; the action
// appends each match to a std::vector<std::string> named tokens
spirit::rule<> rToken
    = lexeme_d[ +(anychar_p - space_p) ][ push_back_a(tokens) ];
spirit::parse_info<> lResults = parse( first, last, *rToken, rSkip );
This is certainly worth a look on your part.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk