From: Robert Zeh (razeh_at_[hidden])
Date: 2004-04-07 08:08:49
David Abrahams <dave_at_[hidden]> writes:
> "John R. Bandela" <jbandela_at_[hidden]> writes:
> > Thanks for the e-mail.
> > The problem of char_separator not giving you the correct words and
> > counts is a bug. It was caused by the latest changes to
> > token_functions to speed up tokenizing non-input iterators. The
> > version in Boost 1.31 should work (it is the one prior to the
> > changes). I have also just fixed it in the CVS.
> Thanks! All the docs still show char_delimiters_separator as the
> default, though.
Should we consider creating some regression tests for the tokenizer?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk