Subject: [Boost-commit] svn:boost r53830 - in trunk/libs/spirit: doc/lex example/lex
From: hartmut.kaiser_at_[hidden]
Date: 2009-06-12 15:22:40
Author: hkaiser
Date: 2009-06-12 15:22:40 EDT (Fri, 12 Jun 2009)
New Revision: 53830
URL: http://svn.boost.org/trac/boost/changeset/53830
Log:
Spirit: documentation
Text files modified:
trunk/libs/spirit/doc/lex/tokenizing.qbk | 34 +++++++++++++++++++++++++++++++---
trunk/libs/spirit/example/lex/word_count_functor.cpp | 4 ++--
2 files changed, 33 insertions(+), 5 deletions(-)
Modified: trunk/libs/spirit/doc/lex/tokenizing.qbk
==============================================================================
--- trunk/libs/spirit/doc/lex/tokenizing.qbk (original)
+++ trunk/libs/spirit/doc/lex/tokenizing.qbk 2009-06-12 15:22:40 EDT (Fri, 12 Jun 2009)
@@ -58,10 +58,38 @@
the initial lexer state for the tokenization.]]
]
-A second overload of the `tokenize()` function allows to specify ana arbitrary
-function of function object to be called for each of the generated tokens:
+A second overload of the `tokenize()` function allows specifying an arbitrary
+function or function object to be called for each of the generated tokens. For
+some applications this is very useful, as it may help avoid lexer semantic
+actions altogether. For an example of how to use this overload, please have a
+look at [@../../example/lex/word_count_functor.cpp word_count_functor.cpp]:
-
+[wcf_main]
+
+Here is the prototype of this `tokenize()` function overload:
+
+ template <typename Iterator, typename Lexer, typename F>
+ bool tokenize(Iterator& first, Iterator last, Lexer const& lex, F f
+ , typename Lexer::char_type const* initial_state = 0);
+
+[variablelist where:
+ [[Iterator& first] [The beginning of the input sequence to tokenize.
+ The value of this iterator will be updated by
+ the lexer, pointing to the first character of
+ the input not matched when the function
+ returns.]]
+ [[Iterator last] [The end of the input sequence to tokenize.]]
+ [[Lexer const& lex] [The lexer instance to use for tokenization.]]
+ [[F f] [A function or function object to be called for
+ each matched token. This function is expected to
+ have the prototype `bool f(Lexer::token_type);`
+ and should return `false` if the supplied token
+ instance is invalid (in that case the `tokenize()`
+ function returns immediately).]]
+ [[Lexer::char_type const* initial_state]
+ [This optional parameter can be used to specify
+ the initial lexer state for the tokenization.]]
+]
[heading The generate_static function]
Modified: trunk/libs/spirit/example/lex/word_count_functor.cpp
==============================================================================
--- trunk/libs/spirit/example/lex/word_count_functor.cpp (original)
+++ trunk/libs/spirit/example/lex/word_count_functor.cpp 2009-06-12 15:22:40 EDT (Fri, 12 Jun 2009)
@@ -145,8 +145,8 @@
/*` The main function simply loads the given file into memory (as a
`std::string`), instantiates an instance of the token definition template
using the correct iterator type (`word_count_tokens<char const*>`),
- and finally calls `lex::tokenize`, passing an instance of the counter functor
- defined above. The return value of `lex::tokenize` will be `true` if the
+ and finally calls `lex::tokenize`, passing an instance of the counter function
+ object. The return value of `lex::tokenize()` will be `true` if the
whole input sequence has been successfully tokenized, and `false` otherwise.
*/
int main(int argc, char* argv[])
Boost-Commit list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk