My limited experience is that the tokenizer is faster. I have tried both several times in different schemes, but the tokenizer always comes out ahead by more than a little. I would prefer the split() approach, but I haven't found a way to make it go faster.
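
For what it's worth, here is a rough sketch of the kind of timing harness I mean, just as an illustration and not a definitive benchmark; the parse_with_split and parse_with_tokenizer helpers and the synthetic 100000-value line are my own names for this example, not anything from Boost:

    #include <ctime>
    #include <iostream>
    #include <string>
    #include <vector>

    #include <boost/algorithm/string.hpp>
    #include <boost/foreach.hpp>
    #include <boost/lexical_cast.hpp>
    #include <boost/tokenizer.hpp>

    // Method 1: split the whole line into a vector of substrings, then convert.
    std::vector<double> parse_with_split(const std::string& line)
    {
        std::vector<std::string> parts;
        boost::algorithm::split(parts, line, boost::algorithm::is_any_of(","));
        std::vector<double> out;
        out.reserve(parts.size());
        BOOST_FOREACH(const std::string& s, parts)
        {
            out.push_back(boost::lexical_cast<double>(s));
        }
        return out;
    }

    // Method 2: walk the tokens lazily with boost::tokenizer, converting as we go.
    std::vector<double> parse_with_tokenizer(const std::string& line)
    {
        boost::char_separator<char> sep(",");
        boost::tokenizer<boost::char_separator<char> > tokens(line, sep);
        std::vector<double> out;
        BOOST_FOREACH(const std::string& token, tokens)
        {
            out.push_back(boost::lexical_cast<double>(token));
        }
        return out;
    }

    int main()
    {
        // Build a synthetic comma-separated line of 100000 numbers.
        std::string line;
        for (int i = 0; i < 100000; ++i)
        {
            if (i != 0) line += ',';
            line += boost::lexical_cast<std::string>(i);
        }

        std::clock_t t0 = std::clock();
        std::vector<double> a = parse_with_split(line);
        std::clock_t t1 = std::clock();
        std::vector<double> b = parse_with_tokenizer(line);
        std::clock_t t2 = std::clock();

        std::cout << "split:     " << 1000.0 * (t1 - t0) / CLOCKS_PER_SEC
                  << " ms (" << a.size() << " values)\n"
                  << "tokenizer: " << 1000.0 * (t2 - t1) / CLOCKS_PER_SEC
                  << " ms (" << b.size() << " values)\n";
        return 0;
    }

The numbers will obviously depend on the compiler and the data, so it is worth running something like this against your actual files before deciding.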
 
Larry
----- Original Message -----
From: chun ping wang
Newsgroups: gmane.comp.lib.boost.user
To: boost-users@lists.boost.org
Sent: Wednesday, December 12, 2007 10:56 PM
Subject: [boost-users] tokenizer vs string algorithm split.

Hi, I was wondering which one is better and faster for splitting a file of comma-separated numbers and putting the values into a container of doubles.
1.) Which option is better?
    // Method 1: split into a vector of strings, then convert each element.
    std::vector<std::string> split_string;
    boost::algorithm::trim(flist);
    boost::algorithm::split(split_string, flist, boost::algorithm::is_any_of(","));
    std::vector<double> elements;
    BOOST_FOREACH(const std::string& s, split_string)
    {
        elements.push_back(boost::lexical_cast<double>(s));
    }

    // Method 2: iterate over the tokens and convert each one.
    boost::char_separator<char> sep(",");
    boost::tokenizer<boost::char_separator<char> > tokens(flist, sep);
    std::vector<double> elements;
    BOOST_FOREACH(const std::string& token, tokens)
    {
        elements.push_back(boost::lexical_cast<double>(token));
    }

2.) When is it better to use the string algorithm split instead of the tokenizer, and vice versa?

