
If it is a 2-byte character string, why are you using boost::char_separator<char>? Shouldn't you be using boost::char_separator<wchar_t>?

Gennadiy.
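Something along these lines should work. This is only a minimal, untested sketch assuming the input is already held in a std::wstring; the placeholder text and the L" ," delimiter set are illustrative, not taken from the original post:

#include <boost/tokenizer.hpp>
#include <iostream>
#include <string>

int main()
{
    // Placeholder wide string; in practice this would hold the Korean,
    // Japanese, or Chinese text to be tokenized.
    std::wstring text = L"one two three";

    // Separator and tokenizer parameterized on wchar_t instead of char.
    boost::char_separator<wchar_t> sep(L" ,");
    typedef boost::tokenizer<boost::char_separator<wchar_t>,
                             std::wstring::const_iterator,
                             std::wstring> wtokenizer;

    wtokenizer tok(text, sep);
    for (wtokenizer::iterator it = tok.begin(); it != tok.end(); ++it)
        std::wcout << *it << L"\n";

    return 0;
}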
-----Original Message-----
From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Lee, Joo-Young
Sent: Saturday, September 18, 2004 12:39 AM
To: boost-users@lists.boost.org
Subject: [Boost-users] Can boost::tokenizer tokenize 2byte character string?
Hi.
I am trying to use 'boost::tokenizer<boost::char_separator<char> >' to separate 2-byte character strings such as Korean, Japanese, or Chinese.
But I found that it does not work correctly.
Is there a solution?
Thanks for the help,
Lee Joo-Young