RE: [Boost-users] Can boost::tokenizer tokenize 2byte character string?

20 Sep 2004, 10:55 a.m.
Joo-Young Lee <trowind@gmail.com> wrote:
I am trying to use `boost::tokenizer<boost::char_separator<char> >` to tokenize strings of two-byte characters, such as Korean, Japanese, or Chinese.
However, I found that it does not work correctly.
Is there a solution?
You should probably convert them to wide strings and use the `wchar_t` version instead. The standard C++ library and the Boost libraries tend not to work well with multibyte encodings.
--
Ben Hutchings