
20 Sep 2004, 10:55 a.m.
Joo-Young Lee <trowind@gmail.com> wrote:
I am trying to use 'boost::tokenizer<boost::char_separator<char> >' to split strings of two-byte characters such as Korean, Japanese, or Chinese.
But I found that it does not work correctly.
Is there a solution?
You should probably convert them to wide strings and use the wchar_t version instead. The standard C++ library and Boost libraries tend not to work well with multibyte encodings.
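As a minimal sketch of what that might look like (assuming the input is in the system's multibyte encoding and the current locale can convert it, and using a placeholder sample string), everything from the string type to the separator and the tokenizer is switched to wchar_t:

#include <boost/tokenizer.hpp>
#include <clocale>   // setlocale
#include <cstdlib>   // mbstowcs
#include <cstring>   // strlen
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // Use the environment's locale so mbstowcs knows the multibyte encoding.
    std::setlocale(LC_ALL, "");

    // Hypothetical input; substitute your own Korean/Japanese/Chinese text.
    const char* narrow = "foo bar,baz";

    // Convert the narrow (multibyte) string to a wide string.
    std::vector<wchar_t> buf(std::strlen(narrow) + 1);
    std::size_t len = std::mbstowcs(&buf[0], narrow, buf.size());
    std::wstring text(&buf[0], len);

    // Tokenize the wide string with the wchar_t versions of the classes.
    typedef boost::tokenizer<boost::char_separator<wchar_t>,
                             std::wstring::const_iterator,
                             std::wstring> wtokenizer;
    boost::char_separator<wchar_t> sep(L" ,");
    wtokenizer tok(text, sep);

    for (wtokenizer::const_iterator it = tok.begin(); it != tok.end(); ++it)
        std::wcout << *it << L'\n';   // wcout may need an imbued locale to
                                      // display non-ASCII characters properly
    return 0;
}

If the conversion itself is the problem, mbstowcs can be replaced with whatever multibyte-to-wide conversion you already have; the important part is that the tokenizer, the separator, and the string type all use wchar_t consistently.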