Can boost::tokenizer tokenize a 2-byte character string?

17 Sep 2004, 10:39 p.m.
Hi. I am trying to use 'boost::tokenizer<boost::char_separator<char> >' to separate 2-byte character strings such as Korean, Japanese, or Chinese, but I found that it does not work correctly. Is there a solution? Thanks for the help, Lee Joo-Young
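
For reference, here is a minimal sketch of one common workaround (not from the original thread): instantiating the tokenizer over wide characters, so that each CJK character occupies a single wchar_t code unit instead of several bytes. The sample string and separator are illustrative assumptions, not from the question.

#include <boost/tokenizer.hpp>
#include <iostream>
#include <string>

int main() {
    // With plain char, a multi-byte encoding (e.g. EUC-KR, Shift-JIS,
    // or UTF-8) lets char_separator<char> split in the middle of a
    // character. Wide strings avoid that for BMP text.
    std::wstring text = L"\uD558\uB098,\uB458,\uC14B";  // "one,two,three" in Korean

    boost::char_separator<wchar_t> sep(L",");
    typedef boost::tokenizer<boost::char_separator<wchar_t>,
                             std::wstring::const_iterator,
                             std::wstring> wtokenizer;

    wtokenizer tok(text, sep);
    for (wtokenizer::const_iterator it = tok.begin(); it != tok.end(); ++it)
        std::wcout << *it << L"\n";  // printing may also need a locale set up
    return 0;
}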