From: Rogier van Dalen (rogiervd_at_[hidden])
Date: 2004-10-21 10:04:22
On Wed, 20 Oct 2004 20:18:27 +0200, Erik Wien <wien_at_[hidden]> wrote:
> "Rogier van Dalen" <rogiervd_at_[hidden]> wrote in message
> > Comparing any Unicode data in different or unknown normalisation forms
> > will therefore by definition be slow.
> True... So what we basically need to determine is what is most critical:
> fast comparison of strings (strings always represented in a given NF), or
> fast general string handling (NF determined when needed)?
I'm not quite sure what you mean. Do you propose to check whether a
string is valid when reading it? And do you propose to make sure it is
in some normalization form? Or will you leave it in any form it is in?
What use cases do you envision where only "string handling" that does
not need normalization is used? I have not been able to think of any.
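To make the problem concrete: the same character can be encoded as different code point sequences in different normalization forms, so a naive code-point comparison gives the wrong answer unless both sides are first brought to the same form. The thread is about a C++ Boost library, but a minimal sketch using Python's standard `unicodedata` module (chosen here purely for illustration) shows the effect:

```python
import unicodedata

nfc = "\u00e9"   # 'e acute' as one precomposed code point (NFC form)
nfd = "e\u0301"  # 'e' followed by a combining acute accent (NFD form)

# Plain code-point comparison fails, even though both render identically
print(nfc == nfd)  # False

# Canonically equivalent once both sides are normalized to the same form
print(unicodedata.normalize("NFC", nfc) ==
      unicodedata.normalize("NFC", nfd))  # True
```

This is why comparing strings in unknown normalization forms is inherently slow: each comparison must either normalize on the fly or fall back to a canonical-equivalence walk, whereas strings guaranteed to be in one fixed NF can be compared directly.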
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk