Subject: Re: [boost] Unicode: what kind of binary compatibility do we want?
From: Rogier van Dalen (rogiervd_at_[hidden])
Date: 2009-06-02 13:08:35
On Tue, Jun 2, 2009 at 04:09, Mathias Gaunard
<mathias.gaunard_at_[hidden]> wrote:
> The work from Graham Barnett back in 2005 defined an abstract base class
> with virtual functions for every Unicode-related feature, but I believe
> that's overkill.
>
> Basically, the current property design I have is like this
>
> struct some_property
> {
>     enum type
>     {
>         some_default,
>         some_value1,
>         some_value2,
>         ...
>         _count
>     };
> };
>
> some_property::type get_some_property(char32 ch);
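
Just to check that I'm reading this right: client code would then use
one free function and one enum per property, along these lines (the
code point here is an arbitrary example, not from your code)?

    char32 ch = 0x00E9; // U+00E9, for instance
    some_property::type p = get_some_property(ch);
    switch (p)
    {
    case some_property::some_value1:
        // handle this class of code points
        break;
    default:
        break;
    }
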
I don't remember Graham's rationale, but I can see two reasons why he
may have chosen that design.
(1) Looking at common query sequences, in Unicode normalisation for
example, I think you'll extract several properties of the same code
point one after another. That may have to be optimised; a rough sketch
of the access pattern I have in mind follows below.
(2) Some OSs contain Unicode databases; some standard libraries do;
and some people may use the private-use code points. Plugging in
different databases should probably be possible.
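
To illustrate what I mean by (1), with made-up names (this is only a
sketch of the access pattern, not a proposal): canonical normalisation
wants, say, the combining class and the decomposition of the same code
point, and with one free function per property that can turn into
several independent table lookups where a single combined lookup might
do:

    // Hypothetical: one record holding several properties, filled by
    // a single table lookup.
    struct code_point_record
    {
        unsigned char combining_class;   // canonical combining class
        bool          has_decomposition; // canonical decomposition exists
        // ... whatever else normalisation needs
    };

    const code_point_record & get_record(char32 ch); // one lookup

Whether that actually matters in practice would need measuring, of
course.
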
I haven't thought this through, so correct me if I'm wrong. I'm not
sure your current design works well for (1). I think (2) can be solved
without virtual functions, but a sketch of how a different database
would be plugged in might put any doubts to rest.
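
Something along these lines, perhaps; the names are made up and a
compile-time policy is only one of several options:

    // Hypothetical sketch: the property lookups become static members
    // of a database policy, and algorithms take that policy as a
    // template parameter.
    struct default_unicode_database
    {
        static some_property::type get_some_property(char32 ch);
    };

    template<class Database>
    some_property::type query_some_property(char32 ch)
    {
        return Database::get_some_property(ch);
    }

    // Convenience overload that uses the database shipped with the
    // library.
    inline some_property::type query_some_property(char32 ch)
    {
        return query_some_property<default_unicode_database>(ch);
    }

Someone who assigns meanings to private-use code points, or who wants
to use the OS database, could then pass their own database type, while
ordinary callers never pay for a virtual call.
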
Hope this helps.
Cheers,
Rogier