From: Delfin Rojas (drojas_at_[hidden])
Date: 2004-07-01 15:01:04
It seems most people post here at night PST. I never thought my posting
would generate so many interesting discussions.
Vladimir, of your four options I agree that #1 and #4 (or a combination of
both) would work well for most cases. However, I still think a #define-based
option would be best. Let me explain:
I have been looking at the library code, and the only change needed would be
a preprocessor define to turn wide-character strings on or off, with TChar
strings used everywhere in the code. When the code is compiled for POSIX
systems, this Unicode define would be turned off. In the Windows-specific
code, all calls to the Windows API would need to change from "FunctionCallA"
to "FunctionCall", since internally the Windows API also works with TChar.
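Roughly, the kind of thing I have in mind is sketched below; fs_char,
fs_string and remove_file are names I just made up for illustration, not
anything from the actual library:

    // Rough sketch only; fs_char, fs_string and remove_file are invented names.
    #include <string>

    #if defined(_WIN32) && defined(UNICODE)
        typedef wchar_t fs_char;        // wide build
    #else
        typedef char    fs_char;        // narrow build (and all POSIX builds)
    #endif
    typedef std::basic_string<fs_char> fs_string;

    #ifdef _WIN32
    #include <windows.h>

    bool remove_file(const fs_string& name)
    {
        // The un-suffixed name expands to DeleteFileW when UNICODE is
        // defined and to DeleteFileA otherwise, matching fs_char above.
        return ::DeleteFile(name.c_str()) != 0;
    }
    #endif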
The caller could also use the TChar idea so that its code talks to the
library seamlessly, and string constants can be expressed in TChars
(_T("my string") on Windows). Even on Windows 9x and Me, where the Windows
API is not natively Unicode, this approach works if the Microsoft
redistributable DLL unicows.dll is placed in the directory the application
runs from. That DLL translates wide-string API calls into narrow strings and
converts the responses back to wide strings.
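On the caller side that could look like this (again just an illustration;
the tstring alias is not part of any library):

    // Caller-side sketch; builds on Windows where <tchar.h> is available.
    #include <tchar.h>   // TCHAR and the _T() macro
    #include <string>

    typedef std::basic_string<TCHAR> tstring;   // illustration only

    int main()
    {
        // _T("...") expands to L"..." when _UNICODE is defined and to a
        // plain narrow literal otherwise, so the same source builds both ways.
        tstring path = _T("my string.txt");
        return path.empty() ? 1 : 0;
    }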
As for a library that can be passed both narrow (char) and wide (wchar_t)
strings, that is also a possibility, and it would fit well with the scenario
I just described. The library could always apply a string_cast<TChar> to
make sure any incoming string is converted to the string type it uses
internally. If the library is compiled to use wide strings internally,
string_cast<TChar> would convert char strings to wchar_t strings and leave
wchar_t strings unchanged; the opposite happens when the Unicode define is
turned off. However, I feel this is not the best interface, since it would
let the caller mix narrow and wide strings, which is generally not good
practice. Converting strings back and forth is not fast, and conversions may
not always produce what you expect, especially if you are a novice working
with encodings.
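For reference, a string_cast along those lines could look roughly like the
sketch below; I am just using the standard mbstowcs/wcstombs with the
current locale, which is exactly the kind of lossy, locale-dependent
conversion I am worried about:

    // Sketch of a string_cast<To>(); conversions go through the current
    // C locale via mbstowcs/wcstombs, so they are lossy and locale-dependent.
    #include <cstdlib>
    #include <string>

    template <typename To, typename From>
    struct string_cast_impl;                     // only the useful pairs defined

    template <typename Ch>                       // same type: pass through
    struct string_cast_impl<Ch, Ch>
    {
        static std::basic_string<Ch> apply(const std::basic_string<Ch>& s)
        { return s; }
    };

    template <>                                  // char -> wchar_t
    struct string_cast_impl<wchar_t, char>
    {
        static std::wstring apply(const std::string& s)
        {
            if (s.empty()) return std::wstring();
            std::wstring out(s.size(), L'\0');
            std::size_t n = std::mbstowcs(&out[0], s.c_str(), out.size());
            out.resize(n == std::size_t(-1) ? 0 : n);
            return out;
        }
    };

    template <>                                  // wchar_t -> char
    struct string_cast_impl<char, wchar_t>
    {
        static std::string apply(const std::wstring& s)
        {
            if (s.empty()) return std::string();
            std::string out(s.size() * MB_CUR_MAX, '\0');
            std::size_t n = std::wcstombs(&out[0], s.c_str(), out.size());
            out.resize(n == std::size_t(-1) ? 0 : n);
            return out;
        }
    };

    // The interface described above: string_cast<TChar>(some_string).
    template <typename To, typename From>
    std::basic_string<To> string_cast(const std::basic_string<From>& s)
    {
        return string_cast_impl<To, From>::apply(s);
    }

With something like that, each library entry point could simply do
string_cast<TChar>(whatever the caller passed), but as I said, I am not
convinced that encouraging mixed narrow/wide calls is a good idea.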
Somebody mentioned that Java doesn't have this problem. That is because all
strings in Java are UTF-16 (wide) strings.
Let me know what you guys think of all this.
Thanks
-delfin
-----Original Message-----
From: boost-users-bounces_at_[hidden]
[mailto:boost-users-bounces_at_[hidden]] On Behalf Of David Abrahams
Sent: Thursday, July 01, 2004 9:46 AM
To: boost-users_at_[hidden]
Subject: [Boost-users] Re: Feature request for boost::filesystem
Vladimir Prus <ghost_at_[hidden]> writes:
> David Abrahams wrote:
>
>>> 1. Make the library interface templated.
>>> 2. Use narrow classes: e.g. string
>>> 3. Use wide classes: e.g. wstring
>>> 4. Have some class which works with ascii and unicode.
>>>
>>> The first approach is bad for code size reasons.
>>
>> It doesn't have to be. There can be a library object with explicit
>> instantiations of the wide and narrow classes.
>
> Which doubles the size of shared library itself.
It depends; the narrow specialization might be implemented in terms
of the wide one ;-)
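(For what it's worth, the "narrow specialization implemented in terms of the
wide one" idea could look roughly like the sketch below; basic_path and
to_wide are invented names, not the real library:)

    // Sketch of "narrow in terms of wide" with explicit instantiations.
    #include <string>

    template <typename Ch>
    class basic_path
    {
    public:
        explicit basic_path(const std::basic_string<Ch>& s) : str_(s) {}
        bool exists() const;            // defined only in the library source
    private:
        std::basic_string<Ch> str_;
    };

    // Crude placeholder conversion, adequate for plain ASCII only.
    inline std::wstring to_wide(const std::string& s)
    {
        return std::wstring(s.begin(), s.end());
    }

    // One real implementation, for the wide case ...
    template <>
    bool basic_path<wchar_t>::exists() const
    {
        /* call the wide OS API here */
        return !str_.empty();
    }

    // ... and the narrow one forwards to it, so only the conversion,
    // not the whole implementation, is duplicated.
    template <>
    bool basic_path<char>::exists() const
    {
        return basic_path<wchar_t>(to_wide(str_)).exists();
    }

    // Explicit instantiations emitted once, inside the shared library.
    template class basic_path<char>;
    template class basic_path<wchar_t>;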
--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com