From: Terje Slettebø (tslettebo_at_[hidden])
Date: 2002-05-24 20:37:57
>From: "Mattias Flodin" <flodin_at_[hidden]>
>>On Thu, May 23, 2002 at 02:39:34PM +0200, Terje Slettebø wrote:
>> However, this does mean that the semantics is changed slightly. In the
>> case where an implicit conversion exists, it may give a different result
>> than where one does not, in the case of char/wchar_t. Without implicit
>> conversion, you get 1 -> '1'. With implicit conversion, you get 1 -> 1
>> (ASCII 1). As far as I can tell, this only affects conversions between
>> char/wchar_t and other types, though. If this is a problem, please let me
>> know, and I can change it to make an exception for char/wchar_t.
>Are you saying that lexical_cast<char>(int(1)) would give '\01'? This
>seems to almost defeat the purpose of lexical_cast, if you ask me.
Yeah, I've come to the same conclusion. A reply on the Boost Users' list made
the same point.
Considering this, it does indeed seem like a reasonable conversion for
something called lexical_cast. After all, this is how numbers are converted
to strings, so it makes sense that the same happens for characters.
Therefore, I've changed this so that it performs the usual conversion (1 <->
'1') from/to char/wchar_t, making it consistent with the conversion from/to
strings.
>> - wlexical_cast - A version that can use wide characters
>> - This is already handled by the previous version.
>What I'd like though is for lexical_cast<wstring>(1) to work as
>expected - i.e. produce a string with a wide-character lexical
>representation of the number 1. Having to use a function with a
>different name makes it pretty much impossible to write a program that
>is transparent to the width of characters, without having to use
>differently named functions for each width.
Absolutely. I meant that wide character support should be handled using
lexical_cast, i.e. no new name. As you say, being able to use the same name
is important for generic code.
This is already possible, using the version I uploaded. :)
At the moment, this requires partial specialisation, but I intend to make a
version that doesn't require that. However, a reasonably standards compliant
compiler should be able to handle the current version.
Thanks for the feedback. :)
Another thing I'm wondering about: using implicit conversion where it's
available (except for the special cases, like conversion from/to
char/wchar_t, as mentioned) means that the following works:
int i=lexical_cast<int>(1.23); // double to int
However, using the original version of lexical_cast, the above would throw a
bad_lexical_cast. That's because the function is defined like this:
template<typename Target, typename Source>
Target lexical_cast(Source arg)
{
# ifdef BOOST_LEXICAL_CAST_USE_STRSTREAM
    std::strstream interpreter; // for out-of-the-box g++ 2.95.2
# else
    std::stringstream interpreter;
# endif
    Target result;
    if(!(interpreter << arg) || !(interpreter >> result) ||
       !(interpreter >> std::ws).eof())
        throw bad_lexical_cast();
    return result;
}
Notice the "!(interpreter >> std::ws).eof()" part. When it tries to convert
from double to int, it writes "1.23" to the stringstream and then reads it
back as an integer, which means it stops at the ".". Since the stream isn't
exhausted, the above throws an exception.
However, as I said, using implicit conversion overcomes this problem and
makes the following possible:
int i=lexical_cast<int>(1.23); // i=1
What I'm wondering is, is it ok to perform the conversion as in the latter
case (implicit conversion)? I would think it would be ok, and make the
conversion more flexible, as it takes into account implicit conversions, but
as it's a change in the semantics from the original lexical_cast, I'd like
to get feedback on this.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk