
Boost Users:

From: Terje Slettebø (terje.s_at_[hidden])
Date: 2002-05-24 21:00:43


>From: ravioli_at_[hidden]

There was feedback about this on the Boost list as well, so I've posted a
reply there, too.

>>However, this does mean that the semantics is changed slightly. In the
>>cases where an implicit conversion exists, it may give a different result
>>than if not, in the case of char/wchar_t. Without implicit conversion, you
>>get 1 -> '1'. With implicit conversion, you get 1 -> 1 (ASCII 1). As far
>>as I can tell, this only affects conversions between char/wchar_t and
>>other types, though. If this is a problem, please let me know, and I can
>>change it to make an exception for char/wchar_t.

>Is this behaviour overridable, for example by adding a specialization
>transforming 1 => '1' ?

Absolutely. That's what I meant by making an exception. This version of
lexical_cast relies heavily on specialisation, and partial specialisation
where available; where it's not available, specialisations for the common
cases, such as char/wchar_t and std::string/std::wstring, are included. I
also intend to make a version where partial specialisation isn't required.
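
To illustrate, here's a minimal sketch of how such an override could look.
The lexical_cast_impl dispatch template is invented for the example, not
taken from the actual implementation:

    #include <istream>
    #include <sstream>
    #include <typeinfo>

    struct bad_lexical_cast : std::bad_cast {};

    // hypothetical dispatch template: lexical_cast<Target>(arg) would
    // forward to lexical_cast_impl<Target, Source>::cast(arg)
    template<typename Target, typename Source>
    struct lexical_cast_impl
    {
        static Target cast(const Source& arg)
        {
            std::stringstream interpreter;
            Target result;

            if(!(interpreter << arg) || !(interpreter >> result) ||
               !(interpreter >> std::ws).eof())
                throw bad_lexical_cast();

            return result;
        }
    };

    // full specialisation overriding int -> char, giving 1 -> '1'
    // (and 12 -> bad_lexical_cast, as more than one character remains)
    template<>
    struct lexical_cast_impl<char, int>
    {
        static char cast(int arg)
        {
            std::stringstream interpreter;
            char result;

            if(!(interpreter << arg) || !(interpreter >> result) ||
               !(interpreter >> std::ws).eof())
                throw bad_lexical_cast();

            return result;
        }
    };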

Considering this, it does indeed seem like a reasonable conversion for
something called lexical_cast. After all, this is how numbers are converted
to strings, so it makes sense that the same happens for characters.

Therefore, I've changed this so that it performs the above conversion
from/to char/wchar_t, to make it consistent with the conversion from/to
std::basic_string.
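
In other words, with the changed semantics:

    std::string s = lexical_cast<std::string>(12); // s == "12"
    char c = lexical_cast<char>(1);                // c == '1', not char(1)
    int i = lexical_cast<int>('7');                // i == 7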

Feedback on the Boost list made the same suggestion as you do here.

> Do you know about any such fast functions or operators (besides the
> conversion operators mentioned earlier here) for some types?

>It seems these good old clib functions are pretty fast for conversions.
>Sorry if they sadly look old-fashioned :
>atoi(), atol(), strtol(), strtod(), sprintf(), sscanf()...

That's no problem in general. Hidden in a library implementation, they can
be as cryptic as they want. :)

>You may laugh at me, but they are, afaik, really the fastest ones :)
>depending on the platform lib, of course

:)

One reason I hesitate with this, however, is what I've mentioned earlier:
being able to customise the formatting by configuring the stringstream
object. This won't be possible with such C functions, because they don't
follow the stream state, including locale settings.

Especially now that it's possible to configure the stringstream interpreter,
and as I'm also working on a new version where you can supply the
stringstream object as an optional argument to lexical_cast, I don't think
it's a good idea to use the C functions above: they would ignore the stream
formatting.
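
For example, assuming the optional-interpreter overload (the exact
signature is still open, so this is just a sketch):

    std::stringstream interpreter;
    interpreter << std::hex; // caller-configured formatting

    // hypothetical overload taking the interpreter as a second argument
    int i = lexical_cast<int>(std::string("ff"), interpreter); // i == 255

    // atoi("ff") would simply return 0 here: the C functions know nothing
    // about the stream's base, width or locale settings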

Is this reasonable for you?

>Maybe, for time types on Unix, asctime(const struct tm *) and strftime(),
>if properly wrapped, and all built-in functions involving complex<> types
>and their conversion to and from doubles, ints, and so on.

These types, like the Roman numbers class you mention below, can be handled
without any extra support from lexical_cast. You just provide the required
stream operators, and any constructors or conversion operators. Therefore, I
don't think these are the responsibility of lexical_cast. In the grand C++
tradition, lexical_cast is designed to be extensible, so that it can handle
any such new types.
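
For instance, std::complex already comes with the required stream
operators, so with the stream-based implementation something like this
should work without any changes to lexical_cast:

    #include <complex>
    #include <string>

    std::complex<double> z =
        lexical_cast<std::complex<double> >(std::string("(1,2)"));
    // operator>> accepts the "(re,im)" form, so z == complex<double>(1, 2)

    std::string s = lexical_cast<std::string>(z); // s == "(1,2)"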

>PS : Have you considered conversion to and from Roman numbers ;) ;) ?

Well, lexical_cast is about conversions between _types_. So if you want to
convert to and from Roman numbers, make a Roman numbers class. :)

If you design it the right way, i.e. including stream operators for
reading/writing Roman numbers, and any implicit conversions you'd like to
have (for example from/to int), then it should work with lexical_cast. :)
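
To make that concrete, here's a rough sketch of such a class (the name, the
parsing strategy and the supported range of 1-3999 are all my own choices,
of course):

    #include <iostream>
    #include <string>

    class roman
    {
    public:
        roman(int value = 0) : value_(value) {} // implicit from int
        operator int() const { return value_; } // implicit to int

        friend std::ostream& operator<<(std::ostream& os, const roman& r)
        {
            static const int   val[] = { 1000, 900, 500, 400, 100, 90,
                                         50, 40, 10, 9, 5, 4, 1 };
            static const char* sym[] = { "M", "CM", "D", "CD", "C", "XC",
                                         "L", "XL", "X", "IX", "V", "IV",
                                         "I" };

            int n = r.value_;
            for(int i = 0; i < 13; ++i)
                while(n >= val[i]) { os << sym[i]; n -= val[i]; }
            return os;
        }

        friend std::istream& operator>>(std::istream& is, roman& r)
        {
            std::string s;
            if(!(is >> s)) return is;

            int total = 0, prev = 0;
            for(int i = int(s.size()) - 1; i >= 0; --i) // right to left
            {
                int v;
                switch(s[i])
                {
                case 'I': v = 1; break;    case 'V': v = 5; break;
                case 'X': v = 10; break;   case 'L': v = 50; break;
                case 'C': v = 100; break;  case 'D': v = 500; break;
                case 'M': v = 1000; break;
                default: is.setstate(std::ios::failbit); return is;
                }

                if(v < prev) total -= v;   // subtractive, as in "IV"
                else { total += v; prev = v; }
            }

            r.value_ = total;
            return is;
        }

    private:
        int value_;
    };

With that in place:

    roman r = lexical_cast<roman>(std::string("MCMLXXXIV")); // int(r) == 1984
    std::string s = lexical_cast<std::string>(roman(1984));  // s == "MCMLXXXIV"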

As you know, lexical_cast now supports implicit conversion, where available.

Thanks for the feedback. :)

Another thing I'm wondering about: using implicit conversion, where
available (except for the special cases, like conversion from/to
char/wchar_t, as mentioned), means that the following works:

int i=lexical_cast<int>(1.23); // double to int

However, using the original version of lexical_cast, the above would throw a
bad_lexical_cast. That's because the function is defined like this:

    template<typename Target, typename Source>
    Target lexical_cast(Source arg)
    {
# ifdef BOOST_LEXICAL_CAST_USE_STRSTREAM
        std::strstream interpreter; // for out-of-the-box g++ 2.95.2
# else
        std::stringstream interpreter;
# endif
        Target result;

        // the whole input must be consumed: after reading the result,
        // only trailing whitespace may remain before end-of-stream
        if(!(interpreter << arg) || !(interpreter >> result) ||
           !(interpreter >> std::ws).eof())
            throw bad_lexical_cast();

        return result;
    }

Notice the "!(interpreter >> std::ws).eof()" part. When converting from
double to int, "1.23" is written to the stringstream and then read back as
an integer, so the read stops at the ".". Since the stream isn't fully
consumed, the above throws an exception.

However, as I said, using implicit conversion overcomes this problem, making
the following possible:

int i=lexical_cast<int>(1.23); // i=1

What I'm wondering is: is it ok to perform the conversion as in the latter
case (implicit conversion)? I would think so, as it makes the conversion
more flexible by taking implicit conversions into account, but as it's a
change in semantics from the original lexical_cast, I'd like to get
feedback on this.

Regards,

Terje

