Subject: Re: [boost] [review] string convert
From: Matt Chambers (matt.chambers42_at_[hidden])
Date: 2011-05-06 01:03:19
On 5/5/2011 6:58 PM, Vladimir Batov wrote:
>> Matthew Chambers <matt.chambers42 <at> gmail.com> writes:
>>
>> Did you ever use boost.convert on a non-defaultable type? I've never needed
>> that.
> [...snip...]
> An enumerator as simple as
>
> enum hamlet_problem { to_be, not_to_be };
>
> has no default. I am not sure if extending the above to
>
> enum hamlet_problem { to_be, not_to_be, i_dont_know };
>
> will be helpful for poor Hamlet. From the design perspectives adding elements
> not relevant to the domain (i_dont_know) is pollution.
I agree that in some cases there is no sensible default/fallback value. In those
cases, the only concern is whether the conversion succeeded, right?
So optional<T> will work fine for that case, but the function returning it would
need a distinct name so it doesn't conflict with the simple, throwing
convert_cast<T>(s):

    optional<hamlet_problem> t = optional_convert_to<hamlet_problem>(s);
This can be in addition to try_convert_to, which would still be useful when the
caller wants both the success flag and a final value (fallback or otherwise).
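To make the two call styles concrete, here is a minimal sketch. Nothing below is
Boost.Convert itself: optional_convert_to and try_convert_to are just the
hypothetical spellings from this thread, the conversion is done with a plain
std::istringstream, and an int stands in for the enum so the example stays
self-contained:

    #include <boost/optional.hpp>
    #include <sstream>
    #include <string>
    #include <iostream>

    // Success-or-nothing conversion: boost::optional<T> carries the result,
    // an empty optional signals failure.
    template <typename T>
    boost::optional<T> optional_convert_to(std::string const& s)
    {
        std::istringstream iss(s);
        T value;
        if (iss >> value)
            return value;
        return boost::none;
    }

    // Success flag plus by-reference output: 'out' keeps its previous
    // (fallback) value when the conversion fails.
    template <typename T>
    bool try_convert_to(std::string const& s, T& out)
    {
        boost::optional<T> result = optional_convert_to<T>(s);
        if (result)
        {
            out = *result;
            return true;
        }
        return false;
    }

    int main()
    {
        boost::optional<int> maybe = optional_convert_to<int>("42");
        if (maybe)
            std::cout << *maybe << "\n";                  // prints 42

        int value = -1;                                   // caller-chosen fallback
        bool ok = try_convert_to("not a number", value);
        std::cout << std::boolalpha << ok << " " << value << "\n";  // false -1
    }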
In an earlier message you wrote this about the try_convert_to method:
> #5 IMO can. It deploys the Pascal-style parameter passing and modifications. I
> remember reading Stroustrup (I think) long time ago advising against passing
> non-const references and I personally agree. That's due to potential confusion
> and wrong expectations. I am not aware of any function in std and boost doing
> that. Introducing such a precedent might be a hard-sell.
If you want a precedent for taking output as a by-reference parameter, look at
boost::algorithm::split. Further, I'm not sure how output references could be
considered more confusing than the output iterators that are ubiquitous in the
std algorithms.
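For example, split's output parameter looks like this in practice (the container
is filled through a non-const reference):

    #include <boost/algorithm/string/split.hpp>
    #include <boost/algorithm/string/classification.hpp>
    #include <string>
    #include <vector>
    #include <iostream>

    int main()
    {
        std::string const csv = "to_be,not_to_be";
        std::vector<std::string> tokens;                  // output parameter
        boost::algorithm::split(tokens, csv,              // filled by reference
                                boost::is_any_of(","));
        for (std::size_t i = 0; i < tokens.size(); ++i)
            std::cout << tokens[i] << "\n";
    }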
Outside the enum case, have you considered that your use of convert on so many
object types mixes lexical conversion and serialization (admittedly related
concepts)?
In my experience, the former uses a simple, concise interface intended for value
types, while the latter uses a more verbose interface that supports object types
as well as value types.
What's the rationale for a lexical conversion library to support serialization?
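To illustrate the difference in interface styles I have in mind (using
boost::lexical_cast and Boost.Serialization as stand-ins; this is not
Boost.Convert code):

    #include <boost/lexical_cast.hpp>
    #include <boost/archive/text_oarchive.hpp>  // needs linking boost_serialization
    #include <sstream>
    #include <string>
    #include <iostream>

    // An arbitrary object type made serializable the usual
    // Boost.Serialization way.
    struct point
    {
        double x, y;

        template <typename Archive>
        void serialize(Archive& ar, unsigned int /*version*/)
        {
            ar & x & y;
        }
    };

    int main()
    {
        // Lexical conversion: one terse expression, value types only.
        int n = boost::lexical_cast<int>("42");
        std::cout << n << "\n";

        // Serialization: explicit archive object and streaming protocol;
        // handles object types, but the interface is correspondingly heavier.
        point const p = { 1.0, 2.0 };
        std::ostringstream os;
        boost::archive::text_oarchive oa(os);
        oa << p;
        std::cout << os.str() << "\n";
    }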
-Matt