Subject: Re: [boost] [1.44] Beta progress?
From: Robert Ramey (ramey_at_[hidden])
Date: 2010-07-28 13:50:27
David Abrahams wrote:
> At Tue, 27 Jul 2010 15:31:50 -0400, David Abrahams wrote, and Robert
> Ramey snipped:
>
>>> And what is your definition of Serializable (precisely, please)?
>
> So could you please answer that question?
>
> At Tue, 27 Jul 2010 13:02:29 -0800,
> Robert Ramey wrote:
>>
>> David Abrahams wrote:
>>> At Tue, 27 Jul 2010 11:58:19 -0800,
>>> try this example, and see how well your library deals with it.
>>>
>>> struct X
>>> {
>>> operator short() const { return 0; }
>>> operator short&() const { return 0; }
>>>
>>> operator long() const { return 0; }
>>> operator long&() const { return 0; }
>>> };
>>>
>>> in concept requirements the use of convertibility almost always
>>> causes problems.
>>
>> As written this would work fine. Since it is not a primitive, the
>> default serialization would be to insist upon the existence of a
>> serialize function.
> Then it wouldn't work fine. It's neither a primitive nor does it have
> a serialize function. You wrote:
>
Note that it could have a non-intrusive serialize function.
So I guess it would be correct to say that whether the above
is serializable depends upon other information not present
in the example. I don't see any way to verify this via concepts.
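For illustration only - a non-intrusive serialize function for a
type like the X above might look roughly like this (the data
member and the name X2 are invented for the sketch, since X has
no members; this uses the documented free-function form and links
against the serialization library):

    #include <sstream>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/serialization/serialization.hpp>

    struct X2 {
        short s;                             // member invented for this sketch
        operator short() const { return s; }
        operator long()  const { return s; }
    };

    namespace boost { namespace serialization {

    // non-intrusive serialize function - X2 itself is untouched
    template<class Archive>
    void serialize(Archive & ar, X2 & x, const unsigned int /*version*/){
        ar & x.s;
    }

    }} // namespace boost::serialization

    int main(){
        std::ostringstream os;
        boost::archive::text_oarchive oa(os);
        const X2 x = { 1 };
        oa << x;  // compiles: X2 is serializable; remove serialize()
                  // above and this line fails to compile
    }

So whether such a type is serializable depends on whether a
function like the one above exists somewhere - which a concept
check on the type alone can't see.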
> I believe that any type implicitly convertible to a c++ primitive
> type (type and reference) is a serializable type.
> and X contradicts that. We can go around and around on this until
> your definition of Serializable is solid, and I'm even willing to do
> so if that's what it takes to help you get this right.
>> Actually, the convertibility isn't stated in the documentation or
>> concept. It's just that when I made the archive models,
>> convertibility reduced/eliminated most of the code. I just plowed
>> on and finished the job. So I suppose the concept as stated isn't
>> accurate.
>
> Doesn't surprise me.
>
The current documentation doesn't say anything about convertibility.
It just happened to be true for the internal types used by
the library. It is only this which raises the question as to whether
the concept as stated needs to be changed. One could well
leave the concept as it is, and note that the archive implementations
have this feature for the particular types used internally. At
that point it would become an implementation detail relevant
only to those who build on the current implementations.
So one would say that the current archives can also handle
some types which are not defined as serializable, even though
this is not guaranteed by the concepts.
And there is precedent for this. shared_ptr is NOT a serializable
type as described by the concepts - and never can be. The
implemented archives include special code for shared_ptr to
work around this and make it serializable anyway. Given
the alternatives, I felt this was the best course - even
though it somewhat muddles the question of exactly
what is serializable.
So I think it's accurate to say that the current concepts describe
sufficient requirements for serializability but not necessary ones.
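To make the "convertibility happens to work" point concrete, here
is a minimal hypothetical fragment (invented names, not the
library's actual internals) showing why a type implicitly
convertible to a primitive slips through overloads written only
for primitives:

    #include <iostream>

    // hypothetical archive fragment: one save() overload per primitive
    struct toy_oarchive {
        void save(long t)   { std::cout << "long "   << t << '\n'; }
        void save(double t) { std::cout << "double " << t << '\n'; }
    };

    // hypothetical internal bookkeeping type, convertible to a primitive
    class class_id_like {
    public:
        explicit class_id_like(long v) : value(v) {}
        operator long() const { return value; }  // implicit conversion
    private:
        long value;
    };

    int main(){
        toy_oarchive ar;
        ar.save(class_id_like(42));  // compiles: the conversion selects save(long)
    }

Nothing in the stated concepts promises this; it is a property of
the particular overload sets in the shipped archives.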
>> I did in fact look into using the concept library. I had a few
>> problems understanding it. To get a better idea I looked for other
>> boost libraries which used it and didn't find any.
>
> Then you didn't look very hard.
>
>> (I took a special look at the iterators library!).
> The iterators library does in fact use it (though probably not
> everywhere it should). The Graph library uses it all over the place.
>
I just looked again. I found ONE file in all of boost
which includes boost/concept/requires.hpp. (That
was in boost/graph/transitive_reduction.hpp.) I found
no such inclusions anywhere else - including the iterators
library. So though I don't doubt that concepts are used
throughout Boost, I can't see where the concept library is used.
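For reference, a use of the concept check library looks something
like the following sketch (the function itself is made up for
illustration):

    #include <boost/concept_check.hpp>     // BOOST_CONCEPT_ASSERT, basic concepts
    #include <boost/concept/requires.hpp>  // BOOST_CONCEPT_REQUIRES
    #include <vector>

    // hypothetical function template: the macro states the requirement
    // on Iter in the signature and enforces it at compile time
    template<class Iter>
    BOOST_CONCEPT_REQUIRES(
        ((boost::ForwardIterator<Iter>)),  // requirements
        (Iter))                            // return type
    skip_one(Iter first, Iter last){
        BOOST_CONCEPT_ASSERT((boost::Assignable<Iter>)); // "assert" style check
        if(first != last)
            ++first;
        return first;
    }

    int main(){
        std::vector<int> v(3);
        skip_one(v.begin(), v.end());
    }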
>>> For what it's worth, based on these discussions (and not a recent
>>> look at your docs, admittedly) I _think_ I can identify at least one
>>> problem with your specification and your idea of what is a proper
>>> implementation detail. Please tell me if I'm wrong:
>>>
>>> You require archives to handle all primitive types, yet there is a
>>> large class of such types for which you say the interface that
>>> creates instances, and gets and sets their values, is a private
>>> implementation detail.
>>
>> I haven't needed getters/setters for any serialized types. In fact the
>> whole code base only has maybe two.
>
> I didn't say anything about getters and setters. I said "the
> interface that gets and sets their values." An interface that sets
> the value might be the assignment operator. An interface that gets
> the value might be a conversion to int.
The documentation refers to primitive C++ types. These
are all assignable, and a reference can be taken to them. The
current documentation says nothing about convertibility, so
I think it's correct as it stands.
>>> If I have that right, it means there's no reliable way to get their
>>> bits into an archive so they can be deserialized with the same
>>> values they went in with.
>> I don't think this is an issue - at least its never come up as one.
> I think this is exactly the issue that Matthias faced.
I don't think that's the issue that Matthias faced, but he can
speak to that if he wants to.
> If you don't specify how to create a value of any given primitive type,
> how is he
> supposed to deserialize it?
These types (e.g. class_id_type, etc.) are in fact created in the base
archive implementation. References to these types are serialized, so
the serialization doesn't have to construct them. The reason
I made the default constructors private was to detect cases where they
were being constructed without a specific value - this would almost
certainly be an error. When I made these private I did in fact detect
a couple of compile errors which represented potential bugs. They
were easy to fix and that was that.
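A hypothetical sketch of that pattern (names invented for
illustration; the library's actual internal types may differ in
detail):

    #include <iostream>

    // hypothetical internal bookkeeping type following the pattern above:
    // no default construction, an explicit value constructor, and an
    // implicit conversion so it can be written through primitive overloads
    class toy_class_id_type {
    public:
        explicit toy_class_id_type(unsigned int v) : value(v) {}
        operator unsigned int() const { return value; }
    private:
        toy_class_id_type();   // private default ctor: constructing one
                               // without a specific value won't compile
        unsigned int value;
    };

    int main(){
        toy_class_id_type cid(7);               // fine - value supplied
        std::cout << (unsigned int)cid << '\n';
        // toy_class_id_type bad;               // error - default ctor is private
    }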
> Is anyone else implementing archives other than you? If not, he's the
> only serious consumer you have of the archive concept. As the person
> in control of both sides of that contract, you're not going to notice
> these kinds of problems if you don't have solid concept definitions
> and concept checking in place, because you are free to
> (unintentionally) make changes that subtly alter the contract.
This is true and admittedly a problem.
> This comes down to one thing: you need to decide what your public APIs
> are, and you need to have tests for all of them that don't make any
> assumptions beyond what's specified in the API. Maybe it would be
> easier to achieve if someone else were writing the tests.
Great - any volunteers?
There is one issue here that you might have overlooked.
There are two "users" here. The main one is the user of an archive
already made. If he follows the requirements as stated in the
documentation, he is guaranteed that the library will work
as advertised.
The other "user" is one who makes another archive class.
If he follows the concepts as described in the documentation,
he's guaranteed that it will work as advertised. Presumably
he would start with the trivial_archive example as shown in the
documentation (roughly sketched below).
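From memory, the trivial archive example in the documentation is
along these lines - a saving archive that models the concept but
simply discards data (see the docs for the exact form):

    #include <cstddef>
    #include <boost/mpl/bool.hpp>

    // rough sketch of a trivial saving archive: it models the archive
    // concept required by the serialization library but writes nothing
    class trivial_oarchive {
    public:
        typedef boost::mpl::bool_<true>  is_saving;
        typedef boost::mpl::bool_<false> is_loading;

        template<class T> void register_type(){}

        template<class T>
        trivial_oarchive & operator<<(const T & /* t */){ return *this; }

        template<class T>
        trivial_oarchive & operator&(const T & t){ return *this << t; }

        void save_binary(const void * /* address */, std::size_t /* count */){}
    };

    int main(){
        trivial_oarchive oa;
        int i = 42;
        oa << i;   // compiles for any T - the archive just ignores it
        oa & i;
    }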
BUT - the documentation doesn't say much about archive
semantics. There are several examples of archives in the
documentation which implement different semantics. They
all model the concepts, but they do different things.
However, the most useful archive classes - the ones I
included for use out of the box - implement much of the
semantics which make the system widely useful: serialization
of pointers, etc. It's appealing for someone making
a new archive to build on this implementation - just
as Matthias has done. This does include some facilities
which go beyond the original concepts.
I've included a section in the documentation
which describes this implementation, but not in a formal way.
In these implementations, I did in fact depend on the fact
that some internal types were not primitive - though
convertible to primitives. I think Matthias did the same,
but I'm not sure. I think Matthias was surprised when
I removed default constructibility. But he was also
surprised when I changed class_id_type from unsigned
int to least_16_t - which surprised me, since I thought
the latter was just a typedef and not a true class. I
also never anticipated that anyone would care about the
list of internally used types, as I never needed such a
list in the archives I had already created.
In any case, making a concept for an archive called
"all-encompassing archive", similar to the family that
we have, would be quite a bit of work - and out of
proportion to its value in my opinion. And suppose
I felt that it should not be necessary to provide
a comprehensive list of internal types and Matthias did.
We'd be back in the same soup.
Robert Ramey