
From: Mathias Gaunard (mathias.gaunard_at_[hidden])
Date: 2020-09-21 16:44:26

On Mon, 21 Sep 2020 at 16:48, Vinnie Falco <vinnie.falco_at_[hidden]> wrote:
> On Mon, Sep 21, 2020 at 8:26 AM Mathias Gaunard
> <mathias.gaunard_at_[hidden]> wrote:
> > To make people happy I think it's easier to provide a separate
> > incremental push parser and a non-incremental pull parser.
> > I don't think those should be in separate libraries, consistency is key.
> I disagree. I don't have any special expertise to bring to the table
> when it comes to pull parsers. I don't need them for the applications
> and libraries that I am developing. However I am proficient at writing
> the kind of code needed for push parsing, and implementing containers
> (having already done so many times in the past). If I were to
> implement a JSON serialization library (which is what you are asking
> for) I would be doing so basically from a fresh start. What I am
> saying is that this type of implementation does not play to my
> strength. There have already been individuals who proclaim to be
> subject matter experts in this area - they should write this library.
> "Consistency" is a vague goal. But here's what's not vague. Having
> JSON serialization in its own library means it gets its own Travis
> limits for individual forks. CI turnaround is faster. Tests run
> faster. The documentation is focused only on the serialization
> aspects. Users who need only JSON serialization, can use the
> particular library without also bringing in a JSON DOM library. And
> vice versa. To me, this separation of concerns has more value than
> "consistency."
> I also have to wonder, to what consistency do you refer? The parser
> will not be consistent. One is pull, the other is push. The DOM will
> not be consistent. One has it, the other doesn't. Serialization is in
> a similar situation. In fact, when you compare a JSON DOM library to a
> JSON Serialization library, they are about as different as you can
> imagine, sharing only the nominal serialization format. Not much code
> would be shared between a DOM and a serialization library.

Maybe "pull parser" is not the right terminology; the parser in
question would not build any DOM.

It would be something like:

template<class T>
struct deserialize_impl;

template<class T, class Deserializer>
T deserialize(Deserializer& s)
{
    return deserialize_impl<T>::impl(s);
}

struct Bar
{
    int value;
};

struct Baz
{
};

struct Foo
{
    Bar bar;
    std::string baz;
};

template<>
struct deserialize_impl<Bar>
{
    template<class Deserializer>
    static Bar impl(Deserializer& s)
    {
        int value = std::stoi(s.number());
        return Bar{value};
    }
};

template<>
struct deserialize_impl<Foo>
{
    template<class Deserializer>
    static Foo impl(Deserializer& s)
    {
        Bar bar = deserialize<Bar>(s);
        std::string baz = s.string();
        return Foo{std::move(bar), std::move(baz)};
    }
};

(possibly with shorthand syntax to simplify redundant patterns, such
as automatic object_begin/object_end and deserialize function variant
with automatic key, which would allow constructing directly)
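To make that shorthand concrete, here is a rough sketch. The
`object_begin`/`object_end`/`key` events, the `deserialize_field`
helper, and the `toy_deserializer` event source are all hypothetical
names invented for illustration; a real incremental parser would
supply the events.

```cpp
#include <deque>
#include <string>

struct Bar { int value; };
struct Foo { Bar bar; std::string baz; };

template<class T> struct deserialize_impl;

template<class T, class Deserializer>
T deserialize(Deserializer& s)
{
    return deserialize_impl<T>::impl(s);
}

// Hypothetical shorthand: consume the key, then the value, so that
// members can be constructed directly in declaration order.
template<class T, class Deserializer>
T deserialize_field(Deserializer& s, std::string const& k)
{
    s.key(k);
    return deserialize<T>(s);
}

template<> struct deserialize_impl<Bar>
{
    template<class Deserializer>
    static Bar impl(Deserializer& s) { return Bar{std::stoi(s.number())}; }
};

template<> struct deserialize_impl<std::string>
{
    template<class Deserializer>
    static std::string impl(Deserializer& s) { return s.string(); }
};

template<> struct deserialize_impl<Foo>
{
    template<class Deserializer>
    static Foo impl(Deserializer& s)
    {
        s.object_begin();  // emitted automatically by the shorthand
        Foo f{ deserialize_field<Bar>(s, "bar"),
               deserialize_field<std::string>(s, "baz") };
        s.object_end();
        return f;
    }
};

// Toy event source over pre-tokenised input, standing in for a real
// incremental parser; its member names match the events used above.
struct toy_deserializer
{
    std::deque<std::string> tokens;
    std::string next() { auto t = tokens.front(); tokens.pop_front(); return t; }
    void object_begin()            { next(); }  // consume "{"
    void object_end()              { next(); }  // consume "}"
    void key(std::string const&)   { next(); }  // consume (and, in a real parser, check) the key
    std::string number()           { return next(); }
    std::string string()           { return next(); }
};
```

Because a braced initializer list evaluates its elements left to
right, the fields are read in declaration order and `Foo` can be
constructed directly, with no intermediate locals.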

You can see that the names and sequence of the events are still
perfectly in line with the push parser.
Serialization could also be written in a symmetric fashion.
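For illustration, a sketch of that symmetric serialize side, using
the same event names; the `ostream_serializer` event sink here is a
hypothetical stand-in for a real incremental serializer.

```cpp
#include <sstream>
#include <string>

struct Bar { int value; };
struct Foo { Bar bar; std::string baz; };

template<class T> struct serialize_impl;

template<class T, class Serializer>
void serialize(Serializer& s, T const& v)
{
    serialize_impl<T>::impl(s, v);
}

// Toy event sink that writes tokens to a stream; stands in for a
// real incremental serializer.
struct ostream_serializer
{
    std::ostringstream out;
    void number(int n)                { out << n << ' '; }
    void string(std::string const& v) { out << '"' << v << "\" "; }
};

template<> struct serialize_impl<Bar>
{
    template<class Serializer>
    static void impl(Serializer& s, Bar const& b)
    {
        s.number(b.value);  // mirrors s.number() on the deserialize side
    }
};

template<> struct serialize_impl<Foo>
{
    template<class Serializer>
    static void impl(Serializer& s, Foo const& f)
    {
        serialize<Bar>(s, f.bar);  // same event order as deserialize
        s.string(f.baz);
    }
};
```

The per-type specializations fire the same events, in the same
order, as their deserialize counterparts.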

> I see nothing wrong with Boost.JSON being the library that users go to
> when they want a container that is suitable as an interchange type.
> And I see nothing wrong with someone proposing a new library that
> implements JSON serialization to and from user-defined types, with
> pull parsers, no DOM, and maybe even quotes from Aleister Crowley in
> the documentation. Having fine-grained libraries which target narrow
> user needs is something we should all agree is good for Boost (and C++
> in general).

The problem with this is that you end up having to pick from a million
different things solving very small problems in slightly inconsistent
ways. There is a cost to adopting any new library or API, both
technical and mental.
In 98% of cases, when I write a websocket interface (with Boost.Beast,
kudos to you), I define my messages as C++ types and I parse incoming
JSON into those types. I could instead parse the incoming JSON into
json::value and either use it directly, losing static naming and
typing, or convert it into my types, which is an unnecessary extra
step that can even introduce loss of data.
In the remaining 2% of cases, I write arbitrary data to some log or
record and I need to be able to visualize it or export it to a
database. In that case, I don't know the schema of the information
ahead of time, and json::value is therefore what I want to use.
In both cases, I'd like to read/write my data from/to JSON with the
same framework.
