
From: Mathias Gaunard (mathias.gaunard_at_[hidden])
Date: 2020-09-21 13:03:40


This is my review of Boost.JSON by Vinnie Falco and Krystian Stasiowski.

I believe that the library should be REJECTED, though it would be a
much better candidate with just a few minor tweaks here and there.

The library provides the following components:
- a parser framework
- a DOM to represent/construct/edit trees of JSON-like values (json::value)
- a serializer for that type

Unfortunately, none of these three components is quite satisfactory
when it comes to addressing the problem domain of JSON as an
interchange format.
JSON is a ubiquitous technology, and I feel this library does not
deserve a name such as Boost.JSON, which would suggest "the right
way to do JSON as seen by the C++ community".

Parsing is arguably the most important feature of any such library.
With Boost.JSON, however, it feels like an afterthought: only its
coupling with json::value appears to have been considered.

These are the main issues:
- The DOM's lack of proper number support apparently propagates to
the parser as well, even though it would cost the parser nothing to
support arbitrary numbers.
- The interface is closely tied to a text format and doesn't support
JSON-inspired binary formats. This kind of forward thinking is what
would really elevate those parsers.
- Only a push parser is provided. Parsing into one's own types
usually requires a pull parser, so that the parsers for nested
members can compose into the parser for the outer type.

The way the parser interface handles numbers is not only inconsistent
with the rest of the interface but also limiting.
            bool on_number_part( string_view, error_code& ) { return true; }
            bool on_int64( std::int64_t, string_view, error_code& ) { return true; }
            bool on_uint64( std::uint64_t, string_view, error_code& ) { return true; }
            bool on_double( double, string_view, error_code& ) { return true; }

I would expect an interface that looks like:
            bool on_number_part( string_view, std::size_t n, error_code& ) { return true; }
            bool on_number( string_view, std::size_t n, error_code& ) { return true; }
            bool on_int64( std::int64_t, error_code& ) { return true; }
            bool on_uint64( std::uint64_t, error_code& ) { return true; }
            bool on_double( double, error_code& ) { return true; }

On the positive side, the push parser integrates nicely with
Beast/Asio segmented buffering and incremental parsing.

The DOM provides a canonical or vocabulary type to represent any JSON
document, which is useful in itself.
The DOM does not support numbers that are neither 64-bit integral
numbers nor double-precision floating-point. This is a common
limitation, but still quite disappointing, since its claim to being a
canonical type is undermined by the fact that it cannot represent
JSON data losslessly. It would be better if another type were
provided for numbers that do not fit in the previous two categories.

It uses a region-allocator model based on a polymorphic memory
resource, which is exactly what the new polymorphic allocator model
from Lakos et al. was designed for. It adds a form of reference
counting for life-extension semantics on top, which I am not sure I
am a fan of, but that seems optional.

Querying capabilities for that type are fairly limited and quite
un-C++-like. The examples use a C-style switch/extractor paradigm
instead of the much safer variant-like visitor approach.
There is no direct nested XPath-style querying as in Boost
PropertyTree or other similar libraries.

It feels like the interface is only a marginal improvement on that of rapidjson.

There is no serializer framework to speak of: just a serializer for
the DOM type json::value, i.e. the only way to generate a JSON
document is to generate a json::value and have it be serialized.
A serializer interface symmetric to that of the parser could be
provided instead, allowing arbitrary types to be serialized to JSON
(or any other similar format) without conversion to json::value.

When it comes to serializing floating-point numbers, the library
seems to always use scientific notation. Most libraries provide an
option to choose a digit threshold at which to switch notation.
Integers always use a fixed representation, which makes the output
formatting quite inconsistent.

Review questions:
> - What is your evaluation of the design?
I can see the design is mostly concerned with receiving arbitrary
JSON documents over the network, incrementally building a DOM as the
data is received, and then doing the opposite on the sending path.
It is good at that, I suppose, but it is otherwise quite limited, as
it is unsuitable for using JSON as a serialization format for
otherwise-known message types.

> - What is your evaluation of the implementation?
I only took a quick glance, but it seemed ok. Can't say I'm a fan of
this header-only idea, but some people like it.

> - What is your evaluation of the documentation?
It is quite lacking when it comes to usage of the parser beyond the DOM.
It also doesn't really say how numbers are formatted in the serializer.

> - What is your evaluation of the potential usefulness of the library?
Limited. Given the scope of the library and the quality of its
interface, I don't see any reason to use it instead of rapidjson,
which is an established library for which I already have converters
and Asio stream adapters.

> - Did you try to use the library? With which compiler(s)? Did you have any problems?
I didn't.

> - How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
Maybe half a day reading through the documentation and the mailing list.

> - Are you knowledgeable about the problem domain?
I have used many different JSON libraries and have written several
serialization/deserialization frameworks using JSON, some of which
are deeply tied into everything I do every day.
