From: Gavin Lambert (boost_at_[hidden])
Date: 2020-09-23 08:39:57
On 23/09/2020 15:45, Krystian Stasiowski wrote:
>> On 23/09/2020 05:55, Vinnie Falco wrote:
>>>> I also don't see the point of supplying a size argument to on_key* and
>>>> on_string, as this can be read from the string_view argument instead.
>>> Right, so this size argument is actually the total number of
>>> characters, while the string_view::size is the number of characters in
>>> the current piece.
>> Which is a poor design choice, as I said.
> How? The goal is to provide as much information as possible to the handler.
> Forcing users to track it themselves would introduce redundancy.
That was in the context of stating that requiring users of
basic_parser to self-assemble multiple fragments of text was a poor
design choice, vs. handing them the complete preassembled value text.
I've walked back on that one a little bit; I now think that it's ok for
basic_parser to do that (to retain a zero-allocation guarantee), but in
that case it should treat numbers the same way (which it currently
doesn't), and ideally there should be a standard slightly-higher-level
parser that has a similar interface but does preassemble the values,
eliminating the _part callbacks (for people who prefer ease of use over
raw performance).
>> I don't really understand why. basic_parser doesn't actually store any
>> of the keys/values (especially as it doesn't merge value parts), so it
>> should have no reason to impose any kind of limit. This just seems like
>> it's doing an extra count/comparison that could be removed entirely, and
>> removing it would naturally improve performance.
> basic_parser only checks the limit; it does not enforce it. The handler is
> responsible for determining what that limit is. I suppose that we could add
> a little TMP that elides the check statically, but I don't see too much of
> a point in doing so.
I don't see the point in checking the limit in the first place. Either
the handler cares, in which case it can easily count calls to value
callbacks within an on_array_begin/on_array_end block itself (or just
check the count at the end), or it doesn't care, in which case it's a
waste of time for basic_parser to check.
It also assumes that all parts of the object tree have the same limits,
which seems odd (in the case where something does care). I guess the
main issue is that nowhere in the docs can I find any justification for
their existence or why you might ever want to set them.
Overall, basic_parser itself feels more like an interface that fell out
of json::parser rather than one that was deliberately designed on its own.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk