From: Andrzej Krzemienski (akrzemi1_at_[hidden])
Date: 2020-09-15 23:00:45
On Tue, Sep 15, 2020 at 6:09 PM Vinnie Falco <vinnie.falco_at_[hidden]> wrote:
> On Tue, Sep 15, 2020 at 8:47 AM Andrzej Krzemienski via Boost
> <boost_at_[hidden]> wrote:
> > ...Boost Review puts
> > the bar high for the libraries, so I guess this question should be
> > answered: what guarantees can a JSON library give us with respect to
> > accuracy of numbers?
>
> This is a great question. My philosophy is that there can be no single
> JSON library which satisfies all use-cases, because there are
> requirements which oppose each other. Some people for example want to
> parse comments and be able to serialize the comments back out. This
> would be a significant challenge to implement in the json::value
> container without impacting performance. It needs to be said up-front,
> that there are use-cases for which Boost.JSON will be ill-suited, and
> that's OK. The library targets a specific segment of use-cases and
> tries to excel for those cases. In particular, Boost.JSON is designed
> to be a competitor to JSON for Modern C++ ("nlohmann's json") and
> RapidJSON. Both of these libraries are wildly popular.
>
> Support for extended or arbitrary precision numbers is something that
> we can consider. It could be added as a new "kind", with a custom data
> type. By necessity this would require dynamic allocation to store the
> mantissa and exponent, which is fine. However note that the resulting
> serialized JSON from these arbitrary precision numbers is likely to be
> rejected by many implementations. In particular, JavaScript in the
> browser and Node.js on the server would reject such numbers.
>
> As a goal of the library is suitability as a vocabulary type,
> homogeneity of interface (same integer and floating point
> representation on all platforms) is prioritized over min/maxing (using
> the largest representation possible). The cousin to homogeneity is
> compatibility - we would like to say that ANY serialized output of the
> library will be recognizable by most JSON implementations in the wild.
> If we support arbitrary precision numbers, some fraction of outputs
> will no longer be recognized. Here we have the aforementioned tension
> between features and usability. Increasing one decreases the other.
>
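For context, the "homogeneity" described above means that a parsed number
always lands in exactly one of three kinds, on every platform. A minimal
sketch of what that looks like, assuming the proposed json::parse and
json::kind interface:

    #include <boost/json.hpp>
    #include <cassert>

    int main()
    {
        namespace json = boost::json;

        // Every parsed number is stored as exactly one of three
        // kinds, regardless of platform.
        json::value a = json::parse("-42");                  // fits int64_t
        json::value b = json::parse("18446744073709551615"); // fits uint64_t only
        json::value c = json::parse("3.14");                 // floating point

        assert(a.kind() == json::kind::int64);
        assert(b.kind() == json::kind::uint64);
        assert(c.kind() == json::kind::double_);
    }
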
So, I can see the design goals and where they come from. For the record, I
am not requesting support for arbitrary-precision numbers. This is just my
way of trying to determine the scope of this library. I would appreciate it
if you said something similar in the docs in some "design decisions"
section. To me, the sentence "This library provides containers and
algorithms which implement JSON", followed by a reference to Standard
ECMA-262 <https://www.ecma-international.org/ecma-262/10.0/index.html>,
somehow implied that the library is able to parse just *any* JSON input.
That high-level contract -- as I understand it -- is:
1. Any json::value that you can build can be serialized and then
deserialized, and you are guaranteed that the resulting json::value will be
equal to the original.
2. JSON inputs whose number values cannot be represented losslessly in
uint64_t, int64_t, or double may produce different values when parsed and
then serialized back, and extremely big number values can even fail to
parse (see the sketch after this list).
3. Whatever JSON output you can produce with this library is guaranteed to
be parsed by any common JSON implementation (probably one also based on
uint64_t + int64_t + double).
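To make points 1 and 2 concrete, here is a minimal sketch, assuming the
library's parse/serialize interface as proposed (the exact text printed for
the overflowing number may differ):

    #include <boost/json.hpp>
    #include <cassert>
    #include <iostream>

    int main()
    {
        namespace json = boost::json;

        // Point 1: a value built in memory survives a
        // serialize/parse round trip unchanged.
        json::value v1 = { {"id", 42}, {"pi", 3.14} };
        assert(json::parse(json::serialize(v1)) == v1);

        // Point 2: a number wider than uint64_t falls back to
        // double, so the serialized text differs from the input.
        json::value v2 = json::parse("123456789012345678901");
        std::cout << json::serialize(v2) << "\n";
        // prints an approximation, e.g. 1.2345678901234568e+20
    }

Note that once the precision is lost in v2, further round trips should be
stable: the double re-parses to the same double.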
Regards,
&rzej;