I'm decoding a JSON Web Token (JWT), which is composed of 3 separate base64url-encoded parts (header.payload.signature).

Previously I was using Boost.Serialization's base64 iterators:

    using namespace boost::archive::iterators;
    // Turn 6-bit base64 symbols back into 8-bit bytes
    using It = transform_width<binary_from_base64<std::string::const_iterator>, 8, 6>;
    std::string tmp(It(std::begin(base64_str)), It(std::end(base64_str)));
    // binary_from_base64 maps '=' padding to '\0', so strip trailing NULs
    return boost::algorithm::trim_right_copy_if(tmp, [](char c) { return c == '\0'; });

But since I'm already using Boost.Beast, I thought I'd drop the above dependency, and use this instead:

    using namespace boost::beast::detail; // N.B. detail namespace, not a documented API
    std::string decoded(base64::decoded_size(base64_str.size()), '\0');
    // rc.first = bytes written to the output, rc.second = bytes read from the input
    auto const rc = base64::decode(decoded.data(), base64_str.data(), base64_str.size());
    decoded.resize(rc.first);

But it turns out JWTs never include the trailing '=' padding that "proper" base64 has:
RFC 7515 (JWS) mandates unpadded base64url encoding. In that case Beast's rc.first does not
account for the last byte, which I can still see in the decoded variable in the debugger,
prior to the .resize().
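To quantify what an unpadded tail carries (my own arithmetic, nothing Beast-specific): each base64 symbol is 6 bits, and only whole bytes count, so a leftover 2-symbol group still holds one more byte and a 3-symbol group holds two:

```cpp
#include <cstddef>

// Bytes encoded by n unpadded base64 symbols: n * 6 bits,
// truncated to whole bytes. A remainder of 2 symbols carries
// 1 byte; a remainder of 3 symbols carries 2 bytes.
std::size_t unpadded_decoded_size(std::size_t n)
{
    return n * 6 / 8;
}
```

So for any input whose length is not a multiple of 4, a decoder that only emits complete 4-symbol groups comes up 1 or 2 bytes short.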

Here's the payload part: eyJwYXNzd29yZCI6IlAiLCJ1c2VybmFtZSI6IlUifQ
What it should decode to: {"password":"P","username":"U"}
What Beast's decode yields: {"password":"P","username":"U"
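The dropped byte is easy to recover by hand: the trailing two symbols "fQ" hold exactly the bits of the final '}'. A standalone sketch (my own code, not Beast's) of decoding such a 2-symbol tail:

```cpp
// Value of a single base64 symbol in the standard alphabet.
// Assumption: input is a valid symbol; returns -1 otherwise.
int b64_value(char c)
{
    if (c >= 'A' && c <= 'Z') return c - 'A';
    if (c >= 'a' && c <= 'z') return c - 'a' + 26;
    if (c >= '0' && c <= '9') return c - '0' + 52;
    if (c == '+') return 62;
    if (c == '/') return 63;
    return -1;
}

// A 2-symbol tail encodes exactly one byte: all 6 bits of the
// first symbol plus the top 2 bits of the second.
char decode_tail2(char a, char b)
{
    return static_cast<char>((b64_value(a) << 2) | (b64_value(b) >> 4));
}
```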

Is this on purpose? Could Beast's base64::decode() be made more lenient about missing trailing equals in its input?

JWT handling seems like a natural fit in the context of Beast, and although base64 is an
implementation detail (it lives in the detail namespace), it would seem logical for decode() to cope with unpadded input, no?
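For concreteness, here is a self-contained sketch of the lenient behavior I'm after (my own reference implementation, not Beast code): accumulate 6 bits per symbol and emit a byte whenever 8 bits are available, so a 2- or 3-symbol remainder is flushed instead of dropped. It also accepts the base64url '-'/'_' variants, which is what JWTs actually use:

```cpp
#include <string>

// Reference sketch of a lenient base64 decoder: handles unpadded
// input and the base64url alphabet; stops at '=' padding or the
// first invalid character.
std::string lenient_b64_decode(const std::string& in)
{
    auto val = [](char c) -> int {
        if (c >= 'A' && c <= 'Z') return c - 'A';
        if (c >= 'a' && c <= 'z') return c - 'a' + 26;
        if (c >= '0' && c <= '9') return c - '0' + 52;
        if (c == '+' || c == '-') return 62; // '-' is the base64url variant
        if (c == '/' || c == '_') return 63; // '_' is the base64url variant
        return -1;
    };

    std::string out;
    int buf = 0, bits = 0;
    for (char c : in) {
        int v = val(c);
        if (v < 0)
            break; // '=' padding or invalid character: stop
        buf = (buf << 6) | v;
        bits += 6;
        if (bits >= 8) {
            bits -= 8;
            out += static_cast<char>((buf >> bits) & 0xFF);
        }
    }
    return out; // any leftover <8 bits are padding zeros, dropped
}
```

With this, the payload above round-trips to the full JSON, final '}' included.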

Thanks, --DD