From: Alexander Grund (alexander.grund_at_[hidden])
Date: 2024-01-12 08:50:14
On 12.01.24 at 03:16, Robert Ramey via Boost wrote:
> a) Treating Boost "as a unit" and testing on this basis results in an
> amount of work which increases with the square of the number of
> libraries.
How does introducing another dimension (versions of other libraries)
help in this regard?
Now the work is not just num_libs^2, but num_libs^2 multiplied by the
number of versions of each library.
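To put rough (entirely made-up) numbers on it: with about 150 libraries,
testing every library against every other one is already on the order of
150^2 = 22,500 combinations; if each dependency may additionally come in,
say, 3 supported versions, every one of those combinations multiplies
again.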
> b) Modifying libraries to pass tests in other libraries makes one
> library dependent on another, which might not be obvious. Libraries
> should be tested individually as a unit to prove that the
> implementation of the library faithfully implements its exported
> interface.
Don't we do that (testing libraries individually) already? What else is
the purpose of each library's "test" folder?
Where does "Modifying libraries to pass tests in other libraries" happen?
So far I have only observed this when a consuming library exposed a bug
in the consumed library, which is totally fine, isn't it?
So we actually gain something by testing the whole: not only do we have
the unit tests of each library, but the unit tests of a consuming
library also act as integration tests of the consumed one, increasing
test coverage. We "eat our own dog food", so to speak.
> In any case, users should be able to download any number and/or
> combination of libraries (along with their dependencies) and use just
> that. This will avoid making users' applications more complicated than
> they already are.
But interfaces do change. See below.
> If some library/app depends on some other library subject to some
> dependency of compiler level, etc. and that requirement is
> unfulfilled, it should result in a compile-time error. Our config
> library - a masterpiece by John Maddock - is designed to address this
> very problem and does so very well.
Well, that is a great example of why mixing Boost library versions does
not work:
Boost.Config has a growing list of macros such as `BOOST_NO_FOO`, and
most Boost libraries use them to enable, disable or change features.
If a user now combines a newer Boost.X with an older Boost.Config where
that macro didn't exist at all (yet), then Boost.X will fail to compile
or run into known bugs at runtime (e.g. when a workaround is compiled in
only if that macro is defined, i.e. only if the defect is known to
exist).
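To make that concrete, here is a minimal, made-up sketch of the usual
pattern (BOOST_NO_FOO is just the placeholder name from above, and both
code paths are invented):

    #include <boost/config.hpp>

    // Hypothetical workaround selection driven by a Boost.Config macro.
    #ifdef BOOST_NO_FOO
    // Boost.Config says this toolchain lacks or mis-implements "foo":
    // compile the slow but portable fallback.
    inline int compute() { return 0; }
    #else
    // No defect reported: rely on "foo" directly (fast path).
    inline int compute() { return 1; }
    #endif

With an older Boost.Config that simply doesn't know BOOST_NO_FOO yet,
the macro is never defined, so the library always takes the #else branch
even on a compiler that actually needs the workaround.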
Our current CI (and release process) tests each Boost library using a
specific (minimum, in case of CI) version of the other Boost libraries
it depends on.
>> I think this is overlooking the fact that the Boost release process
>> *works well* right now. Three releases every year like clockwork and
>> they are
>> pretty high quality in terms of having minimal inter-library defects.
>
> I don't dispute this. But - it doesn't scale and can never scale.
> That's what started this discussion in the first place.
What exactly doesn't scale?
The goal of the "modularization" should be to be able to consume a Boost
release piecewise, and it looks like this works quite well. Checking
the package manager in Ubuntu, I see libboost-regex1.74.0,
libboost-thread1.74.0, libboost-filesystem1.74.0, etc.
I.e. individual libraries of a single Boost release.
If that is what you wanted with
> But if we finished the "boost modularization", the "global" release
> would be nothing more than the union of the individual ones and
> guaranteed to be correct.
then I totally agree. But we already have that, don't we?
And if someone doesn't want to download the whole "global release"
tarball, they can download the individual libraries from the repos on
GitHub using the same tag, as only those of the same tag are
"guaranteed to be correct" (as far as possible).
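For example (assuming the boostorg/regex repository and the boost-1.84.0
release tag), something like

    git clone --branch boost-1.84.0 https://github.com/boostorg/regex.git

fetches just that one library in the state of the 1.84.0 release, and
repeating it with the same tag for its dependencies keeps the set
consistent.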
The only issue I see left is that all Boost headers need to be in the
same include folder. The CMake build already has that solved, and AFAIK
B2 will soon follow, if it hasn't already.
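(As far as I understand the CMake side: the superproject lets a consumer
add_subdirectory() the Boost checkout and link against targets like
Boost::regex, and the per-library include directories are propagated
through those targets, so no merged include folder is needed.)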
Alex