Subject: Streamlining benchmarking process
From: Olzhas Zhumabek (anonymous.from.applecity_at_[hidden])
Date: 2019-05-09 09:09:46
I have been thinking about introducing a benchmarks folder and some build
scripts to streamline the benchmarking process.
First, let me list the problems that this will hopefully solve, in
decreasing order of importance:
1. Simplify performance issue submission
2. Make it easy to get a rough approximation of the reporter's original
environment (at least the parts that can be reproduced in code)
3. Quickly accept or reject an issue (e.g. determine whether it is caused by
GIL itself or by some environment problem)
4. Check if performance degraded significantly
Now let me list the cons of the idea that I have thought about:
1. There is not much to benchmark yet
2. Performance-related issues arrive very infrequently
I've looked around the Boost libraries, and it seems that uBLAS has a
benchmarks folder but uses a homegrown benchmarking facility, which might
slightly complicate reproduction.
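For context, a reproduction written against google-benchmark is short and standardized. A minimal sketch might look like the following; the fill_pixels workload is purely illustrative, not an actual reported issue:

```cpp
#include <benchmark/benchmark.h>
#include <boost/gil.hpp>

// Hypothetical workload: fill a grayscale image, the kind of
// operation a performance issue report might flag as slow.
static void BM_FillPixels(benchmark::State& state)
{
    boost::gil::gray8_image_t img(state.range(0), state.range(0));
    for (auto _ : state)
    {
        boost::gil::fill_pixels(boost::gil::view(img),
                                boost::gil::gray8_pixel_t{0});
        // Prevent the compiler from optimizing the work away.
        benchmark::DoNotOptimize(img);
    }
}
// Run the benchmark over a range of image sizes.
BENCHMARK(BM_FillPixels)->Range(64, 4096);

BENCHMARK_MAIN();
```

Building this requires linking against google-benchmark, but the source itself is small enough to paste directly into an issue.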
I propose the following changes:
1. Create a benchmarks folder in the root of GIL.
2. (Optional) Write a simple benchmark to check that google-benchmark works as expected
3. Write build scripts (Jamfile, CMake+Conan) that provide an option to build
the benchmarks and optionally install google-benchmark via Conan
4. Import all existing performance issues into that folder
5. Mention in contributing.md that performance issues should preferably be
reproduced in that folder as google-benchmark benchmarks, with the results
embedded into the issue report
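On the build-script side of step 3, the CMake piece could be as small as an option plus a find_package. A rough sketch, where the option name GIL_BUILD_BENCHMARKS, the target name, and the source file are all placeholders rather than an actual proposal:

```cmake
# Hypothetical benchmarks/CMakeLists.txt sketch
option(GIL_BUILD_BENCHMARKS "Build GIL benchmarks" OFF)

if(GIL_BUILD_BENCHMARKS)
  # Satisfied by 'conan install' when google-benchmark comes from Conan,
  # or by a system-wide installation of the library.
  find_package(benchmark REQUIRED)

  add_executable(gil_benchmarks fill_pixels.cpp)
  target_link_libraries(gil_benchmarks
    PRIVATE Boost::gil benchmark::benchmark)
endif()
```

The benchmark::benchmark imported target is what google-benchmark's own CMake package exports; how Boost.GIL's headers are pulled in (a Boost::gil target here, assumed for illustration) would depend on how the rest of the build is set up.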
What do you think?
Is this idea even worth it?
Or should it be put a bit further down the to-do list?
If it is worth doing, what changes exactly should I introduce?
Boost list run by Boost-Gil-Owners