
From: Mateusz Loskot (mateusz_at_[hidden])
Date: 2020-07-26 13:48:14


My apologies for the very late response to this.

On Fri, 5 Jun 2020 at 16:05, Olzhas Zhumabek
<anonymous.from.applecity_at_[hidden]> wrote:
> Hi,
> Since having precomputed images is a bit of a problem, both delivery-wise
> and license-wise, I've got a few ideas on how to test it.
> The algorithm has the following properties:
> 1. It preserves the heat inside the system (e.g. after many iterations,
> all pixels approach the mean of the image)
> 2. If all pixels are at the mean value, nothing changes (apart from
> numerical inaccuracies)


"this result would be the energy conservation law for the transport of heat,
or, in image processing, the conservation of the average
intensity of the given image"


> 3. Lower kappa values have more respect for edges than higher kappa values

In other words, for smaller kappa, smaller gradients suffice to block
conduction; kappa controls the sensitivity of the edge-stopping function.
As far as I understand, in practice a higher kappa can lead to the loss
of edges around small details.

> Based on those, we could implement the following tests:
> Sanity checks. One for all zeroes staying zeroes, and a second with a
> uniformly initialized image retaining its values after a cast to integral
> values, to discard the accumulated inaccuracy.

As far as I can see, this approach is already implemented in your PR 500, right?

> Mean value test. Initialize an image with random values, compute the mean
> pixel value, and run the algorithm for many iterations (10'000) with
> kappa ~ 30. A small image will suffice (32x32 should work).

An interesting test case to have.
Have you tried that?

> Higher kappa vs lower kappa test. This is a relative test, i.e. it will not
> check for exact values. Instead, it will run the algorithm twice with
> different kappas on a special image (a rectangle filled with values
> decreasing towards the center). The higher kappa value will smear the
> rectangle, but the lower one should preserve its contours.

I'm not sure about this approach.
If we wanted to actually test denoising quality,
we should look at an approach based on metrics like PSNR/MAE.
This is interesting, but perhaps for future work.

I think we are fine with what you've got already,
that is, the tests of individual components and partial operators,
combined with visual examination of the output.

Best regards,

Mateusz Loskot,

Boost list run by Boost-Gil-Owners