From: Nicholas Neumann (nick2002_at_[hidden])
Date: 2021-06-23 17:39:01
I've got two different rotating file logs that point to the same directory
(with different file name patterns). Is this in general just a bad idea?
They end up sharing the same file_collector, which seems wrong, so perhaps
that is a clue that I shouldn't have my logs set up like this.
In production I've got a service that compresses, archives, and manages the
size of the logs. But in dev, I don't, so the number of files in the
directory slowly grew. But the startup time for my program grew much
faster. On Windows, the scan_for_files function in the collector has a loop
that is O(mn), where m is the number of files in the directory and n is
the number that matched in previous calls to scan_for_files.
This means scan_for_files for the first rotating file log in the
directory has no issue (n is 0), but the second can be problematic: it
iterates over the files in the directory and, for each one, calls
filesystem::equivalent on all of the matches from previous
scan_for_files calls. On Windows, filesystem::equivalent is
particularly heavy, as it opens handles to both files.
Thoughts? Is the two file logs sharing the same collector the real issue?
Or is it my pointing two file logs at the same directory? I see some ways
to mitigate the slowdown in scan_for_files - e.g., filesystem::equivalent
could be called only after the match_pattern check passes - but the two
file logs sharing the same collector feels like the real issue.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk