Subject: Re: [boost] [log] Release candidate 1 usage question
From: Alex Perry (Alex.Perry_at_[hidden])
Date: 2013-03-11 11:00:33


Hi,

I've been following with interest the development of this Log v2 code, which looks very useful. However, having read through the documentation, I can't see any easy way of solving the following issue. I'm wondering if I'm just missing something obvious, so I thought I'd ask.

I was hoping to replace some existing logging in an application which has been built on top of log4cpp. As far as I can see, all the functionality of log4cpp we use is included (or has some equivalent behaviour) in Boost.Log v2, and in a much more "natural looking" C++ idiom. However, I can't see any easy way of solving the main problem we have with our current logging, which is the reason I'm looking to change it in the first place.

The application is delivered as a group of related processes rather than as a single individual process. The logging is currently configured for this group, so messages need to be formatted with a source pid as well as a source tid, then grouped together and output to the appropriately configured streams (be it a rolling file appender or a syslog appender, to use log4cpp names).
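
(For reference, this is the kind of per-record attribute setup I'd expect to need on the Boost.Log side - a rough sketch pieced together from the docs, untested:)

#include <boost/log/core.hpp>
#include <boost/log/attributes/current_process_id.hpp>
#include <boost/log/attributes/current_thread_id.hpp>

namespace logging = boost::log;
namespace attrs = boost::log::attributes;

// Register pid/tid attributes globally so every record carries them and
// formatters/filters can refer to "ProcessID" and "ThreadID".
void init_common_attributes()
{
    logging::core::get()->add_global_attribute(
        "ProcessID", attrs::current_process_id());
    logging::core::get()->add_global_attribute(
        "ThreadID", attrs::current_thread_id());
}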

Currently this is implemented by having one process take the role of logging server, which is responsible for reading and managing the log4cpp configuration, setting up the appropriate named loggers and appenders, and then reading output from a shared memory circular buffer. The other logging-client processes simply write their log output to this shared memory. Other than a bit of faffing around with timestamps to make sure that the timestamp recorded for a log message is the one from when the message was written to the shared memory, rather than from when the server passed it to the log4cpp layer, the server simply reads the shared memory and outputs the log messages according to the current log4cpp configuration.
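
To give a flavour, the client-side write path looks very roughly like this (heavily simplified - the real buffer handles overflow and readers, and shm_log_buffer is just an illustrative name; the struct is placed in a mapped shared-memory region):

#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <cstddef>
#include <string>

namespace ipc = boost::interprocess;

struct shm_log_buffer             // lives at the start of the shared segment
{
    ipc::interprocess_mutex mutex;
    std::size_t head;             // next write position
    char data[1 << 20];           // circular message storage
};

void client_write(shm_log_buffer& buf, const std::string& msg)
{
    // The timestamp is taken now, as the message enters shared memory,
    // not later when the server hands it to the logging library.
    std::string stamped = boost::posix_time::to_simple_string(
        boost::posix_time::microsec_clock::universal_time()) + " " + msg;

    ipc::scoped_lock<ipc::interprocess_mutex> lock(buf.mutex);
    for (std::size_t i = 0; i <= stamped.size(); ++i)  // copy incl. the NUL
    {
        buf.data[buf.head] = stamped.c_str()[i];
        buf.head = (buf.head + 1) % sizeof buf.data;
    }
}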

The biggest issue with this setup is supporting the filtering of log messages: obviously there is quite a large overhead in formatting a log message, locking the shared memory, and writing to it. To avoid this overhead, the server process writes the maximum severity of messages required for each "named log stream" into the shared memory; in the client processes this severity is checked before proceeding with the formatting of the log message. Unfortunately, there is currently only support for a fixed number of these named streams, which in practice means (though it wasn't the original design intention) that all logging output is written to one of two named log streams, "trace" and/or "audit". Whilst this has worked well enough up to now, enabling "trace" at DEBUG level can generate huge amounts of logging information, which can be very time-consuming to parse when diagnosing a problem.
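
The check itself is trivial - something like the following (severity values and stream indices are illustrative):

#include <atomic>

struct shm_control
{
    // Written by the logging server, read by every client before it
    // spends any time formatting or takes the shared-memory lock.
    std::atomic<int> max_severity[2];   // index 0 = "trace", 1 = "audit"
};

inline bool should_log(const shm_control& ctrl, int stream, int severity)
{
    return severity <= ctrl.max_severity[stream].load(std::memory_order_relaxed);
}

// Usage in a client:
//   if (should_log(ctrl, 0 /* trace */, severity))
//       format_and_write(...);   // only now pay the formatting cost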

I would like to move to a much more granular approach to the logging, so that each component or sub-process, particularly for the general-purpose "trace"/DEBUG log messages, has its own name and may be switched on independently - preferably dynamically rather than by being loaded from some static configuration file.
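
In Boost.Log terms I assume this maps onto channels - something like the following, where the filter can be replaced at run time (a sketch based on my reading of the docs; the channel name "network" is just an example):

#include <boost/log/core.hpp>
#include <boost/log/expressions.hpp>
#include <boost/log/sources/severity_channel_logger.hpp>
#include <boost/log/trivial.hpp>
#include <string>

namespace logging = boost::log;
namespace src = boost::log::sources;
namespace expr = boost::log::expressions;

// One logger per component, each tagged with its own channel name:
src::severity_channel_logger<logging::trivial::severity_level, std::string>
    net_log(logging::keywords::channel = "network");

// Enable "network" at debug while everything else stays at warning;
// calling this again with a different expression reconfigures on the fly.
void apply_filter()
{
    logging::core::get()->set_filter(
        (expr::attr<std::string>("Channel") == "network" &&
         logging::trivial::severity >= logging::trivial::debug) ||
        logging::trivial::severity >= logging::trivial::warning);
}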

Having looked through the Boost.Log documentation, I can't see anything that directly supports this type of multi-process logging. Whilst I'd be very happy to write a sink backend which writes to some shared memory in a similar manner to our current implementation, plus a custom log source to read from this and write in one "elevated" process, I'm not quite sure that this is the correct design. It appears to me that I really want to share the majority of the configuration of the logging core between the processes - i.e. the configuration of filtering, in particular, needs to be shared between the processes.
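
What I have in mind for the backend half is roughly this (write_to_shared_memory is our code, not anything in Boost.Log):

#include <boost/log/core.hpp>
#include <boost/log/sinks/basic_sink_backend.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/make_shared.hpp>
#include <string>

namespace logging = boost::log;
namespace sinks = boost::log::sinks;

void write_to_shared_memory(const std::string& msg); // our existing code

// Backend which pushes each formatted record into the shared-memory ring.
class shm_backend
    : public sinks::basic_formatted_sink_backend<char, sinks::synchronized_feeding>
{
public:
    void consume(logging::record_view const&, string_type const& formatted)
    {
        write_to_shared_memory(formatted);
    }
};

void init_shm_sink()
{
    typedef sinks::synchronous_sink<shm_backend> sink_t;
    logging::core::get()->add_sink(boost::make_shared<sink_t>());
}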

I see that support for loading the configuration of the core from a Boost.PropertyTree already exists. Would a good approach be to have the "server" process write the configuration into a ptree which is then serialised and passed to the other processes by some mechanism (plus some way of signalling that a change to the logging configuration has occurred)? Or is there some way that the logging core could be maintained in Boost.Interprocess containers or some such, avoiding complex message passing between the processes (actually probably a bad idea - how do you avoid lock contention in the filtering check)?
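
For example, using the textual settings format that init_from_stream understands (hand-waving the transport, and receive_logging_config is a made-up name):

#include <boost/log/utility/setup/from_stream.hpp>
#include <sstream>
#include <string>

namespace logging = boost::log;

// Hypothetical transport: a pipe, socket, or shared-memory blob filled
// in by the server whenever the logging configuration changes.
std::string receive_logging_config();

void reload_logging_config()
{
    // Re-applies filters and sinks in this process from the serialised
    // settings text produced by the server.
    std::istringstream strm(receive_logging_config());
    logging::init_from_stream(strm);
}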
 
I can't imagine that I'm the only one ever to want this sort of "logger server/daemon" behaviour, so what is other people's experience with this sort of logging? Is it just simpler to let each process write to its own separate log for debug tracing? Whilst this will make for more searching when diagnosing issues (which process actually fulfilled the particular request I'm interested in?), the simplicity will win out, provided that the more general "audit" messages are collated into a single output so it's possible to find which log to search.

Alex

