Boost Users:
Subject: Re: [Boost-users] Serialization cumulatively.
From: Tony Camuso (tcamuso_at_[hidden])
Date: 2015-03-29 20:44:33
Greetings, Robert.
Given the assistance you and the other boost cognoscenti
provided while I was developing my project, I feel that I
owe you an update.
What I decided to do in the end was to use a distributed
database model. The code generates a data file for each
preprocessed kernel source file. Rather than squashing
those together into one large database, I left them
distributed in their respective source directories.
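As a rough illustration of that distributed model (this is a hypothetical sketch, not kabiparser's actual code or file format; the `.kbdata` suffix, JSON records, and function names are all invented for the example), each source file gets a sidecar data file in its own directory, and the lookup utility walks the tree rather than opening one monolithic database:

```python
import json
import os

def write_record(source_path, record):
    """Store parsed data in a sidecar file next to its source file."""
    data_path = source_path + ".kbdata"
    with open(data_path, "w") as f:
        json.dump(record, f)
    return data_path

def lookup(root, symbol):
    """Scan every sidecar file under root for a matching symbol."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".kbdata"):
                continue
            with open(os.path.join(dirpath, name)) as f:
                record = json.load(f)
            if symbol in record.get("symbols", []):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    import tempfile
    # Fake a tiny source tree with one file and one record.
    root = tempfile.mkdtemp()
    src = os.path.join(root, "fs", "inode.c")
    os.makedirs(os.path.dirname(src))
    open(src, "w").close()
    write_record(src, {"symbols": ["iget_locked"]})
    print(lookup(root, "iget_locked"))
```

The trade-off is the one described above: many small files cost more disk space than one squashed database, but each one can be written and read independently, so both generation and lookup stay fast.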
Processing the whole kernel now takes only about 5
minutes on my desktop, and the lookup utility can find
anything in less than a minute. Performance is better
all around, though the collective size of the database
is about ten times larger than if I had squashed
everything into one file. Trading disk space for
performance was well worth it.
The project is at a decent knee-point, though there are a
few things I'm sure my fellow engineers will want to add
or change.
You can track the progress of the project at
https://github.com/camuso/kabiparser
Thanks and regards,
Tony Camuso
Red Hat Platform Kernel
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net