Boost Users :
From: Mark Van De Vyver (mvdv_at_[hidden])
Date: 2005-12-11 03:29:40
Hi,
I'd appreciate any advice on how to achieve the following using iostreams:
1) Read in part of a compressed file.
2) Decompress the chunk just read.
3) Pass this to boost::tokenizer (or boost::spirit?).
4) Parse the decompressed text, one line at a time.
5) Read in the next chunk of compressed data, and repeat.
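Here is a rough, untested sketch of what I have in mind for steps 1)-4), assuming a gzip file named data.gz (just a placeholder) and that a filtering stream is the right tool - please correct me if it isn't:

  #include <fstream>
  #include <iostream>
  #include <string>
  #include <boost/iostreams/filtering_stream.hpp>
  #include <boost/iostreams/filter/gzip.hpp>
  #include <boost/tokenizer.hpp>

  int main()
  {
      // Open the compressed file in binary mode.
      std::ifstream file("data.gz", std::ios_base::in | std::ios_base::binary);

      // The filtering stream pulls the compressed data in internal chunks
      // and decompresses on the fly, so the chunking is hidden from me.
      boost::iostreams::filtering_istream in;
      in.push(boost::iostreams::gzip_decompressor());
      in.push(file);

      std::string line;
      while (std::getline(in, line))
      {
          // Hand each decompressed line to boost::tokenizer.
          boost::tokenizer<> tok(line);
          for (boost::tokenizer<>::iterator it = tok.begin(); it != tok.end(); ++it)
              std::cout << *it << '\n';
      }
      return 0;
  }

If something like that works, I would never have to handle the compressed chunks explicitly at all.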
The docs don't seem to have this as an example, and I'm a novice at
programming, so I'm a little wary of trying something 'off-the-cuff'.
It seems step 4) could be a little sticky when the chunk of compressed
data does not end at an end-of-line in the decompressed data.
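In case I do end up handling chunks by hand, this is the kind of bookkeeping I imagine for carrying a partial line across chunk boundaries (untested; the hard-coded strings just stand in for successive blocks of decompressed text):

  #include <iostream>
  #include <string>
  #include <vector>

  int main()
  {
      std::vector<std::string> chunks;
      chunks.push_back("first line\nsecond li");   // chunk ends mid-line
      chunks.push_back("ne\nthird line\n");

      std::string leftover;   // partial line carried across chunk boundaries
      for (std::size_t i = 0; i < chunks.size(); ++i)
      {
          std::string text = leftover + chunks[i];
          std::string::size_type start = 0, pos;
          while ((pos = text.find('\n', start)) != std::string::npos)
          {
              std::cout << "line: " << text.substr(start, pos - start) << '\n';
              start = pos + 1;
          }
          leftover = text.substr(start);   // save the incomplete tail
      }
      if (!leftover.empty())
          std::cout << "line: " << leftover << '\n';   // last line without '\n'
      return 0;
  }

Is there a way to let the library do that bookkeeping for me?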
I'd rather spend some time becoming familiar with iostreams - it has more
'head-room'. Nonetheless, this may not be what iostreams was designed
for, so would it be more effective to use the zlib library directly, via
gzgets()?
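For comparison, the plain-zlib version I have in mind is something like this (again untested, with data.gz as a placeholder name):

  #include <cstdio>
  #include <zlib.h>

  int main()
  {
      gzFile f = gzopen("data.gz", "rb");
      if (!f)
          return 1;

      char buf[4096];
      // gzgets reads one decompressed line (up to the buffer size) per call,
      // so zlib handles both the chunking and the line splitting.
      while (gzgets(f, buf, sizeof buf) != NULL)
          std::fputs(buf, stdout);

      gzclose(f);
      return 0;
  }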
Finally, from some earlier posts it seems that when using boost::iostreams
the appropriate in-memory devices are array_source and array_sink - is that right?
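To make that part of the question concrete, this is how I picture reading lines back out of an in-memory buffer through array_source (untested; the hard-coded string stands in for one decompressed chunk):

  #include <iostream>
  #include <string>
  #include <boost/iostreams/device/array.hpp>
  #include <boost/iostreams/stream.hpp>

  int main()
  {
      const std::string decompressed = "alpha\nbeta\ngamma\n";

      // Wrap the buffer in an array_source and read it like any istream.
      boost::iostreams::array_source src(decompressed.data(), decompressed.size());
      boost::iostreams::stream<boost::iostreams::array_source> in(src);

      std::string line;
      while (std::getline(in, line))
          std::cout << line << '\n';
      return 0;
  }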
Would appreciate any suggestions.
Regards
Mark