
Subject: Re: [Boost-users] [serialization] performance issue when deserializing long string in a xml file
From: jean-charles.quillet_at_[hidden]
Date: 2012-12-12 04:48:31


> From: On behalf of Jeff Flinn
> Sent: Tuesday, December 11, 2012, 19:56
>
> You can avoid some of the above multiple allocations, copies and
> traversals of data by composing the appropriate combination of
> boost::iostream filters and sinks/sources.

I'm not sure what you mean here. I've got one file_source and one
multichar_input_filter to watch the progress. But the problem doesn't seem
to come from there: when I deactivate the filter, it takes just as long as
before.
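
For what it's worth, my setup looks roughly like this (simplified sketch;
the file name and how file_size and the progress report are obtained are
placeholders):

    #include <boost/archive/xml_iarchive.hpp>
    #include <boost/iostreams/concepts.hpp>         // multichar_input_filter
    #include <boost/iostreams/operations.hpp>       // boost::iostreams::read
    #include <boost/iostreams/device/file.hpp>      // file_source
    #include <boost/iostreams/filtering_stream.hpp> // filtering_istream

    // Counts the bytes flowing through, so progress can be reported.
    struct progress_filter : boost::iostreams::multichar_input_filter {
        explicit progress_filter(std::streamsize total)
            : total(total), count(0) {}

        template<typename Source>
        std::streamsize read(Source& src, char* s, std::streamsize n) {
            std::streamsize result = boost::iostreams::read(src, s, n);
            if (result > 0)
                count += result; // report count/total here
            return result;
        }

        std::streamsize total, count;
    };

    void load(std::streamsize file_size) {
        boost::iostreams::filtering_istream in;
        in.push(progress_filter(file_size));
        in.push(boost::iostreams::file_source("matrix.xml"));
        boost::archive::xml_iarchive ar(in); // deserialize from 'ar' as usual
    }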

> Using the text archive would certainly reduce the overall archive size,
> avoid the need for base64 conversion and simplify parsing during
> de-serialization.

I like the XML archive as it makes the file easy to edit if needed. This is
not true of the text archive. If I wanted something uneditable, I'd rather
use the binary archive, which serializes my matrix very fast.
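
For comparison, switching to the binary archive only means changing the
archive type; the serialization code itself stays the same. A sketch,
assuming matrix_str is a std::string and with a made-up file name:

    #include <boost/archive/binary_oarchive.hpp>
    #include <boost/serialization/nvp.hpp>
    #include <boost/serialization/string.hpp>
    #include <fstream>
    #include <string>

    void save_binary(const std::string& matrix_str) {
        std::ofstream ofs("matrix.bin", std::ios::binary);
        boost::archive::binary_oarchive oa(ofs);
        // The NVP name is simply ignored by the binary archive.
        oa << BOOST_SERIALIZATION_NVP(matrix_str);
    }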

> Profiling the operation should help otherwise.

I'm not sure how to achieve that. On Linux I use gprof, but unfortunately
I'm working on Windows. Is there any free profiling tool I can use on Windows?

Anyway, when I step through the serialization code, all the time is actually
spent in the serialization line:

        ar & BOOST_SERIALIZATION_NVP(matrix_str);

And my filter reports that almost all of the data has been read from the file.
If I break into the debugger, I end up in the middle of Boost Spirit functions.
It seems to me that it has something to do with the parsing...
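
In case anyone wants to reproduce it, something along these lines should show
the slowdown on the read side (the payload size is made up; the long string
stands in for my base64-encoded matrix):

    #include <boost/archive/xml_iarchive.hpp>
    #include <boost/archive/xml_oarchive.hpp>
    #include <boost/serialization/nvp.hpp>
    #include <boost/serialization/string.hpp>
    #include <fstream>
    #include <string>

    int main() {
        {   // write a long string, as a stand-in for the encoded matrix
            std::string matrix_str(20 * 1024 * 1024, 'A');
            std::ofstream ofs("matrix.xml");
            boost::archive::xml_oarchive oa(ofs);
            oa << BOOST_SERIALIZATION_NVP(matrix_str);
        }
        {   // reading it back is where all the time goes
            std::string matrix_str;
            std::ifstream ifs("matrix.xml");
            boost::archive::xml_iarchive ia(ifs);
            ia >> BOOST_SERIALIZATION_NVP(matrix_str);
        }
        return 0;
    }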

