Boost Users :
Subject: Re: [Boost-users] boost serialize - stack overflow for large problems
From: Robert Ramey (ramey_at_[hidden])
Date: 2010-09-28 13:02:16
"Jörg F. Unger" wrote:
> I have implemented the serialization routine for a rather complex
> structure with many nested classes. The serialization seems to work
> for small examples, however, for bigger data sets, I get a stack
> overflow error. If I increase the stack size from 8MB to 20MB, the
> examples seem to work. However, I'd like to optimize the code in
> such a way that I do not need to increase the stack size (especially
> since not all users can do that themselves).
Are you referring to compile time or runtime?
> So here are my questions:
> 1. Why is the required stack size different for different data sets?
> The structure of the data sets is identical, only the number of data
> sets differs - so why is the recursion depth different? The
> only thing I can imagine is that I have a class (container) with a
> vector of objects. Each object stores a pointer to the container
> class. Is it possible that once serialization in the container class
> starts, there is a recursive serialization pattern? This seems
> to be supported by the fact that the objects in the container class
> are not serialized in the same way as they are stored in the container.
The stack depth should be proportional to the depth of your
class data. It should be easy to check this. Set up your debugger
to trap on stack overflow. When it does, show a back trace. You
should be able to easily determine the source of the stack usage.
It could be some deeply recursive structure, a coding error, or
who knows what. Doing this analysis is much more time-efficient than
my trying to guess what the problem might be.
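For illustration, here is a minimal sketch (a hypothetical Node type, not the poster's actual classes) of the kind of structure where serialization depth grows with the amount of data: every element is reached through a pointer held by the previous one, so each element adds one nested serialize() call, and a long chain can exhaust the default stack.

    #include <fstream>
    #include <boost/archive/text_oarchive.hpp>

    // Hypothetical type, for illustration only.
    struct Node {
        double value = 0.0;
        Node* next = nullptr;   // each element reached through the previous one

        template<class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/) {
            ar & value;
            ar & next;           // recurses into the next element:
                                 // one stack frame per element in the chain
        }
    };

    int main() {
        // Build a long chain; serializing it nests serialize() once per
        // node, so a chain of 100000 nodes can exhaust a default 8 MB stack.
        Node* head = nullptr;
        for (int i = 0; i < 100000; ++i) {
            Node* n = new Node;
            n->value = i;
            n->next = head;
            head = n;
        }

        std::ofstream ofs("chain.txt");
        boost::archive::text_oarchive oa(ofs);
        oa << head;              // the deep recursion happens here
        return 0;
    }

A back trace taken at the overflow point would show the same serialize frame repeated once per element, which is exactly the signature to look for in the debugger.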
> 2. How can I try to decrease the required stack size for the
> serialization routines and is there a way to estimate the required
> stack size for a specific problem a priori, so that I can throw an
> exception with an error message manually instead of having a
> segmentation fault.
If the source is some sort of error, then there is no problem
once the error is fixed.
If the source is that there is VERY deep nesting of data structures,
you'll just have to refactor the data. This isn't a hardship since
you would likely have other issues generated by this besides
serialization.
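One way such a refactoring can look (a sketch, assuming a chain like the one above rather than the poster's actual classes) is to serialize the structure iteratively from an owning class using the library's save/load split, so the stack depth no longer depends on the number of elements:

    #include <cstddef>
    #include <boost/serialization/split_member.hpp>

    // Hypothetical types, for illustration only.
    struct Node {
        double value = 0.0;
        Node* next = nullptr;
        // Note: no serialize() member; the owning Chain walks the list itself.
    };

    struct Chain {
        Node* head = nullptr;

        template<class Archive>
        void save(Archive& ar, const unsigned int /*version*/) const {
            // Walk the chain with a loop instead of recursing through the
            // elements' next pointers, so the stack depth stays constant.
            std::size_t count = 0;
            for (const Node* n = head; n; n = n->next)
                ++count;
            ar & count;
            for (const Node* n = head; n; n = n->next)
                ar & n->value;
        }

        template<class Archive>
        void load(Archive& ar, const unsigned int /*version*/) {
            std::size_t count = 0;
            ar & count;
            Node** tail = &head;
            for (std::size_t i = 0; i < count; ++i) {
                *tail = new Node;
                ar & (*tail)->value;
                tail = &(*tail)->next;
            }
        }

        BOOST_SERIALIZATION_SPLIT_MEMBER()
    };

The trade-off is that the container, not the individual elements, owns the serialization logic, which is the kind of restructuring of the data meant above.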
Robert Ramey