I have an unusual use case for boost.serialization, and I was wondering if it would be possible to adapt it to my needs:

- I have a set of over 100 types; instances of them are generated asynchronously and serialized to a file in the order they arrive.
- The most interesting serialized data will be written just before the power is unexpectedly cut.
- When reading the serialized data back, I need to load and process as much of it as possible, ignoring any incomplete data at the end of the file (left by the power cut).
- The basic Boost serialization examples require you to know the type of the next piece of data before loading it. Since the types are generated asynchronously, the sequence of types is not known in advance.
- I need to write the data out immediately when it arrives because of the power issue. 
- Binary archive files will grow to around 150 GB, so the data cannot be marshaled in memory; each piece needs to be written immediately, even if that introduces redundancy.

Is there a way to read in that serialized file using the facilities provided in boost.serialization?

I could also serialize an index or custom headers indicating the next type to appear, but I would prefer to avoid doing so.

One way of achieving some of these goals is writing one piece at a time with a binary archive over an fstream, but I don't know which aspects of my requirements will prove to be a problem.

Thanks for your thoughts.

Cheers!
Andrew Hundt