Ruediger Berlich writes:

> Hi there,
>
> I am in the process of speeding up communication between a server and its
> clients. Communication involves serialized class data. Messages can be as
> large as 100 kilobytes.

> I have done some measurements which have shown that, in a cluster with
> Gigabit networking, most overhead of the parallelisation seems to come from
> the Broker infrastructure and the process of (de-)serialization. Network
> latency and/or bandwidth seems to play only a minor role in this
> environment.

> Hence, apart from optimizing my broker, I'm looking for ways to optimize the
> serialization process, as used in my application. As messages are discarded
> as soon as they reach the recipient, versions of serialized data do not play
> an important role.

I don't find any mention of "message" in the Boost 1.42
Serialization documentation.  Are you using MPI?

The obvious thing you've not mentioned is compression.  I use
bzip2 to compress and decompress data -

http://webEbenezer.net/misc/SendCompressedBuffer.hh
http://webEbenezer.net/misc/ReceiveCompressedBuffer.hh

Besides using bzip2, I use variable-length integers to encode
the size of the compressed data.  It works to use a fixed-size
integer as well, but frequently you can shave a couple of
bytes off the total by using a variable-length integer.


Brian Wood
http://webEbenezer.net
(651) 251-9384

"The kingdom of heaven is like a treasure hidden
in the field, which a man found and hid again;
and from joy over it he goes and sells all that
he has and buys that field."
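
P.S.  In case it helps to see the shape of the idea, below is a minimal
sketch of the same two techniques using the bzip2 filter from
Boost.Iostreams.  The names here (compressBuffer, encodeSize, etc.) are
made up for this post - this is not the code behind the two headers
above, just an outline of compressing a serialized buffer and prefixing
it with a variable-length size.

    // Illustration only; compile with something like:
    //   g++ sketch.cpp -lboost_iostreams -lbz2
    #include <boost/iostreams/copy.hpp>
    #include <boost/iostreams/filter/bzip2.hpp>
    #include <boost/iostreams/filtering_streambuf.hpp>

    #include <cstdint>
    #include <sstream>
    #include <string>
    #include <vector>

    // Run an already-serialized buffer through bzip2.
    std::string compressBuffer(std::string const& serialized)
    {
        std::istringstream src(serialized);
        std::ostringstream dst;
        boost::iostreams::filtering_streambuf<boost::iostreams::input> in;
        in.push(boost::iostreams::bzip2_compressor());
        in.push(src);
        boost::iostreams::copy(in, dst);
        return dst.str();
    }

    // Inverse of the above; the receiver runs this before deserializing.
    std::string decompressBuffer(std::string const& compressed)
    {
        std::istringstream src(compressed);
        std::ostringstream dst;
        boost::iostreams::filtering_streambuf<boost::iostreams::input> in;
        in.push(boost::iostreams::bzip2_decompressor());
        in.push(src);
        boost::iostreams::copy(in, dst);
        return dst.str();
    }

    // Variable-length encoding of the compressed size: 7 bits per byte,
    // high bit set on every byte except the last.  A ~100 KB message
    // needs only 3 bytes here instead of a fixed 4- or 8-byte field.
    std::vector<unsigned char> encodeSize(std::uint64_t n)
    {
        std::vector<unsigned char> out;
        while (n >= 0x80) {
            out.push_back(static_cast<unsigned char>(n & 0x7F) | 0x80);
            n >>= 7;
        }
        out.push_back(static_cast<unsigned char>(n));
        return out;
    }

On the wire the sender would write encodeSize(compressed.size())
followed by the compressed bytes; the receiver reads the size bytes
until one has its high bit clear, then reads that many bytes,
decompresses, and deserializes.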