Subject: Re: [boost] GIL io_new review
From: Phil Endecott (spam_from_boost_dev_at_[hidden])
Date: 2010-12-09 11:29:50
Domagoj Saric wrote:
> "Phil Endecott" <spam_from_boost_dev_at_[hidden]> wrote in message
> news:1291823931996_at_dmwebmail.dmwebmail.chezphil.org...
>>> Example code demonstrating a possible (skeleton) solution:
>>> http://codepad.org/WD7CpIJ8 ...
>>
>> I don't really follow what that code is doing,
>
> Hmm...if I understood your initial example/use case correctly you have a
> huge TIFF comprised of 5k-x-5k tiles that you need to load, edit and then
> rechop into 256x256 PNG files...
Not quite; I have a 1e12-pixel image which is supplied as a few
thousand 5000x5000 TIFF files. (Presumably the confusion is because
TIFFs can in theory be divided into tiles internally; this is not what
I'm referring to. I have an image which is supplied as tiles, each of
which is a separate file.)
>> and it's not obvious to me what its memory footprint will be.
>
> Well, with io2, it all depends on the backend...if the backend is 'smart
> enough' to read in only the required parts of an image (and WIC
> theoretically/as per documentation should be) the footprint should be
> obvious
Right, it's not obvious to me.
> This example seems different from the one you gave in the first post (or I
> misunderstood both)...Now you seem to have an image that is (1400*5000)
> pixels wide and 1300000 pixels tall and that is not actually a single file
> but the 5k-x-5k tiles preseparated into individual files
Right.
> ...and it misses the 'editing' logic
Right. That's largely unimportant for this discussion.
> and saving to 256x256 PNGs
Well, I omitted the TiledWriteImage implementation, but it would be
similar in organisation to the TiledReadImage. Here you are:
class TiledWriteImage {
    typedef shared_ptr<WritePng> writepng_ptr;
    writepng_ptr images[27345]; // !!!
    int rownum;
    void open_next_row() {
        for (int c=0; c<27345; ++c) {
            images[c].reset(new WritePng(output_tile_filename(c,rownum/256)));
        }
    }
public:
    TiledWriteImage(): rownum(0) {}
    void write_row(const pixel_t* data) {
        if (rownum%256==0) {
            open_next_row(); // releasing the previous row's writers closes them
        }
        for (int c=0; c<27345; ++c) {
            images[c]->write_row(data+256*c);
        }
        ++rownum;
    }
};
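The TiledReadImage (from the codepad link) would be organised the same way. Here is a compilable sketch of that organisation, with the tile count shrunk and the file-backed reader replaced by an in-memory stub; ReadTiff, pixel_t, tile_width and ncols are stand-ins of my own naming, not io2/io_new types:

```cpp
#include <cassert>
#include <memory>

typedef unsigned char pixel_t;   // stand-in for a real GIL pixel type
const int tile_width = 256;      // 5000 for the real input TIFF tiles
const int ncols = 4;             // 1400 in the real data set

// Stub standing in for a per-file TIFF reader; a real backend would
// open the tile's filename and decode rows from disk instead.
class ReadTiff {
    int row_;
public:
    ReadTiff() : row_(0) {}
    void read_row(pixel_t* dest) {            // fill one tile-wide row
        for (int i = 0; i < tile_width; ++i)
            dest[i] = pixel_t(row_ + i);
        ++row_;
    }
};

// Row-at-a-time reader across one row of tile files, mirroring the
// organisation of TiledWriteImage above.
class TiledReadImage {
    std::shared_ptr<ReadTiff> images[ncols];
    int rownum;
    void open_next_row() {
        for (int c = 0; c < ncols; ++c)
            images[c].reset(new ReadTiff()); // real code: pass a filename
    }
public:
    TiledReadImage() : rownum(0) {}
    void read_row(pixel_t* data) {
        if (rownum % tile_width == 0)        // next row of tile files
            open_next_row();
        for (int c = 0; c < ncols; ++c)
            images[c]->read_row(data + tile_width*c); // each tile fills its slice
        ++rownum;
    }
};
```

As with the writer, opening a new row of files releases the previous row's readers, so at most one row of files is open at a time.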
> ...As I don't see what it is
> actually trying to do with the input data I cannot know whether you actually
> need to load entire rows of tiles (the 1400 files) but doesn't such an
> approach defeat the purpose of tiles in the first place?
No. Hmm, I thought this code was fairly obvious, but maybe I'm making assumptions.
There is a balance between
- Buffer memory size
- Number of open files
- Code complexity
Options include:
1. Read the entire input, then write the entire output. This uses an
enormous amount of memory, but has only one file open at any time, and
is very simple.
2. Read and write one row at a time (as shown). This uses a very modest
amount of memory, but requires a very large number of files to be open
at the same time. It's still reasonably simple.
3. Read and write 256 rows at a time. This uses an acceptable amount
of memory (less than 1 GB), and requires 1400 input files to be open,
but only 1 output file. The complexity starts to increase in this case.
4. Read and write 5000 rows at a time. This requires a lot more RAM
(15 GB) but I can have only 1 file open at a time. This is getting
rather complex as there is some wrap-around to manage because 5000%256!=0.
5. Lots of schemes that involve closing, re-opening and seeking within
the images. These will all be unacceptably slow.
There is also the issue of concurrency to think about. The code that
I've posted can be made to work reasonably efficiently on a shared-memory
multiprocessor system by parallelising the for loops in the
read/write_row methods. It's also possible to split over multiple
machines where the tile boundaries align, which is every 160,000
pixels; that lets me divide the work over about 50 machines (I run this
on Amazon EC2).
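To sketch what parallelising the per-tile loop looks like (a sketch, not my production code: the per-tile encode is stubbed out, and the names are mine), each thread can take a strided partition of the columns, which is safe because tile c only touches images[c] and its own slice of the row buffer:

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// Stub standing in for images[c]->write_row(...); in the real code each
// iteration drives an independent PNG encoder, so iterations don't share state.
std::atomic<int> tiles_done(0);
void encode_tile_row(int /*c*/) { ++tiles_done; }

// Run the per-tile loop of write_row across nthreads worker threads,
// giving thread t the columns t, t+nthreads, t+2*nthreads, ...
void parallel_write_row(int ncols, int nthreads) {
    std::vector<std::thread> pool;
    for (int t = 0; t < nthreads; ++t)
        pool.push_back(std::thread([=] {
            for (int c = t; c < ncols; c += nthreads)  // strided partition
                encode_tile_row(c);
        }));
    for (std::thread& th : pool)
        th.join();   // one row is fully encoded once all workers finish
}
```

The same striding works for the read loop; the only synchronisation point is the join at the end of each row.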
The essential issue is that I believe it's unreasonable or impossible
to expect the library code to do any of this automatically - whatever
its documentation might say, I don't trust that e.g. "WIC" will do the
right thing. I want all this to be explicit in my code. So I just
want the library to provide the simple open/read-row/write-row/close
functionality for the image files, and to supply types that I can use
for pixel_t, etc.
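To make concrete what "simple open/read-row/write-row/close functionality" means to me, here is the shape of interface I have in mind (my names, not io_new's; the in-memory backend exists only to show a trivial implementation of it):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

typedef unsigned char pixel_t;   // stand-in for a real GIL pixel type

// Hypothetical minimal row-based writer: the library opens the file on
// construction, takes one call per row, and closes on destruction.
// Buffering, tiling and parallelism all stay in user code.
class RowImageWriter {
public:
    virtual ~RowImageWriter() {}                   // flush and close
    virtual void write_row(const pixel_t* src) = 0;
};

// Toy in-memory backend, just to show the shape of an implementation;
// a real one would wrap a PNG or TIFF encoder.
class MemoryWriter : public RowImageWriter {
public:
    std::size_t width;
    std::vector<pixel_t> pixels;
    explicit MemoryWriter(std::size_t w) : width(w) {}
    void write_row(const pixel_t* src) {
        pixels.insert(pixels.end(), src, src + width); // append one row
    }
};
```

A matching RowImageReader with width()/height()/read_row() would complete the pair; everything in TiledReadImage and TiledWriteImage above composes out of these two.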
> I can only, as a side note, say that the shared_ptrs are an overkill...in
> fact, if the number 1400 is really fixed/known at compile-time the
> individual ReadTiff heap allocations are unnecessary...
Please, think of this as pseudo-code if you like. Any shared pointer
overhead will be swamped by the time taken in the image encoding and decoding.
>> To get the ball moving on GIL extensions, and because this is better than
>> what GIL currently has. Past experience suggests that the existence of
>> this io extension will not prevent other similar and incompatible things
>> (i.e. yours) from being accepted in the future.
>
> Can I ask to which librar(y/ies) does your past experience refer to, as to
> me the practice seems quite the opposite (e.g. even after years of
> complaints and different proposals Boost.Function still has not changed, or
> the fact that we have two signals and two regex libraries)...
Well having two signals and two regex libraries is a perfect example of
a case where the existence of one solution has not prevented another
similar and incompatible solution from being accepted later.
Regards, Phil.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk