Subject: Re: [boost] [gil] New IO release
From: Mateusz Loskot (mateusz_at_[hidden])
Date: 2010-10-18 12:46:33
On 18/10/10 15:53, Christian Henning wrote:
>>> Why not use std::streams
>>
>> Streams are _evil_ ...
>> http://lists.boost.org/Archives/boost/2010/01/160911.php ...
>>
>>
>>> and what are the alternatives?
>>
>> In my book anything is an alternative to screams :D
>>
>> OTOH, only providing a streams based interface as an additional option is
>> usually just fine (as opposed to forcing a streams only based interface)
>>
>
> Don't wanna get into a "streaming" fight here. Point is noted. Did you
> know there are several fast stream implementations available on the
> net? They might change some arguments.
It's also not clear to me what's wrong with using streams in a case
like Boost.GIL, where there is no character formatting and no use of
manipulators, only read/write calls in binary mode.
>>>> Unlike io and io_new it uses objects that represent on-disk images
>>>> ("formatted images" in io2-speak). This has several advantages:
>>>
>>> Could you go into more details? What are "formatted images"?
>>
>> Well, images in different 'formats' like JPEG, PNG, GIF...obviously not a
>> very clever name that I just came up with to somehow distinguish them from
>> 'raw'/in-memory images (like gil::image<>)...you are more than welcome to
>> come up with a more intuitive name ;)
>
> How about in-memory vs. of-file? ;-)
A raw image is not necessarily an in-memory image.
To distinguish images by source location, I'd rather vote for 'in-memory'.
>>> I would love to know how you seek through a image using a 3rd party
>>> lib, for instance with libjpeg. Right now I'm just reading and
>>> discarding unwanted regions. Not the most ideal solution to say the
>>> least.
>>
>> With LibJPEG you cannot really/literally skip unwanted data but you can make
>> the library read the unwanted data using faster/less precise methods which
>> is what I currently do (and admittedly use a bit of LibJPEG's internal
>> implementation detail knowledge for that)...
>
> Cool, I would love to know more. Can you point where in your code you do that?
I may be wrong, but presumably it's done by (re)setting decompression
parameters, e.g. disabling colour dithering, so decompression runs faster.
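Something along these lines, perhaps (only my guess at the technique,
not necessarily what io2 actually does; the calls are stock LibJPEG):

    #include <cstdio>
    #include <vector>
    #include <jpeglib.h>

    // Decompress with cheaper settings and throw away the rows we do not
    // want: the precision trade-offs are chosen before
    // jpeg_start_decompress(), and unwanted scanlines are read into a
    // scratch buffer and discarded.
    void skip_to_row(std::FILE* file, unsigned first_wanted_row)
    {
        jpeg_decompress_struct cinfo;
        jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&cinfo);
        jpeg_stdio_src(&cinfo, file);
        jpeg_read_header(&cinfo, TRUE);

        cinfo.dct_method = JDCT_IFAST;     // faster, less precise IDCT
        cinfo.do_fancy_upsampling = FALSE; // cheaper chroma upsampling

        jpeg_start_decompress(&cinfo);

        std::vector<JSAMPLE> scratch(cinfo.output_width * cinfo.output_components);
        JSAMPROW row = scratch.data();
        while (cinfo.output_scanline < first_wanted_row)
            jpeg_read_scanlines(&cinfo, &row, 1); // decode and discard

        // ... read the wanted rows here, then finish or abort ...
        jpeg_abort_decompress(&cinfo);
        jpeg_destroy_decompress(&cinfo);
    }

Note the caveat: these settings apply to the whole decompression pass,
so changing precision only for the unwanted rows is where knowledge of
LibJPEG internals would come in.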
>>> That sounds great. I have to look at how you deal with certain
>>> functionalities, like reading ROI in a jpeg image.
>>
>> Actually, I forgot to mention, with the LibX (JPEG, PNG, TIFF) backends I do
>> not provide 'full'/2D/rectangular ROI capability but only 'vertical' ROIs
>> (i.e. you can specify from which to which row/scanline to read but whole
>> rows/scanlines are always read...) because the backends do not support
>> reading partial rows/scanlines so adding full/proper ROIs for those backends
>> would require emulation which in turn would cause code bloat/complication
>> and (possible) redundant data copying...For now it seems to me that the user
>> should handle 'full' ROI support in such cases (if required)...
>
> I try to allow all sizes of ROI. It can be anything from a single
> pixel to a scanline or a vertical line to a rectangle. This was a bit**
> to implement for tiff's tiled images. ;-)
IMO, that's the correct approach. If scanline-based reading is not
efficient, such an image is a candidate for tiled TIFF.
This would improve decompression as well, since tiles are compressed
independently.
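Roughly, a rectangular ROI maps onto tiles like this with plain libtiff
calls (just a sketch, single-plane image assumed, error handling omitted):

    #include <tiffio.h>

    // Read only the tiles that intersect a rectangular ROI; each
    // TIFFReadTile() call decompresses one tile independently.
    void read_roi_tiles(const char* path,
                        uint32 roi_x0, uint32 roi_y0,
                        uint32 roi_x1, uint32 roi_y1)
    {
        TIFF* tif = TIFFOpen(path, "r");
        uint32 tile_w = 0, tile_h = 0;
        TIFFGetField(tif, TIFFTAG_TILEWIDTH, &tile_w);
        TIFFGetField(tif, TIFFTAG_TILELENGTH, &tile_h);

        tdata_t buf = _TIFFmalloc(TIFFTileSize(tif));
        for (uint32 y = roi_y0 - roi_y0 % tile_h; y < roi_y1; y += tile_h)
            for (uint32 x = roi_x0 - roi_x0 % tile_w; x < roi_x1; x += tile_w)
            {
                TIFFReadTile(tif, buf, x, y, 0, 0); // decode one tile
                // ... copy the part of 'buf' that falls inside the ROI ...
            }
        _TIFFfree(buf);
        TIFFClose(tif);
    }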
Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net
Charter Member of OSGeo, http://osgeo.org
Member of ACCU, http://accu.org