
From: Felipe Magno de Almeida (felipe.m.almeida_at_[hidden])
Date: 2008-07-12 04:10:57


On Thu, Jul 10, 2008 at 5:34 PM, John Femiani <JOHN.FEMIANI_at_[hidden]> wrote:
>
>> Ok. But this transformation only occurs when a different set of
>> coordinates is being used, right? Why should the
>> transformation be part of the surface concept?
>
> Maybe you are right; it is just the way the existing APIs I have
> seen do it. I think that is because the transformation is done in
> hardware.

I don't want to prohibit transformations made by hardware.
But I also want a straightforward surface concept.
Do you think it is possible to create one that has both?
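
For example, something along these lines. This is only a rough sketch
to show what I mean; all the names (software_surface, rotation, etc.)
are hypothetical:

#include <cmath>

// The transform travels with the draw call instead of living in
// surface state. A hardware-backed model could still upload it to
// the card before drawing; this software model applies it itself.
struct point { double x, y; };

struct rotation
{
    explicit rotation(double radians)
        : c(std::cos(radians)), s(std::sin(radians)) {}
    point operator()(point p) const
    {
        point q = { c * p.x - s * p.y, s * p.x + c * p.y };
        return q;
    }
    double c, s;
};

struct software_surface
{
    template <class Transform>
    void draw_line(point a, point b, Transform const& t)
    {
        plot(t(a), t(b)); // transform applied per call, no stored state
    }
    void plot(point, point) {} // rasterization elided in this sketch
};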

>> > I am thinking:
>> >
>> > image_surface canvas(m_device, my_image);
>>
>> I don't get why we pass an image; the device should be enough.
>>
>> > canvas.push_transform(rotates(3));
>>
>> Shouldn't the rotation be done on the coordinates?
>> Isn't this overloading the surface abstraction?
>
> Not if the transform is done on the graphics card -- in that case the
> current transform is part of the card's state.
>
>> > ...
>> > ASSERT(!canvas.locked());
>> > image_surface::buffer_image buf(canvas, 0, 0, w, h);
>> > ASSERT(canvas.locked());
>>
>> Can't this just be another projection from one surface to another,
>> where the destination surface would be an image?
>> It would accomplish the same thing, wouldn't it?
>
> Well, I am imagining that you want to operate on the image in memory
> (using GIL), in order to do something to the pixels that you can't do
> through the surface.

Got it. I thought you just wanted to read it.

> I was trying to show how that might work. The
> 'LockBits'/'UnlockBits' approach comes from a long time ago when I
> played with CImage or CBitmap, I think (Microsoft GDI+ IIRC, maybe
> .NET). Anyhow, Java also has 'BufferedImage'. I think that the surface
> API should provide a way to do the same. It _might_ involve copying, or
> it might not.

Can't we just project it to a GIL surface, which also models the Image
View concept?
And if the surface you want to use gives access to its
pixels, then it can also model the Image View concept.
I don't think we should overload the concept with access
to optional features.
What do you think?
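
Something like this rough sketch, with hypothetical names
(raster_surface, view()); only surfaces that really own their pixels
would model Image View:

#include <boost/gil/gil_all.hpp>

// A software surface that owns its pixels simply exposes them as a
// GIL view as well; a pure GPU surface just wouldn't provide this.
class raster_surface
{
public:
    typedef boost::gil::rgb8_view_t view_t;

    explicit raster_surface(boost::gil::rgb8_image_t& img)
        : view_(boost::gil::view(img)) {}

    // ... the Surface concept's drawing operations ...

    // Image View access - no lock/unlock pair needed.
    view_t view() const { return view_; }

private:
    view_t view_;
};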

> For instance, if the surface just uses software (no GPU) to render to
> the image passed in the constructor, buffered_image might just be a
> proxy of the original image (or something).

I see. I don't have strong opinions, but that seems to make it more
difficult than necessary to implement the surface concept.
We could instead make the project operation very fast.
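
To illustrate, here is a sketch of the project operation I keep
mentioning, reusing the hypothetical raster_surface above; for a
surface that models Image View it is little more than a pixel copy:

#include <boost/gil/gil_all.hpp>

// Transfer the contents of a surface into a GIL view. The opposite
// direction (writing edited pixels back) would be a second overload.
template <class View>
void project(raster_surface const& src, View const& dst)
{
    boost::gil::copy_and_convert_pixels(src.view(), dst);
}

// usage: pull the surface into memory, then edit it with plain GIL
void edit_pixels(raster_surface const& canvas)
{
    boost::gil::rgb8_image_t img(canvas.view().dimensions());
    project(canvas, boost::gil::view(img));
    // ... any GIL algorithm can now run on view(img) ...
}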

>> > This way the canvas can provide drawing operations, and also a
>> > buffer_image type.
>>
>> How about my projection suggestion?
>
> I think that 'project' is different from what I am proposing - project
> transfers from one surface to another. The buffer is supposed to be
> something along the lines of LockBits/UnlockBits. I was hoping that an
> RAII approach through 'buffered_image' would make sure that whatever
> changes you made to the buffered image were copied back in.

It can be done simply by also modeling the Image View concept.
You would need to know the type before trying it, or use
SFINAE in client code to write truly generic code.
But it makes the surface concept much simpler and
more coherent, IMO.
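
The SFINAE part could look roughly like this (a C++03 sketch;
has_view and process are hypothetical names):

#include <boost/utility/enable_if.hpp>

// Detect at compile time whether a surface type exposes a GIL-style
// view_t typedef, i.e. whether it also models Image View.
template <class T>
struct has_view
{
    typedef char yes;
    typedef char (&no)[2];

    template <class U> static yes test(typename U::view_t*);
    template <class U> static no  test(...);

    static const bool value = sizeof(test<T>(0)) == sizeof(yes);
};

// Direct pixel access when the surface models Image View...
template <class Surface>
typename boost::enable_if_c<has_view<Surface>::value>::type
process(Surface& s)
{
    // ... work on s.view() in place ...
}

// ...otherwise fall back to projecting into a GIL image first.
template <class Surface>
typename boost::disable_if_c<has_view<Surface>::value>::type
process(Surface& s)
{
    // ... project(s, ...) and work on the copy ...
}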

[snip]

> If they aren't, then you don't need a buffered_image. Maybe that is the
> case you were thinking of for 'project'?

Yes.

>> > Here is another approach: You could just try a simple scene graph:
>> >
>> > scene_graph g;
>> > rot3 = g.push(rotate(3*degrees()));
>> > line = rot3.push(line(0, 0, 100, 100));
>> > box = rot3.push(rectangle(10, 10, 20, 10));

[snip]

> The second approach is to avoid using a 'surface' that keeps state about
> the transform etc., and instead explicitly store the transform, as well
> as colors etc., as part of the scene to be drawn. The idea is that you
> can provide a very simple scene graph
> (http://en.wikipedia.org/wiki/Scene_graph), which can then be 'rendered'
> or 'flattened' to either an rgb_view_t, or an OpenGL Rendering Context,
> or a CDC, or an SVG file.

Got it now. Can't it be done with the normal surface concept?
Instead of rendering directly, the surface would just stack
the operations and let something else render them.
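
Sketched below with hypothetical names (deferred_surface, with
draw_line standing in for whatever operations the Surface concept
ends up with): the surface records each operation as a closure and
replays it on a real back end later.

#include <cstddef>
#include <cstdio>
#include <vector>
#include <boost/bind.hpp>
#include <boost/function.hpp>

// Trivial back end, just so the sketch is complete.
struct print_surface
{
    void draw_line(int x0, int y0, int x1, int y1)
    {
        std::printf("line (%d,%d)-(%d,%d)\n", x0, y0, x1, y1);
    }
};

// Models the same Surface concept, but stacks the operations
// instead of executing them immediately.
template <class RealSurface>
class deferred_surface
{
public:
    void draw_line(int x0, int y0, int x1, int y1)
    {
        ops_.push_back(boost::bind(&RealSurface::draw_line,
                                   _1, x0, y0, x1, y1));
    }

    // "Flatten" the recorded scene onto a real surface.
    void render(RealSurface& s) const
    {
        for (std::size_t i = 0; i != ops_.size(); ++i)
            ops_[i](s);
    }

private:
    std::vector<boost::function<void(RealSurface&)> > ops_;
};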

> That approach is extremely flexible, and it does not require a 'surface'
> with 'state' to be a part of the public API. The state can be managed in
> a scene_graph_visitor that is responsible for the final rendering.

I believe that can be done with the surface concept I have in mind,
using the same syntax as for everything else.
Any ideas?
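
With the deferred_surface sketch above, usage would look like this
(again, hypothetical names):

int main()
{
    deferred_surface<print_surface> scene;
    scene.draw_line(0, 0, 100, 100); // same syntax as immediate drawing
    scene.draw_line(10, 10, 20, 10);

    print_surface target;
    scene.render(target);            // replay the recorded scene
}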

> --John

Regards,

-- 
Felipe Magno de Almeida
