From: Reece Dunn (msclrhd_at_[hidden])
Date: 2004-12-22 05:47:47


Hi All,

A GUI library can be thought of as having two distinct, but related,
units: components (GUI widgets) and graphical objects/operations.

A GUI component consists of data and events (e.g. a push button has a
title and an onpressed() event). The question is: how do we represent
the graphics unit?
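
To make this concrete, here is a minimal sketch of a component as data
plus events, with boost::function standing in for the event slot (the
names are illustrative, not a proposed interface):

    #include <boost/function.hpp>
    #include <iostream>
    #include <string>

    void acknowledge() { std::cout << "pressed\n"; }

    class push_button
    {
    public:
        std::string title;                      // data
        boost::function< void() > on_pressed;   // event

        void press()   // called by the framework on a user press
        {
            if( on_pressed ) on_pressed();
        }
    };

    int main()
    {
        push_button ok;
        ok.title      = "OK";
        ok.on_pressed = &acknowledge;
        ok.press();    // simulate a user press
    }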

There are clearly two views: high level and low level. You can represent
a graphics library as a series of objects (lines, circles, polygons,
images, text, etc.), each of which has a set of properties (colour,
background/fill colour, etc.). This is the high-level view, based on the
vector graphics model.
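
In code, the high-level view might look something like this (a sketch
only, with illustrative names):

    struct colour { unsigned char r, g, b; };

    struct line
    {
        double x1, y1, x2, y2;
        colour stroke;    // per-object drawing property
    };

    struct circle
    {
        double cx, cy, radius;
        colour stroke;
        colour fill;      // background/fill property
    };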

Suppose you are writing an application that draws a fractal such as a
Hilbert curve on the screen, and you decide to represent it as a list of
line objects. What happens when you redraw the curve? With a large
number of lines (easy to reach at a fractal depth of 5+), the redraw
becomes slow.

This is because the colour is re-selected on every line-drawing
operation, even though it never changes between lines. The cost becomes
even more apparent if you fill polygons with images and share the same
fill pattern across several objects.
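
A sketch makes the cost visible, reusing the colour and line structs
from the previous sketch with a hypothetical canvas interface. The
naive loop re-selects the pen for every line; the batched version
selects it once for the whole curve:

    #include <cstddef>
    #include <vector>

    // Hypothetical canvas interface, for illustration only.
    struct canvas
    {
        void set_pen( const colour& c );   // select stroke state
        void draw_line( const line& l );
    };

    // Naive: one state change per line, though the colour never varies.
    void redraw_naive( canvas& c, const std::vector< line >& curve )
    {
        for( std::size_t i = 0; i != curve.size(); ++i )
        {
            c.set_pen( curve[ i ].stroke );
            c.draw_line( curve[ i ] );
        }
    }

    // Batched: the whole curve shares one pen, selected once.
    void redraw_batched( canvas& c, const std::vector< line >& curve,
                         const colour& stroke )
    {
        c.set_pen( stroke );
        for( std::size_t i = 0; i != curve.size(); ++i )
            c.draw_line( curve[ i ] );
    }

A low-level interface that exposes pen selection separately from line
drawing makes the batched form possible; a model that only offers
self-describing line objects hides it.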

There are various graphics primitives (font, pen, fill (i.e. brush),
etc.). Each primitive operates on a canvas, and some can be used to
query information about a device (e.g. the area occupied by a text
string). In Windows a device and a canvas are represented by the same
handle (an HDC), but it makes sense to distinguish between them: you
draw to a canvas and query information about a device.
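
One way to express that split in code (a hypothetical interface,
trimmed to one operation per role):

    #include <string>

    struct extent { int width, height; };

    // Query role: ask the device about metrics.
    class device
    {
    public:
        virtual extent text_extent( const std::string& text ) const = 0;
        virtual ~device() {}
    };

    // Drawing role: render using the currently selected primitives.
    class canvas
    {
    public:
        virtual void draw_text( int x, int y, const std::string& text ) = 0;
        virtual ~canvas() {}
    };

    // A Windows implementation could wrap a single HDC and implement
    // both roles, keeping the distinction purely at interface level:
    //
    //     class gdi_surface : public device, public canvas { /*...*/ };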

These graphical primitives should form the core of the graphics unit,
with the high-level graphical objects being viewed as an extended unit.

Given the discussion on using CSS, and Andy Little's question of what
difference there really is between GUI components and graphical objects
when both have position information and are drawn onto a canvas, it
makes sense to allow the two types to share a common interface.
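
A sketch of what that common interface could look like (again, the
names are illustrative):

    class canvas;   // drawing surface, as sketched above

    struct rect { int x, y, width, height; };

    class drawable
    {
    public:
        virtual rect bounds() const = 0;          // position information
        virtual void draw( canvas& ) const = 0;   // render onto a canvas
        virtual ~drawable() {}
    };

A push button and a circle could then both derive from drawable and be
positioned, laid out and redrawn through the same interface.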

The main question is really: what happens during the redraw phase?
Ideally, components should not be drawn in the ondraw event, but rather
have their drawing driven by the operating system. For owner-drawn
components, do we handle the drawing in the ondraw event or at the
request of the OS? (I think the latter is best.)
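
That is, the framework would own the paint cycle and call into the
component only when the OS requests a repaint (WM_PAINT on Windows,
expose events on X11). A sketch, with hypothetical names:

    class canvas;

    class component
    {
    public:
        // Invoked by the framework in response to the OS paint request;
        // application code never calls this directly.
        virtual void on_draw( canvas& ) {}

        // Applications request a repaint; the OS decides when it occurs.
        void invalidate();

        virtual ~component() {}
    };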

What about lightweight components (i.e. components that do not consume
OS resources for their on-screen representation, but that do support an
event model)? Should we treat lightweight controls as graphical objects
with an event model? Do graphical objects support an event model? Should
graphical objects be treated as lightweight components?
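
One arrangement these questions suggest: a lightweight component is a
graphical object (drawable, from the sketch above) plus an event model,
with no OS-level handle behind it. A sketch, with hypothetical names:

    #include <boost/function.hpp>

    class lightweight_component : public drawable
    {
    public:
        boost::function< void( int, int ) > on_click;   // event model

        // The parent window hit-tests against bounds() and forwards the
        // event; no OS window handle is involved.
        void click( int x, int y )
        {
            if( on_click ) on_click( x, y );
        }
    };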

Regards,
Reece

