From: Eugene Lazutkin (eugene_at_[hidden])
Date: 2004-11-30 16:05:14
"Aaron W. LaFramboise" <aaronrabiddog51_at_[hidden]> wrote in message
> Aleksey Chernoraenko wrote:
> > IMO, the best GUI "framework" would be a *set* of separate, specific
> > libraries each of which
> > would provide good abstractions/algorithms in some specific gui-related
> > area:
> I think that any new GUI interfaces created that fail to make these
> items a priority in their design will share the fate of all traditional
> user-interface frameworks, being inappropriate for modern design, and
> doomed to obsoletion.
Just to revive the discussion about GUI, allow me to offer my 2c worth. (Sorry
for the big post.)
Let's split GUI into two parts: G (2D graphics) and UI (user interface). :-)
G can be used without UI: printing, generation of metafiles (e.g.,
Postscript, SVG) or pixmaps/bitmaps. UI can be used without G, if only stock
components are used: simple dialog boxes.
Graphics requires the following components to be implemented: canvas, graphics
primitives (shapes to describe geometric parameters, attributes to describe
visual parameters, clipping facilities), and 2D geometry.
User interface requires the following components to be implemented: events,
windowing, layout facilities, and widgets/controls. Layout requires 2D
geometry. Widgets may require all listed components.
Now we want to make all these components as light-weight as possible for
performance reasons. Ideally they should be mapped 1-to-1 to facilities of
underlying platform, while preserving platform independence. I don't think
it is productive to walk the original Java way and implement all widgets in
terms of graphics primitives, nor to implement graphics primitives in terms of
widgets.
What do we need from graphics? There are specific requirements imposed by
interactive applications:
1) Zooming. All users want to rescale a drawing, e.g. making it fit in a
window, to drill down to see some details, and so on. Mapping from absolute
units (e.g., inches) to device units (e.g., pixels) is a kind of zooming as
well.
2) Scrolling. It is useful for big drawings and usually goes hand in hand with
zooming. UI provides special controls for that: scroll bars. A variation on
scrolling is panning, where the user can drag the picture with the mouse.
3) Picture regeneration. In a world of overlapping windows we have to
support a way to regenerate only the damaged part of a window. Typically this
is done with clipping and the associated regions. Scrolling and panning are
another source of picture regeneration, when part of the picture is moved
somehow and the rest should be regenerated.
The first order of business is 2D geometry. The following is required:
1) Point. A simple 2D entity with X and Y components. In real life at least
two versions are required: integral (screen coordinates are integral) and
floating point (for precise calculations). Depending on the task, conversion
between them may require round, ceiling, or floor operations.
Additional consideration: it may be integrated with a more general geometric
toolkit, which may implement 2D and 3D points as well as N-dimensional points.
Additional consideration: a point should play well with native
platform-dependent point structures.
2) Rectangle. Again, two versions are required. A bunch of algorithms should
be implemented: intersection of two rectangles, bounding rectangle of two (or
more) rectangles and/or points, subtraction of two rectangles, a test whether
two rectangles intersect, a test whether a point is within a rectangle,
and so on. These algorithms are required to support interactivity (mouse)
and efficient screen regeneration, which is crucial for complex pictures.
Additional consideration: a rectangle should play well with the native
platform-dependent rectangle. There is a slight problem here: the two most
popular platforms implement rectangles differently. MS Windows keeps 2
points, while X-Window keeps a point and a size. In my experience the 2-point
representation is needed more frequently.
Additional consideration: usually an integral rectangle doesn't include its
right/bottom boundary. E.g., a rectangle from (1,1) to (3,3) covers 4 pixels
only, not 9. This is the convention used by MS Windows and X-Window.
3) Ordered collection of points (vector, list) to represent polylines and
polygons.
4) Region. Most typically a region is represented as an (unordered) collection
of rectangles. It is used mostly for picture regeneration and complex clipping.
For performance reasons it is better to have such a collection spatially
sorted.
Additional consideration: a region should play well with native
platform-dependent regions. Because a region is a relatively complex object,
which is used mostly in a specific context, it may make sense to implement it
as a wrapper around the native region. Facilities to extract/set clipping
from/to a canvas should be provided.
Additional consideration: some native regions are implemented as collections of
arbitrary geometric shapes, e.g. polygons. In my opinion this is rarely (if
ever) used. We should skip it.
5) Transformation matrix. In the 2D case it can be implemented as a 3x3 matrix
or (to conserve cycles and space) as a 2x2 matrix + offset vector (6 values
total). Usually it doesn't make any sense to use an integer matrix. Algorithms
to be implemented: addition, multiplication. It is beneficial to have
construction from an offset vector, a rotation angle, a mapping from rectangle
to rectangle (with and without preserving aspect ratio), and so on. More
complex algorithms, like "zoom around a given point", are practical as well.
A special implementation of the matrix should be provided to enforce
restrictions automatically. The most useful restriction is to ensure that any
transformation keeps the picture within scrolling boundaries. Another
potentially useful type of restriction is putting boundaries on zoom
factors.
Additional consideration: it may be integrated with more general geometric
toolkit, which may implement generic matrices.
6) Vector as some kind of directional entity? Usually I use a point for
that, for practical reasons. I don't think that there is a big practical
difference between a vector and a point in 2D space.
7) Size. See #6 above.
Graphics primitives are easy too. (I will use Windows terminology below).
1) Attributes: pen, brush. A pen is a line type, which allows specifying
color, thickness, and line pattern. A brush is a fill, which can be specified
as a solid color, some kind of pattern, or a bitmap/pixmap. Obviously a pen
makes sense for outlines and a brush makes sense for filled shapes.
2) Shapes: rectangle (efficiency!), polyline/polygon, Bezier curve, ellipse.
3) Text. Font plays the role of an attribute for text. A special string object
should carry the characters to be shown, the position of the text, alignment,
and formatting options. I'll skip all multi-line formatting issues for now.
4) Markers. These are symbols anchored to a 2D point, frequently used in
charts, plotting, maps, and so on. A symbol (raster or vector image) plays the
role of an attribute. Usually more than one marker is required in a picture.
An unordered collection of points is a shape of some kind; the polyline can be
reused to describe such geometry.
Why do we need to separate attributes and shapes? In most complex computer
drawings the number of different attributes is much smaller than the number of
different shapes (think charts, maps, and so on). It means they can be
cached to improve performance. In some cases it is even possible to sort
shapes by attribute to group objects sharing the same attribute together.
Shapes have another thing in common: bounding rectangle, which can be used
for spatial separation to improve speed of picture regeneration.
And finally the canvas. This is essentially a Windows DC, an X-Window GC, a
file stream (for metafiles like Postscript or SVG), and so on. Facilities to
create a canvas of the required type should be provided. Some portable way to
extract a canvas from a window should be provided as well.
Now about some implementation points.
1) Points and rectangles should be implemented platform-independently. It
allows achieving predictable performance results on all platforms.
Additionally it is easier to use binary input/output to transfer them
between platforms because layout is standardized.
E.g., once I implemented rectangles like this (pseudo code):
template<class T>
class Rect : public RectTrait<T>::layout {
    typedef typename RectTrait<T>::component_type component_type;
    // and so on
};
On Windows I used Rect<int> and Rect<double> for calculations and Rect<RECT>
for the platform-dependent interface. Rect<RECT> was based on RECT and was
binary compatible with all native functions. Anyway, this stuff was hidden
from client code.
2) There are two drawing modes: immediate and delayed. MS Windows and
X-Window emulate immediate mode. I prefer delayed mode. It makes it possible
to accumulate (cache) all graphics primitives in some kind of intermediate
facility (an in-memory spatial database of sorts), which can be used to
automate picture regeneration, zooming, and scrolling. This solution allows
for great simplification of the drawing part: just dump primitives in the
correct order and be done with it: no complex visibility checks in your code,
no output of invisible objects.
An even bigger benefit: it allows automating object selection. When the user
clicks the mouse (or selects a rectangle), we can set a neighborhood of the
point (or the selected rectangle) as a clipping area and "draw" the picture in
reverse stacking order. In this case we know which object was visible (on
top), or which objects are inside our selection.
In order to facilitate selection, objects may have handles. Absence of a
handle (or some predefined handle) means that the object is not selectable
(e.g., a background rectangle). Some objects may have the same handle, which
can be used to signify that all of them are parts of some higher-level object.
Some facilities to modify cached graphics objects should be provided. Such a
change would generate an update request for picture regeneration with the
proper bounding rectangle. For efficiency reasons, update requests may be
suppressed before massive updates. In this case, after the updates, one single
update request (e.g., redraw everything) may be generated.
In order to support extra big pictures we should define some protocol so the
intermediate database can request additional graphics data. One possible
implementation is to support the notion of pages, which are defined as
rectangular containers of a certain size.
3) A double buffer should be implemented for windows. It plays two roles: it
eliminates flicker, and it reduces CPU cycles spent on picture regeneration.
Basically, if a canvas has an associated buffer, all picture damage from
overlapping windows is fixed by copying from the buffer. Only scrolling,
zooming, and internal damage would require (partial) data regeneration from
the cache.
What do we need from user interface?
Graphics can be easily abstracted without a big loss of performance. Most
common graphics tasks can be automated and done once. The reason is quite
simple: the foundation is pretty much universal. Unfortunately this is not the
case with user interfaces. All OS vendors regard UI as a major differentiator
between platforms. Different look and feel is the minor part of it. The
standard set of controls/widgets differs across platforms. Different
conventions are used to implement similar things. E.g., hot keys are
different, menu layout is different, mouse buttons and click patterns can be
different, and so on. Localization rules complicate everything.
Introduction of "special look and feel" is not viable. Just remember
unsuccessful Java efforts. Users of respective platforms prefer applications
with native look and feel.
Given all that it looks like the practical way is to provide a declarative
way to describe user interface. (XUL, XAML?) We should be able to combine
components (widgets/controls) on 2D surface, define event handling, and
layout properties. Widgets/controls can be taken from predefined set of
library components, which are mapped to native controls, if possible, or can
be custom components.
This description of the UI can be internal (in the program) or external (e.g.,
in a file using some format). An external declaration can be replaced without
recompiling the program. It simplifies localization and minor (mostly visual)
tweaks. The downside is a possible mismatch with the program code.
Each UI object should carry a list of properties, which can be used by a
layout engine (a replaceable component itself) to modify the geometry of a
component depending on window size. This facility should be used for the
initial layout, to take care of the different DPI of the physical device, and
for potential resizing. Some required properties are obvious, like the size
and position of a component in some units in some reference coordinate system,
and so on. Some properties can be more elaborate, like glue in TeX
(elasticity, preferred size), which will govern the transformation and
position of a component at different window sizes.
In order to implement all this we need an engine, which will interpret the UI
description according to platform-specific rules and work as an intermediary
for event-processing code and, possibly, custom painting. Obviously this
engine should be customizable with replaceable components.
The sketch above is not sufficiently detailed. For example, it doesn't define
how to implement a custom component. But I think it gives a preview of what
can be
done with this approach: true multi-platform support, simplified creation of
UI (instantiate user interface from description), simplified painting (in
most cases just dump graphics primitives once), simplified selection (get
list of selected objects or top object), simplified UI refresh of visuals
(just modify description), virtually no-code zooming/panning, unified
printing (including metafiles and raster images), and so on.
Now if you look up you will see that this is a big, ambitious project. I am
not sure it should be done under the Boost umbrella. I am not sure we have the
available bandwidth to do it. If you think otherwise, please give me your
thoughts.
Of course, it can be downscaled. For example, 2D geometry is universally
useful. Even if you don't want to go multi-platform (the most popular choice
of developers) you can still find some use for it. Graphics-heavy applications
can use the G part, implementing the UI separately for the 1-2 platforms they
want to support. A simplified UI part can be used to generate simple dialog
boxes for different platforms. Different C++ UI bindings can be created for
different platforms. It is not a multi-platform way, but it would still be
better than the most popular "toolkits".
Well, what are your thoughts? Discussions of UI are a recurrent event in
Boost. I saw several proposals for 2D geometry on this mailing list. Authors
of several GUI toolkits are frequent visitors here. Are we ripe already?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk