From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2005-09-30 10:02:27
Given the recurring interest this topic incites, I think it
is important to start by clarifying what people really mean
by 'C++ GUI'.
For the more practically-minded, it is a means to portably
and easily write Modern C++ (TM) applications that use a
graphical interface, while for others it is an occasion to
Do It Right This Time (TM).
Can they both be satisfied with the same API? I believe
this should be possible by making educated use of abstraction.
I'm a little reminded of the discussion around a socket /
network / asyncio API, as I believe the situation is similar.
This is because the methodological challenge is the same: what
is needed is not a single monolithic API but in fact a set of
interdependent APIs, and it is hard to draw a clear line between them.
Therefore, I'd suggest we split 'GUI' into distinct modules
and try to model each individually, keeping in mind that
a) they all have to be able to collaborate and
b) each may be implemented either in terms of a 'native' backend
   or in an OS-independent way.
To get to the meat, here are the domains I think should be covered:
* Imaging Model
This module is concerned with low-level drawing primitives (2D
at first, but may be extended into 3D). The associated vocabulary
includes canvas, painter, graphic context ('GC'), path, pencil,
etc., etc., you get the idea.
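To make the vocabulary concrete, here is a minimal sketch of what such an imaging-model interface might look like. All names (Point, Path, Canvas, RecordingCanvas) are illustrative assumptions, not an established API; a real backend would wrap a native graphics context or a portable rasterizer.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: a point and a path built from drawing primitives.
struct Point { double x, y; };

struct Path {
    std::vector<Point> points;
    void move_to(double x, double y) { points.push_back({x, y}); }
    void line_to(double x, double y) { points.push_back({x, y}); }
};

// Abstract canvas: concrete subclasses would delegate to a 'native'
// graphic context or to an OS-independent rasterizer.
class Canvas {
public:
    virtual ~Canvas() = default;
    virtual void stroke(Path const& p) = 0;
};

// Trivial test backend that merely records how many segments it drew.
class RecordingCanvas : public Canvas {
public:
    std::size_t segments = 0;
    void stroke(Path const& p) override {
        if (p.points.size() > 1) segments += p.points.size() - 1;
    }
};
```

The point of the abstract Canvas is precisely the split described above: application code paints against the interface, while the choice of backend stays a deployment decision.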
* Event / Messaging Mechanism
This module provides whatever it takes for individual parts
of an application to exchange messages, both graphical and
other. Designing this module is particularly challenging
if this is to be generic as some backends are rather constraining
in terms of message types that can be sent / received.
This module should be able to provide a uniform view on both
input events ('button click', 'pointer move'), region management
('expose', 'resize'), as well as high-level widget-related events.
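One way to get such a uniform view is a single event type routed by kind through a small dispatcher. This is only a sketch under assumed names (Event, Dispatcher); a real design would use a variant payload rather than fixed fields, and would have to negotiate with whatever message types the backend can actually deliver.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Illustrative uniform event: the same type carries input events
// ("button_press"), region management ("expose"), and high-level
// widget notifications ("value_changed") alike.
struct Event {
    std::string kind;  // assumption: kind as a string, for brevity
    int x = 0, y = 0;  // payload; a real design would use a variant
};

// Minimal dispatcher: handlers register per event kind.
class Dispatcher {
    std::map<std::string, std::vector<std::function<void(Event const&)>>> handlers_;
public:
    void connect(std::string const& kind, std::function<void(Event const&)> h) {
        handlers_[kind].push_back(std::move(h));
    }
    void dispatch(Event const& e) {
        for (auto const& h : handlers_[e.kind]) h(e);
    }
};
```

The uniformity is the design choice here: because graphical and non-graphical messages flow through the same mechanism, application parts can collaborate without knowing which backend produced an event.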
* Region ('Window') Management
This module is concerned with providing users with regions
to draw to and receive events for, independently of what these regions
actually contain. The associated vocabulary includes region, map / unmap,
resize, stack, layout.
While people may be inclined to think of these regions more in terms
of windows, or even widgets, I believe there is good reason to keep
these two concepts separate, even though in most cases there may be
a one-to-one mapping between the two.
(Imagine a situation where windows (which typically are screen-aligned
rectangular regions) are not enough to represent regions, e.g. where
you want to track stacked *shaped* graphics.)
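The window/region distinction can be sketched with a region interface that knows only its extent, not its content. The class names below are hypothetical; the shaped CircleRegion is exactly the case a plain rectangular window cannot represent.

```cpp
// Sketch: a region abstraction decoupled from 'window'. It only
// answers hit-testing queries used for drawing and event routing.
class Region {
public:
    virtual ~Region() = default;
    virtual bool contains(double x, double y) const = 0;
};

// The common case: a screen-aligned rectangle, i.e. a 'window'.
class RectRegion : public Region {
    double x_, y_, w_, h_;
public:
    RectRegion(double x, double y, double w, double h)
        : x_(x), y_(y), w_(w), h_(h) {}
    bool contains(double x, double y) const override {
        return x >= x_ && x < x_ + w_ && y >= y_ && y < y_ + h_;
    }
};

// A *shaped* region, beyond what a rectangular window expresses.
class CircleRegion : public Region {
    double cx_, cy_, r_;
public:
    CircleRegion(double cx, double cy, double r) : cx_(cx), cy_(cy), r_(r) {}
    bool contains(double x, double y) const override {
        double dx = x - cx_, dy = y - cy_;
        return dx * dx + dy * dy <= r_ * r_;
    }
};
```

With such an interface the one-to-one mapping to native windows remains the default implementation, while shaped or stacked regions stay expressible.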
* Widgets
This is probably what most developers think of as 'GUI'. It's all
those building blocks that have a certain style, behavior, and associated
state, with more or less complex logic to keep 'view' (and possibly 'controller')
separate from the 'model'.
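The model/view separation mentioned above can be reduced to a very small sketch: the model owns the state and notifies observers, so any number of views (and a controller) can attach without the model knowing about them. The Model name and its int-valued state are assumptions for illustration.

```cpp
#include <functional>
#include <vector>

// Sketch of the 'model' half of model/view: state plus change
// notification. Views subscribe; controllers call set().
class Model {
    int value_ = 0;
    std::vector<std::function<void(int)>> observers_;
public:
    void observe(std::function<void(int)> f) {
        observers_.push_back(std::move(f));
    }
    void set(int v) {
        value_ = v;
        for (auto const& f : observers_) f(v);  // notify all views
    }
    int get() const { return value_; }
};
```

Keeping notification in the model rather than the widget is what lets 'view' and 'controller' vary independently of the state they present.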
Again, I suggest this separation in the hope that it will facilitate
and focus further discussion. I believe all existing GUIs support
the above concepts, even though they may not be as clearly separate
as I suggest here. In particular, depending on the architecture some
of these aspects might be hidden, for example if the work is split
into a display server and a client, where users only program the client.
As prior discussions suggest, there are widely differing views on who
should be in control of styling. Using different backends (or policies),
developers / users can have more or less fine-grained control over
these issues. A simple implementation of the high level APIs would be a
slim wrapper around existing libraries, while for fine-grained control
more work is needed.
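One way to express that styling choice in Modern C++ is a policy parameter: a 'native' policy defers to the platform, while a custom policy takes full control. This is a sketch under assumed names (NativeStyle, ShoutingStyle, Button), not a proposed interface.

```cpp
#include <cctype>
#include <string>

// Policy that defers to the platform default (here: pass-through).
struct NativeStyle {
    static std::string label_text(std::string const& s) { return s; }
};

// Policy taking fine-grained control of presentation.
struct ShoutingStyle {
    static std::string label_text(std::string const& s) {
        std::string r = s;
        for (char& c : r)
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        return r;
    }
};

// The widget delegates its look to the policy, so the same widget
// code works as a slim wrapper or as a fully styled control.
template <typename Style>
class Button {
    std::string label_;
public:
    explicit Button(std::string l) : label_(std::move(l)) {}
    std::string rendered_label() const { return Style::label_text(label_); }
};
```

The policy is resolved at compile time, so the slim-wrapper case pays nothing for the flexibility.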
It would thus be an interesting exercise to see whether existing GUIs
can be mapped onto the modules suggested above.