From: Reece Dunn (msclrhd_at_[hidden])
Date: 2004-12-26 05:46:24
Alan Gutierrez wrote:
> * Alan Gutierrez <alan-boost_at_[hidden]> [2004-12-25 07:50]:
>>* Reece Dunn <msclrhd_at_[hidden]> [2004-12-25 07:03]:
>>>Alan Gutierrez wrote:
>>>> The Revised Taxonomy
>>>> The Form, The Grid, The Document, & The Canvas.
>>>I think that trying to work out the taxonomy of user interfaces is the
>>>wrong way to go. In general, it is not clear how these break down.
>>>For example, an application will usually have a main interface frame,
>>>such as the web browser. In this case the taxonomy is the Document to
>>>use your terminology. The application may also provide a set of options
>>>that the user can configure, which is The Form.
>> Perfect breakdown.
Let me clarify.
I view the grid and table as specific data-bound components/widgets
that make requests for data, i.e. row/column information and cell
queries. This will fit in with other advanced data-bound UI components.
The way that I am approaching components is to define a specific
data+event relationship, for example:
* textfield = std::string text; event::ondatachanged();
* push-button = const std::string title; event::onpressed();
* button-group = event::onselectionchanged( long sel );
* calendar = boost::gregorian::date date; event::ondatachanged();
This allows you to bind specific actions to a data component and to
query its content.
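The data+event pairing above might be sketched roughly like this (the names here - textfield, set_text, ondatachanged as a std::function - are illustrative assumptions, not an actual Boost.GUI API):

```cpp
#include <functional>
#include <string>

// Sketch: a component pairs its natural data type with the events it
// can raise, as in "textfield = std::string text; event::ondatachanged()".
struct textfield
{
    std::string text;
    std::function<void()> ondatachanged; // fired when 'text' changes

    void set_text(const std::string& s)
    {
        if (s == text)
            return; // no change, no event
        text = s;
        if (ondatachanged)
            ondatachanged();
    }
};
```

This lets the application bind an action to the data change and query the component's content afterwards, which is the relationship described above.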
I view a form as a specific type of frame. A frame is a component that
provides a usable (client) area and can contain other components.
Examples include:
* main-frame - the top-level application frame with a close button;
* form - a frame that has a layout specified by an external resource
located by an ID, e.g. Dialogs in Windows and NIBs in Mac.
* popup - a popup frame that may not have border decoration. This is
used, for example, to host controls beneath a button when the button is
pressed (such as a calendar).
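As a rough illustration of that frame taxonomy (the component, frame, main_frame, form and popup names are hypothetical, not proposed API):

```cpp
#include <memory>
#include <utility>
#include <vector>

// Sketch: a frame is a component that owns child components;
// main-frame, form and popup are specializations of it.
struct component
{
    virtual ~component() = default;
};

struct frame : component
{
    std::vector<std::unique_ptr<component>> children;

    void add(std::unique_ptr<component> c)
    {
        children.push_back(std::move(c));
    }
};

struct main_frame : frame {};             // top-level, with border decoration
struct form : frame { int resource_id{}; }; // layout located by an external ID
struct popup : frame {};                  // possibly undecorated, e.g. a
                                          // drop-down calendar under a button
```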
The Document and Canvas are, to me, specific renderers that are used to
render content onto a frame's usable area via a graphics canvas. We
should not be restrictive about which renderers we allow, because you
will need a different document model if you are rendering HTML content
(using the W3C DOM), a text editor (using a custom text DOM), a PDF
document (using the PDF specification) or an SVG application (using a
custom representation, an array of high-level graphics objects or the
W3C SVG DOM). What I am trying to say is: provide the framework for
rendering content and the interaction between the frame and the
graphics canvas, then let the application writer construct the document
model that they need.
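That separation might look something like this minimal sketch, where the framework owns only the canvas/renderer contract and the application supplies whatever document model it needs behind it (all names here are illustrative assumptions):

```cpp
#include <string>
#include <vector>

// Sketch: the framework defines the canvas and the renderer interface.
struct canvas
{
    // Recorded drawing operations stand in for real platform drawing.
    std::vector<std::string> ops;
    void draw_text(const std::string& s) { ops.push_back("text:" + s); }
};

struct renderer
{
    virtual ~renderer() = default;
    // Called when the frame's usable area needs painting.
    virtual void render(canvas& c) = 0;
};

// The application supplies the document model - here a trivial text
// document; an HTML DOM, PDF or SVG model would plug in the same way.
struct text_renderer : renderer
{
    std::vector<std::string> lines;

    void render(canvas& c) override
    {
        for (const auto& line : lines)
            c.draw_text(line);
    }
};
```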
>> I'd say that the framing, tabs, splitters, and the like, are
>> part of content area management. I'm interested in providing
>> application developers with building blocks for the content
>> area, and more than wrappers around common controls and graphics
I am not suggesting a low-level wrapping around the various widgets.
What I am suggesting is that we focus on the type of *data* that a
widget supplies and support that in the widget implementation. This base
data (e.g. std::string in a textfield) can be converted to the required
type (e.g. a long) by the programmer when they write the application.
If you require data binding, it should be easy to write this on top of
the GUI framework, but data binding should be seen as an extension.
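For example, the conversion step could be as simple as a small checked helper written by the application (this helper is hypothetical; boost::lexical_cast would fill the same role in Boost code):

```cpp
#include <stdexcept>
#include <string>

// Sketch: the widget stores only its natural type (std::string for a
// textfield); the application converts it to the type it needs.
long text_to_long(const std::string& text)
{
    std::size_t pos = 0;
    long value = std::stol(text, &pos); // throws on non-numeric input
    if (pos != text.size())
        throw std::invalid_argument("not a number: " + text);
    return value;
}
```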
> Maybe Taxonomy was too portentous. I don't want to get bogged
> down defending a "Taxonomy", when I'm trying to learn bjam and
> Boost.Build in the other window.
> Content building blocks, I'd like to have 'em.
Sure, but isn't this what components/widgets are? I know that they don't
fit in with providing HTML or PDF content, but this is
application-specific data rendering. There are commonly supported
content renderers (simple text and rich text editors) that should be
provided by the GUI library, but then you get to the issues of whether
these are cross-platform and whether you also supply HTML+CSS content.
> I'd like to talk about them. In addition to the dialog boxes,
> the tabs and splitters, the graphics primitives, can a
> Boost.GUI provide some robust strategies for the client area of
> the application? This is really the discussion I'd like to have.
This is really about content rendering models, document/view
architecture, MVP architecture, etc.
> Widgets + layouts hit a ceiling pretty quick.
> I'm really impressed with Thunderbird and Firebird, where they've
> used their XML + CSS renderer, the content renderer, to do
> pretty much everything else: widgets, splitters, dialogs. I think
> that shows a lot of foresight.
There are a few issues with providing an XML+CSS layout renderer like this:
* The implementation would require an XML and CSS parser as well as the
ability to draw the content to the screen (to support the CSS side of
the proposal) - this would make the resulting code base exceptionally large;
* Thunderbird/Firefox use this approach because of the nature of the
application - they already have a very good XML+CSS parser and content
renderer as part of the web browser;
* Do we implement the renderer ourselves (expensive in terms of code
size, effort and tracking down bugs) or use an external renderer like
Gecko (Mozilla), which raises licensing issues?
* How do you provide the UI, i.e. the event binding and event flow?
* What about people who want/need/require native interoperability?
* How do you use native controls to get a native L&F?
* What about other issues inbuilt to an operating system?
* How do you interact with screen readers for blind users?
In my opinion, the Boost.GUI library should be as lean as possible, but
provide the framework that allows XML+CSS applications, skinned
interfaces and the like to be built on top of it.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk