
From: Maxim Shemanarev (mcseem_at_[hidden])
Date: 2002-05-07 22:05:57


> Am I correct in assuming that rendering is a parallelizable process?

Damn, yes!
I have concentrated mostly on the algorithms, the architecture of the
library, and the strategic issues, without getting into the design
details (which are also vitally important). I hadn't thought about
multithreaded rendering before, but even the current architecture
allows the process to be parallelized... except for the current design
of the pipelines. Congratulations, Douglas, and many thanks!
You have found a really strong argument, and your point is excellent.

The thing is, I intuitively designed a rendering technology that
can be parallelized. Let me explain.

Rendering with a single framebuffer requires 3 steps (sketched in
code below):

1. Decomposition of the source structures into polygons. This is
   the conversion pipeline itself.
2. Rasterization of the polygons.
3. "Sweeping" - generating a number of scanlines and superimposing
   them on the canvas (framebuffer).
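
In rough C++ terms the whole sequence looks something like the sketch
below. Every type and function in it is an illustrative placeholder,
not the library's actual interface:

    #include <cstddef>
    #include <vector>

    // Illustrative placeholders for the real data structures.
    struct point    { double x, y; };
    struct polygon  { std::vector<point> vertices; };
    struct scanline { int y; std::vector<unsigned char> coverage; };

    // Step 1: the conversion pipeline decomposes the source
    // structures into flat polygons (stub body).
    std::vector<polygon> convert_pipeline() { return {}; }

    // Step 2: rasterization of the polygons (stub body).
    std::vector<scanline> rasterize(const std::vector<polygon>&)
    { return {}; }

    // Step 3: sweeping - superimpose one scanline on the canvas.
    void sweep(std::vector<unsigned char>& fb, int width,
               const scanline& sl)
    {
        for (std::size_t x = 0; x < sl.coverage.size(); ++x)
            fb[sl.y * width + x] = sl.coverage[x]; // trivial "blend"
    }

    int main()
    {
        const int width = 640, height = 480;
        std::vector<unsigned char> fb(width * height, 0);

        std::vector<polygon>  polys = convert_pipeline(); // step 1
        std::vector<scanline> lines = rasterize(polys);   // step 2
        for (const scanline& sl : lines)                  // step 3
            sweep(fb, width, sl);
    }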

The most expensive operation here is the rasterization, and maybe
the pipeline (depending on its complexity).
In any case, the first two steps can easily be parallelized, with
the synchronization point at sweeping (see the sketch after this
list). The synchronization at step 3 is inevitable, because:
1. It reads and writes the same memory buffer.
2. The order of the rendered paths is important.
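
Here is a minimal sketch of that split, with placeholder types again
and modern std::thread (purely illustrative here) standing in for
whatever threading facility one would actually use:

    #include <cstddef>
    #include <thread>
    #include <vector>

    // Placeholders, as before - not the library's real types.
    struct path     { /* source structures for one path */ };
    struct scanline { int y; std::vector<unsigned char> coverage; };

    // Steps 1-2 for one path: pipeline + rasterization (stub body).
    std::vector<scanline> rasterize_path(const path&) { return {}; }

    // Step 3 for one scanline (stub body).
    void sweep(std::vector<unsigned char>&, int, const scanline&) {}

    void render_parallel(const std::vector<path>& paths,
                         std::vector<unsigned char>& fb, int width)
    {
        // One result slot per path keeps the output in path order.
        std::vector<std::vector<scanline>> results(paths.size());

        std::vector<std::thread> workers;
        for (std::size_t i = 0; i < paths.size(); ++i)
            workers.emplace_back([&results, &paths, i] {
                // Steps 1-2 are independent: no shared writes.
                results[i] = rasterize_path(paths[i]);
            });
        for (std::thread& t : workers)
            t.join();                  // the synchronization point

        // Step 3 stays serial: it reads and writes one shared
        // framebuffer, and the path order matters.
        for (const std::vector<scanline>& per_path : results)
            for (const scanline& sl : per_path)
                sweep(fb, width, sl);
    }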

Sweeping usually takes less time than the first two steps, provided
you do not use very complicated algorithms there, such as image
transformations with color gradients.

But even when sweeping is expensive, we could parallelize the
process by using several framebuffers (if we can afford to allocate
more memory - nowadays we can) and then superimposing the results.
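
A sketch of that variant follows; rgba_buffer and render_share are
hypothetical names, and the final blend is deliberately crude:

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <thread>
    #include <vector>

    // One private RGBA framebuffer per worker (hypothetical type).
    struct rgba_buffer
    {
        int w, h;
        std::vector<std::uint8_t> px;        // 4 bytes per pixel
        rgba_buffer(int w_, int h_)
            : w(w_), h(h_), px(w_ * h_ * 4, 0) {}
    };

    // Render one worker's share of the paths (stub body).
    void render_share(rgba_buffer&, std::size_t /*worker*/) {}

    void render_with_private_buffers(rgba_buffer& target,
                                     unsigned n_workers)
    {
        std::vector<rgba_buffer> locals(
            n_workers, rgba_buffer(target.w, target.h));

        std::vector<std::thread> ts;
        for (unsigned i = 0; i < n_workers; ++i)
            ts.emplace_back(render_share, std::ref(locals[i]), i);
        for (std::thread& t : ts)
            t.join();

        // Superimpose the private buffers in worker order, so the
        // result matches serial rendering in path order.
        for (const rgba_buffer& src : locals)
            for (std::size_t p = 0; p < src.px.size(); p += 4)
            {
                unsigned a = src.px[p + 3];  // source alpha, 0..255
                for (int c = 0; c < 3; ++c)  // crude "over" blend
                    target.px[p + c] = static_cast<std::uint8_t>(
                        (src.px[p + c] * a
                         + target.px[p + c] * (255 - a)) / 255);
                target.px[p + 3] = static_cast<std::uint8_t>(
                    a + target.px[p + 3] * (255 - a) / 255);
            }
    }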

> If it is, then the interface I gave is perhaps more suitable to
> parallelization, because if numPaths() returns N paths, N threads
> could be spawned and the i-th thread is given the vertex iterator
> range [begin(i), end(i)) to render.

Correct. The design of the pipelines definitely must be changed.
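
As a rough sketch of that scheme in use - only numPaths(), begin(i),
and end(i) come from the quoted suggestion; the driver itself and the
template parameters are hypothetical:

    #include <thread>
    #include <vector>

    template <class PathSource, class Renderer>
    void render_all_paths(const PathSource& src, Renderer& render)
    {
        std::vector<std::thread> threads;
        for (unsigned i = 0; i < src.numPaths(); ++i)
            // The i-th thread renders the vertex iterator range
            // [begin(i), end(i)).
            threads.emplace_back([&src, &render, i] {
                render(src.begin(i), src.end(i));
            });
        for (std::thread& t : threads)
            t.join();
    }
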
But in this situation, it's reasonable for me to release the current
version as it is, with a full set of docs, because I have already
taken on some obligations. I cannot work on the current version and
refactor it at the same time; I intend to work sequentially. Besides,
this is going to be a sort of test variant which will help me
understand the concepts better and invent new, more solid ones.

I think that as soon as I have written the docs and added some new
algorithms, I will undertake the third iteration of completely
refactoring the library. And I intend to design it to be sufficient
for BOOST. There's no great hurry, and I'll focus on quality rather
than on a fast result.

McSeem
http://www.antigrain.com

