From: Maxim Shemanarev (mcseem_at_[hidden])
Date: 2002-05-10 09:03:35
> So how are you coping with floating point coordinates, assuming AGG
I simply convert them to integers before the rasterization:
polyfill.move_to(int(x * 256.0), int(y * 256.0));
where x and y are floating-point pixel coordinates.
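As a sketch of that conversion (the helper name to_subpixel and the constant are mine for illustration, not actual AGG identifiers):

```cpp
// Illustrative only: 'to_subpixel' is a hypothetical helper. 256 steps
// per pixel give the 1/256-pixel resolution described above.
const double subpixel_scale = 256.0;

inline int to_subpixel(double v)
{
    return int(v * subpixel_scale);  // truncation, as in int(x * 256.0)
}

// Usage: polyfill.move_to(to_subpixel(x), to_subpixel(y));
```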
> Does this mean that resolution in the anti-aliassing is lost in the
Of course it's lost. It's rounded off as shown above. But this step
is performed only once, directly before the rasterization procedure.
The whole pipeline works in floating point, so the inaccuracy does not
accumulate. A resolution of 1/256 pixel is more than enough for
_visually_ precise rendering. In fact, I cannot see any difference
between 1/16 and 1/256 of a pixel accuracy when rendering polygons.
I took 256 just because it corresponds well with the possible 256
levels of anti-aliasing.
> After all, some of the possible 256 levels will be "corrected"
> away no matter what. If you have an internal representation of
> more than 256 levels, this wouldn't be problematic, though.
All these values can be adjusted, but in that case we would have to
make some corrections in the scanline container, because pixel
coverage values are currently stored as bytes.
> I'm not that well-versed on the details of line-drawing algorithms, mostly
> using Bresenham or my own algorithms but apparently, the fastest way to
> a line is by using a finite state machine. The Amiga (remember that one :)
> apparently uses this technique. Then again, in the world of floating
> a recursive interpolation would probably be fastest.
Well, there was a great flame war in comp.graphics.algorithms about
the World's Fastest Line Drawing algorithm. In AGG it actually is not
my major concern. Why? Because the calculation of the pixel coverage
values requires at least one integer division per line step. That is
inevitable and expensive, so the iteration itself is not a bottleneck.
I worked on this in the previous version of AGG:
There is a line drawing algorithm that can render anti-aliased lines
of different widths (although the maximum width is restricted). In
certain cases it gives better quality, but the great disadvantage of
this method is that it cannot calculate the value of a pixel that is
crossed several times by different edges, and as a result many defects
appear when rendering small images.
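One way the per-step integer division mentioned above shows up can be sketched with a minimal DDA-style edge stepper (all names here are mine for illustration, not AGG code):

```cpp
// Hypothetical illustration: advancing an edge one scanline at a time
// involves dividing an x delta by the y delta - an integer division on
// every step, which dominates the cost of the iteration itself.
struct edge_step
{
    int dx;  // total x delta in subpixel units
    int dy;  // total y delta in scanlines, dy > 0
};

// x position of the edge after 'step' scanlines.
inline int edge_x_at(const edge_step& e, int x0, int step)
{
    return x0 + e.dx * step / e.dy;  // the unavoidable integer division
}
```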
The idea of the new algorithm I took from glyph rendering, namely from
David Turner's library FreeType (www.freetype.org). There are no
such primitives as line segments, so the performance of the
interpolation part of the algorithm becomes even less important.
> > > 5. Is 16bit-per-channel supported or, any other non-8bpc mode.
> > Yes. Although it's not implemented now, I plan to do that. Actually,
> > any mode is considered as a colorspace and all you need is to implement
> > a very simple scanline renderer class.
> Considering the fact that you're using integers internally, what is the
> maximum colordepth supported?
Good question :-)
It depends on whether you can afford to use 64-bit arithmetic to
calculate mixed color values. If you can, it's up to 16 bits per
channel, and only because the color structure uses 16-bit values for
the color components. If you use only 32-bit integers, it's less, but
an internal representation of, say, 10-bit red, 12-bit green, 10-bit
blue can be implemented. It really is a good question, because:
1. The majority of applications does not require using more than
8 bits per channel.
2. There's a number of "Hi-End" applications where it's important
to use more than 8 bits. For example, when superimposing many layers
on one canvas with different alpha-mixing values, the canvas
accumulates errors, and they can be significant with an 8-bit
representation. But in this case we can sacrifice performance for
the sake of quality.
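For the 64-bit case, a minimal sketch of mixing one 16-bit channel (the function name and exact formula are my assumptions, not AGG code):

```cpp
#include <cstdint>

// Hypothetical sketch: blending two 16-bit channel values with a
// 16-bit alpha. The 64-bit intermediate keeps the products from
// overflowing a 32-bit integer.
inline std::uint16_t blend16(std::uint16_t dst, std::uint16_t src,
                             std::uint16_t alpha)
{
    std::int64_t d = dst, s = src, a = alpha;
    // dst + (src - dst) * alpha / 65535, computed in 64 bits
    return std::uint16_t(d + (s - d) * a / 65535);
}
```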
Another concern is that I use only 256 levels of anti-aliasing. I'm
not sure it's enough for all cases, and this is an issue for the next
version of AGG. Right now it's restricted by the representation of
the pixel coverage values in the scanline container. I will probably
use a template class for it.
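That template idea might look like this (a simplified hypothetical container, not AGG's real scanline class):

```cpp
#include <vector>
#include <cstddef>

// Hypothetical sketch: the cover type is a template parameter, so
// switching from 256 AA levels (unsigned char) to 65536 (unsigned
// short) needs no change in the container logic.
template<class CoverT>
class scanline
{
public:
    void add_cell(int x, CoverT cover)
    {
        xs_.push_back(x);
        covers_.push_back(cover);
    }
    CoverT cover(std::size_t i) const { return covers_[i]; }
private:
    std::vector<int>    xs_;
    std::vector<CoverT> covers_;
};
```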
> This really is only of importance when you're using floating point
> coordinates in a getpixel() or, indeed, in image transformation (i.e.
> rotation or scaling of entire bitmaps). There's a lot of different
> interpolation algorithms all with their respective benefits (quality) and
> penalties (performance). I'd love for AGG to allow some sort of callback
> let developers implement their own interpolation algorithm.
Good idea. I'll redesign some classes (they're not released yet) in
order to have this possibility. At the very least it'll be a
templatized callback at the low level (which doesn't exclude the use
of polymorphic classes).
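A possible shape for such a templatized callback (all names here are hypothetical; bilinear is just one filter a user might plug in):

```cpp
// Hypothetical filter policy: blends four neighboring samples given
// fractional offsets fx, fy in 1/256 units.
struct bilinear
{
    static int blend(int p00, int p10, int p01, int p11, int fx, int fy)
    {
        int top = p00 * (256 - fx) + p10 * fx;       // row 0 lerp, x256
        int bot = p01 * (256 - fx) + p11 * fx;       // row 1 lerp, x256
        return (top * (256 - fy) + bot * fy) >> 16;  // y lerp, rescale
    }
};

// The "callback" is a template parameter: no virtual dispatch, and
// the compiler can inline the user's filter completely.
template<class Filter>
int sample(int p00, int p10, int p01, int p11, int fx, int fy)
{
    return Filter::blend(p00, p10, p01, p11, fx, fy);
}
```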
> > Well, afaik, fixels and wu-pixels are completely diffedrent things. Am I
> > wrong?
> > Wu pixels are not supported directly, but the rendering algorithm uses a
> > very similar idea: http://www.antigrain.com/img/subpixel_accuracy1.gif
> They pretty much boil down to the same thing and are used interchangeably
> all over the internet.
Still, I'd prefer to use the term wu-pixels, because in my mind
fixels are associated with fixturing engineering tasks :-)
> Once again, these are only usefull when you've got a
> putpixel() with floating point coordinates.
That's also a good question. It's possible to render wu-pixels of any
shape using the common AGG rendering approach, but its performance is
relatively low. The only interface function that the rasterizer
requires is a sort of render_scanline(). I'm thinking about extending
this interface in order to eliminate the overhead in particular cases
such as putpixel().
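For instance, a direct anti-aliased putpixel could split the coverage among four pixels, wu-style (a sketch under my own assumptions, not an AGG interface):

```cpp
// Hypothetical sketch: x256, y256 are coordinates in 1/256 subpixel
// units; 'plot' stands in for whatever the renderer exposes. The four
// coverage values (0..256) sum to 256, up to integer rounding.
template<class Plot>
void put_pixel_aa(int x256, int y256, Plot plot)
{
    int ix = x256 >> 8,   iy = y256 >> 8;    // integer pixel
    int fx = x256 & 0xFF, fy = y256 & 0xFF;  // fractional parts
    plot(ix,     iy,     ((256 - fx) * (256 - fy)) >> 8);
    plot(ix + 1, iy,     (fx         * (256 - fy)) >> 8);
    plot(ix,     iy + 1, ((256 - fx) * fy        ) >> 8);
    plot(ix + 1, iy + 1, (fx         * fy        ) >> 8);
}
```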
> Perhaps you could implement a callback, simple one which passes a color
> and returns the corrected color, or for the gamut detection, a boolean
The general approach in AGG is that it's an open library. If I add
all the possibilities to one rendering class, it will seriously
affect its performance, so I preferred to start with simple and
obvious things. But that does not exclude the possibility of writing
more complicated templatized classes with all possible callbacks and
a lot of internal customization.
> This is a thing a lot of people get wrong. It's quite a mathematical
> and I only know the formula for a reasonable approximation but this is a
> >very< important issue.
Again, it's an issue of quality vs. performance. I don't claim my
approach to mixing is good, but I'd say it's _appropriate_ for many
applications. I agree that in cases of high quality image composing
it'll give bad results, but you can create your own rendering class.
All of it will be described in the documentation.
> > > 12. Can the channel mixer be customized to allow for additive,
> > > multiplicative and other mixing modes?
Actually it's the same issue - the issue of the scanline renderer class.
> I meant
> things like desktop publishing software and other applications whose sole
> purpose is to render those primitives.
IMO this kind of application is the most developed. And as I can see,
you're an expert in it. Is http://www.v-d-l.com yours?
My orientation is different - it's basically engineering tasks. The
amazing thing is that there's lots of research in publishing, in 3D
graphics, and so on, but somewhere in the middle there's absolute
emptiness. I mean, all the scientists and engineers use obsolete,
rough graphics.
All the questions you raised are great because they make me think
more about the design, the approaches, and the algorithms, and
finally to produce a high quality product.
> As for complaints about support for specific hardware architectures,
> you may
> have noted I did not mention these since I wholeheartedly agree with
> raw data only.
When high quality anti-aliased rendering is standardized in hardware,
it'll be possible to use the underlying APIs. But I don't believe
it'll happen soon. All I see now are gigantic efforts of such
companies as nVidia, ATI, and others, with quite miserable results.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk