From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-04-17 23:33:09
----- Original Message -----
From: "Andrei Alexandrescu" <andrewalex_at_[hidden]>
> I understand. My conjecture is that for this order of magnitude (and
> even for one order of magnitude up), a compiler should be fast enough.
> After all, my ML, Haskell and Scheme programs were uniformly blazingly
> fast when working with such input size :o).
> I wish a compiler implementer could chime in with a qualified opinion.
I've talked to some compiler implementors, and some people who've looked
at the template instantiation code of various popular compilers. So, I'm
not an implementor, but I play one on TV ;-).
Unfortunately, some of our most capable compilers (the EDG ones), are
incredibly slow at template instantiation for the craziest reasons. And
some vendors (Intel) have added bugs on top of that which make the
front-end even slower! So, while we all agree they /should/ be blazingly
fast like CodeWarrior, they are not always. I've reported these problems,
and newer versions are getting the bugs fixed; however, that's no help
for the vast number of existing installations. Even GCC, when working on
tougher problems, can be really slow.
Also, I don't think that simple traversals of 50-element lists really
begin to describe the potential complexity of metaprograms. When you
start designing new domain-specific languages, it's easy to create
scenarios where a lot of non-trivial processing happens.
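For readers unfamiliar with the kind of "list traversal" being discussed,
here is a minimal sketch of the classic cons-style typelist idiom in the
pre-standard-library style of the era; the names (`nil`, `cons`, `length`,
`list3`) are illustrative, not from any particular library. Each element
visited costs the compiler one template instantiation, which is why
instantiation speed matters:

```cpp
#include <cstddef>

// A hypothetical compile-time list built from nested "cons" cells,
// terminated by an empty "nil" marker type.
struct nil {};

template <class Head, class Tail>
struct cons {
    typedef Head head;
    typedef Tail tail;
};

// Recursive traversal: count the elements at compile time.
// Each recursion step forces one more template instantiation.
template <class List>
struct length;

template <>
struct length<nil> {
    static const std::size_t value = 0;
};

template <class Head, class Tail>
struct length< cons<Head, Tail> > {
    static const std::size_t value = 1 + length<Tail>::value;
};

// A three-element list of types; length<list3>::value is 3.
typedef cons<int, cons<char, cons<double, nil> > > list3;
```

A 50-element list of this shape costs 51 instantiations of `length`; a
metaprogram implementing a domain-specific language may instantiate
thousands of templates, many with far more expensive bodies than this.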
> By and large, my belief is that new, worthwhile programming idioms
> ought to influence compiler implementations. The opposite (new idioms
> that specifically address idiosyncrasies and inefficiencies in
> compiler implementations) is often the case, though.
Sadly, we need both in order to make progress. New idioms often can't
apply the pressure they should unless they've been ported enough places
to be popular.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk