From: David Abrahams (dave_at_[hidden])
Date: 2003-01-15 06:10:27
"Vesa Karvonen" <vesa_karvonen_at_[hidden]> writes:
> David Abrahams:
>>Vesa Karvonen:
>> > BTW, the same "acceleration" technique can be used with templates to
>> > overcome template recursion depth limitations. Probably most people
>> > know about it already, though I haven't explicitly checked. (Plain
>> > unrolling is not as effective: Theta(k*n) < Theta(pow(k,n)).)
>>
>>Well, if I understand your suggestion correctly, it's one thing I
>>thought of. Basically you're doing "tree unrolling" instead of
>>"linear unrolling", right? In the TMP case you don't have the
>>advantage of auto-detecting the depth limit, though ;-)
>
> You are right, top-down "tree unrolling" probably isn't
> practical. However, I think that it should be possible to do the
> "tree unrolling" in a bottom-up rather than a top-down
> fashion. In other words, the "iteration tree" would look something
> like this:
>
> 0 --> 1 ------> 4 --> ...
>      / \      /   \
>     2   3    5     8
>             / \   / \
>            6   7 9   10
>
> Above, each node represents a single iteration and the numbering
> represents the order in which the iterations would be done. (The
> numbering might also look different - take it as a hint rather than
> the only/correct/best choice.) The idea is that from one "top-level"
> iteration to another, the template would try iterating bigger and
> bigger [sub-]trees. I really seriously Do Not have the time to [try
> to] implement such a while_<pred,op,state> template, but out of
> interest I probably will, unless someone else demonstrates it
> earlier.
That's very interesting! It guarantees that you'd run out of
recursion depth no sooner than you would otherwise, and it exhibits
the same kind of exponential growth factor that's used to keep
std::vector<>::push_back amortized constant-time. It does mean that
we'd have to pass an additional recursion depth parameter around, but
it's unclear whether that would have an adverse effect. I think it
should be possible to avoid doing any extra template instantiations as
well.
>> > IMO, it is not a very good argument to say that more powerful tools
>> > are unnecessary
>>I am not making that argument.
>
> Sorry, I guess my interpretation of your argument was too quick.
I wasn't really making an argument at all. But I will repeat my
urging that you and Paul write that cover page before embarking on any
new features. Feel free to use my sketch as a template ;-)
>>Ever thought about writing Yacc with just the preprocessor (<wink>, I
>>think)?
>
> I have been thinking about making a lexical analyzer generator
> (using a syntax for regular expressions and using the preprocessor
> or run-time (initialization) code to generate the transition tables
> (once)), which is a lot easier to implement than a parser
> generator. I'd rather start with a simpler task, as getting
> something nearly as complicated done with the preprocessor is enough
> of a challenge.
It also might have bigger bang-for-the-buck: since lexing is typically
the most time-critical part of a parser, most real lexers are coded by
hand without any tables.
--
David Abrahams
dave_at_[hidden] * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk