Subject: Re: [boost] [spirit] semantic action for mismatches?
From: caustik (caustik_at_[hidden])
Date: 2011-01-10 11:42:32


On Mon, Jan 10, 2011 at 8:35 AM, Joel de Guzman <joel_at_[hidden]> wrote:

> On 1/11/2011 12:15 AM, caustik wrote:
>
>> On Mon, Jan 10, 2011 at 5:02 AM, Stewart, Robert <Robert.Stewart_at_[hidden]> wrote:
>>
>>> Joel de Guzman wrote:
>>>
>>>> On 1/10/2011 10:03 AM, caustik wrote:
>>>>
>>>>> I'm also curious what the different performance
>>>>> characteristics are (Spirit / Xpressive).
>>>>>
>>>>
>>>> I'm not quite keen on apples-oranges comparisons. Both xpressive and
>>>> Spirit have their place. That said, I am not aware of any
>>>> formal benchmarks,
>>>> but there's an informal one posted by Overmind on the Boost
>>>> users list:
>>>>
>>>> http://lists.boost.org/Archives/boost/2009/07/153899.php
>>>>
>>>> With that simple test, Spirit beats highly optimized xpressive
>>>> (1.5 secs vs. 9 secs). You might want to read the whole thread.
>>>>
>>>
>>> What's disappointing is that after I did, finally, post real code, that
>>> thread fizzled.
>>>
>>>
>> It would be a nice addition to the documentation to have some thorough
>> benchmark tests. Any particular ideas on what good tests would be? Maybe
>> even just a note in the FAQs suggesting how tests could be run, and asking
>> for users to contribute, would yield some help from others. I would imagine
>> that a few users have run their own internal tests before choosing a
>> solution; if they were made aware of the interest in gathering those
>> results, they might volunteer that data.
>>
>
> I don't agree.
>
> If it were Spirit vs. other parsers (yacc, ANTLR, etc.), that would be
> meaningful, yes. But if it is Spirit vs. xpressive, it will be an
> apples-to-oranges comparison. There are things best suited to xpressive
> that Spirit can't do, and vice versa.
> These are different tools with some overlap. Real world uses for both
> tools go beyond this common overlap. Benchmarks that test the common
> denominator are at best for entertainment only. Would you write a
> compiler with xpressive? I don't think so. Would you do search and
> replace with Spirit? Nah.
>
>
Right, I didn't intend to imply the performance measurements should
specifically target Spirit vs Xpressive.
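
Just to make the apples-and-oranges point concrete for anyone skimming the
archive later, here's a rough sketch, entirely my own illustration rather
than anything from the earlier thread, of each tool doing the job it is
most comfortable with:

    #include <boost/spirit/include/qi.hpp>
    #include <boost/xpressive/xpressive.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    int main()
    {
        // Search and replace: a one-liner in xpressive.
        namespace xp = boost::xpressive;
        std::string text = "error: 42, warning: 7";
        xp::sregex digits = +xp::_d;   // static regex: one or more digits
        std::cout << xp::regex_replace(text, digits, std::string("N"))
                  << '\n';             // prints: error: N, warning: N

        // Structured parsing into a C++ attribute: Spirit territory.
        namespace qi = boost::spirit::qi;
        std::string csv = "1, 2, 3, 4";
        std::vector<int> values;
        std::string::const_iterator first = csv.begin(), last = csv.end();
        bool ok = qi::phrase_parse(first, last, qi::int_ % ',', qi::space,
                                   values);
        std::cout << "parsed " << values.size() << " ints, ok = "
                  << std::boolalpha << ok << '\n';
    }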

It's valuable to know how the different "apples" perform relative to one
another, but it's also valuable to have some absolute measurements, just to
get a sense of how much time your code will spend executing the grammar. Of
course that's machine dependent, but any modern desktop platform will give
at least a rough order of magnitude. In my use case, for example, the
grammar will be executed across potentially dozens or more machines in a
map-reduce operation, and being able to plan ahead for where the performance
bottlenecks are going to be is really useful.
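
To be concrete about the kind of measurement I mean: nothing fancier than
timing a parse of representative input. The snippet below is only a sketch,
with a stand-in grammar (a comma-separated list of doubles) rather than my
actual one:

    #include <boost/spirit/include/qi.hpp>
    #include <boost/timer.hpp>   // simple timer: starts on construction
    #include <iostream>
    #include <string>
    #include <vector>

    namespace qi = boost::spirit::qi;

    int main()
    {
        // Synthesize a reasonably large input so the measurement isn't noise.
        std::string input;
        input.reserve(8 * 1000000);
        for (int i = 0; i < 1000000; ++i)
            input += "3.14159,";
        input += "2.71828";

        std::vector<double> values;
        values.reserve(1000001);

        boost::timer t;
        std::string::const_iterator first = input.begin(), last = input.end();
        bool ok = qi::phrase_parse(first, last,
                                   qi::double_ % ',',   // stand-in grammar
                                   qi::space, values);

        std::cout << "full parse: " << std::boolalpha << (ok && first == last)
                  << ", values: " << values.size()
                  << ", elapsed: " << t.elapsed() << " s\n";
    }

Run against representative input, even a crude number like that is enough
for the kind of capacity planning I described above.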

