Boost :
From: David Abrahams (dave_at_[hidden])
Date: 2003-12-01 09:57:02
misha_at_[hidden] writes:
> David Abrahams <dave_at_[hidden]> writes:
>
>> Misha Bergal <mbergal_at_[hidden]> writes:
>>
>>> David Abrahams <dave_at_[hidden]> writes:
>>>
>>>> How do I note that certain tests are expected to fail on a particular
>>>> compiler?
>>>>
>>>
>>> To annotate failures in the metacomm reports, edit the file
>>> status/explicit-failures-markup.xml. I think the format is
>>> self-explanatory.
>>
>> OK, thanks!
>
> Looking at the changes you've made to explicit-failures-markup.xml:
>
> 1. Cool!
> 2. You used toolset="*", probably because:
>
> * it was too painful to replicate the comment for different
> toolsets
>
> * you just didn't know which toolsets to specify (Beman and
> we use different toolsets for testing)
No, it's because failure of those tests is indicative of unimplemented
SFINAE. When they fail, the same comment really does apply in all
cases.
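For illustration, the kind of entry I'm talking about looks roughly like
this (the element and attribute names here are written from memory and
are only a sketch - the actual status/explicit-failures-markup.xml is the
authoritative reference):

  <!-- hypothetical sketch; element/attribute names are assumptions -->
  <library name="iterator">
    <mark-expected-failures>
      <test name="is_convertible_fail_test"/>
      <!-- toolset="*" attaches the note to every toolset -->
      <toolset name="*"/>
      <note author="D. Abrahams">
        These failures indicate unimplemented SFINAE in the compiler;
        only a small corner of the library is affected.
      </note>
    </mark-expected-failures>
  </library>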
> We are aware of these issues.
>
> Regarding the first one, it was a conscious decision to do it.
Sorry, to "do what"? Make it painful to replicate the comment?
> I didn't have any idea of how often the same comment would be
> applicable to different toolsets and didn't want to create
> something which is not going to be useful. I guess if there
> are more situations like yours, we will have to add support for
> a more flexible mark definition.
?? I thought toolset="*" was already supported. Last night's tables
seem to reflect that belief.
> Regarding the second one, I guess some work needs to be done in
> unifying toolsets (toolset names) used for regression testing
> (like http://tinyurl.com/x72s)
Probably a good idea.
>> I think it might be helpful for users if there were a color which
>> indicates degraded, but still-mostly-working, functionality. Sort of,
>> "this library basically works, but see the failure log for corner
>> cases", as opposed to "we expect this to fail and it's been reported
>> to the compiler vendor, but it mostly makes the library unusable".
>> I'm not sure if we have any of the latter type of failure, so if we
>> don't I guess there's not much value in adding a color. I also
>> realize it's a qualitative judgement which you could argue we should
>> let users make. It seems to me, however, that failures of things
>> like is_convertible_fail_test in the iterators library, which cause
>> only a tiny degradation in capability, ought to be distinguished from
>> more serious problems.
>
> So we need to show the user:
>
> * whether the library is completely unusable for a particular platform:
>
>   * maybe because the compiler is not capable
>   * maybe because the library has not been ported to the compiler
>   * maybe because the library has not been ported to the platform (Mac OS X)
>
> * how significant the particular test (or its failure) is from the
> library author's point of view.
Yeah. We have a number of less-significant tests for corner cases in
the type traits library, for example.
> The first needs to be shown on the summary report, the second on the
> detailed report. I guess we need to think about a good visual
> representation - I believe that the current visual representation is
> simple and powerful, and right now I would not trade its simplicity
> for more features.
Agreed.
> If we can come up with and agree on a good visual representation and
> unify the toolsets used for regressions, there are no technical
> problems implementing it.
>
> The summary report is easy: if the library is explicitly marked up as
> unusable for a platform/compiler, then its status is displayed in
> black.
>
> The detailed report might show the corner-case tests with a different
> background.
>
> I will modify the HTML with the current results to reflect the changes
> I propose and will post the link to it - it would be easier to discuss
> the issue if we have something to look at.
Great!
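For the "explicitly marked up as unusable" case, I would expect the
markup to be along these lines (again, a sketch with assumed element
names and a made-up library/toolset, not quoted from the file):

  <!-- hypothetical sketch; names below are placeholders -->
  <library name="some_library">
    <mark-unusable>
      <toolset name="some-toolset"/>
      <note>
        The library has not been ported to this compiler/platform.
      </note>
    </mark-unusable>
  </library>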
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com