From: Vladimir Prus (ghost_at_[hidden])
Date: 2003-11-19 11:27:51
Stefan Seefeld wrote:
>> You know, there's one problem. If you run 'bjam' once for each test, it
>> can be slow in itself. At least for me, running 'bjam' in status
>> directory takes about 40 seconds, no matter if I request specific target
>> -- with Boost.Build v1. V2 might behave better, but still... I'm not sure
>> if that won't be too slow.
> Hmm, I don't know bjam very well, so I may be missing something here.
> I'd guess the time is spent mostly for constructing the dependency
> graph. I didn't mean to suggest that bjam has to be invoked multiple
> times. I'm sure bjam has its own idea of a 'task' / 'rule' or however it
> is called. All that would be needed is a definition of a task that
> invokes qmtest for a particular test, which then is invoked whenever
> bjam decides that it's time to run it again.
Ah, in other words you propose the opposite of what I had thought: not using
QMTest to drive bjam invocations, but using bjam to drive QMTest.
> Anyways, my whole point is to show that even though qmtest itself
> doesn't manage dependencies, it's easy enough to hook it up with
> tools that are designed to do just that.
Hm.. this resembles an idea that I once had -- that the regression system
should have some 'execution monitor' which is told 'compile that file' or
'run that program'. It does that, recording interesting info like output,
exit status, elapsed time and memory used, or something else.
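The 'execution monitor' idea above can be sketched in a few lines of Python. This is purely illustrative -- the function name and result fields are my own invention, not part of QMTest or Boost.Build; a real monitor would also capture memory use (e.g. via `resource.getrusage` on Unix), which is omitted here for portability:

```python
# Illustrative sketch of an 'execution monitor': run a command and
# record the interesting facts about the run (output, exit status,
# elapsed time). Names here are hypothetical, not any real API.
import subprocess
import sys
import time

def run_monitored(argv):
    """Run `argv` as a child process and return a record of the run."""
    start = time.time()
    proc = subprocess.run(argv, capture_output=True, text=True)
    return {
        "command": argv,
        "exit_status": proc.returncode,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "elapsed_seconds": time.time() - start,
    }

# Example: monitor a trivial 'run that program' request.
result = run_monitored([sys.executable, "-c", "print('hello')"])
print(result["exit_status"], result["stdout"].strip())
```

A bjam rule could then shell out to such a monitor (or to QMTest itself) once per test, letting bjam's dependency graph decide *when* to run while the monitor decides *how* to record.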
Using QMTest as such an execution monitor might be a good idea. But of
course there are some problems. First, you need to teach it to use Jamfiles
as the test database. Second, it's not clear how to pass it the command
lines to execute. And third, it's not clear what to do if multiple toolsets
are to be tested. It might still be possible to reuse the test
running/result storing infrastructure, but IMO it's a non-trivial project.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk