Re: Stable release testing information tracking

Paul Albertella

Hi Shuah,

This kind of information is a good example of what I was talking about in my next-steps post: 'evidence' that might support a claim that existing Linux processes satisfy some 'criteria' from our Reference process, but which might need to be extended or enriched with additional 'metadata' to aid in gathering and correlating that evidence.

As Lukas notes, the presence of a 'Tested-by' signature is encouraging, but without some way to link it to more details of the verification actions that were performed, it is of limited value.

You wrote:

> For this information, you have to look at the stable release threads.

Forgive my ignorance, but what do you mean by 'stable release threads' in this context?

> Linux Kernel Functional Testing <lkft@...> they usually report
> extensive results. So does Guenter Roeck - he maintains a buildbot
> that build tests on 30+ architectures and 55+ configs
> <--- snip -->
> We have this information in the Testing document Elana and I put
> together. This document has information on all of the above bots.
Using these automated testing services to provide evidence of good verification practice would be great, but we'd need to:

a) Identify which tests / test suites are used by each service;
b) Correlate a 'Tested-by' signature for a patch with the corresponding service and results;
c) Identify the test results from that service which relate to the patch.

Starting with a), I find that LKFT has a summary of its tests here:

And the FAQ for the 0-day service pointed me at this list of tests:

But I haven't yet worked out how to identify the tests used by


