Re: "Pseudo-DIA" between the Linux Kernel Development Community and its Users - 3/5 Safety for Operating Systems involves functional and quality requirements

John MacGregor <john.macgregor@...>

Mit freundlichen Grüßen / Best regards

John MacGregor

Safety, Security and Privacy (CR/AEX4)
Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY |
Tel. +49 711 811-42995 | Mobil +49 151 543 09433 | John.MacGregor@...


-----Original Message-----
From: Lukas.Bulwahn@... <Lukas.Bulwahn@...>
Sent: Tuesday, 26 May 2020 14:16
To: MacGregor John (CR/ADT4) <John.MacGregor@...>
Cc: development-process@...
Subject: RE: "Pseudo-DIA" between the Linux Kernel Development Community and
its Users - 3/5 Safety for Operating Systems involves functional and quality
requirements

Hi John,

For this e-mail, I've enhanced the general V-Model with safety
activities / processes. This means that the diagram covers the
general development lifecycle as well as the safety lifecycle. The
standards are not so clear about that. Rather than having a separate,
parallel V for the safety lifecycle, I've inserted orange "safety"
boxes in the nodes representing each development phase.

In the case of ISO 26262, there is the famous illustration of the 3 Vs
superimposed over the overview of the standards series (Figure 1 in
the standard) and Figures 2 & 3 in Part 4, which replace the hardware and
software Vs with boxes. The enclosed illustration seems generally compatible.

It's not generally included in the V-Model, but in the context of
safety-critical systems, there should be backwards traceability
between the requirements and the work products that implement them.

Two points are immediately noticeable:
1) The standards' requirements mostly cover only the tip of the
iceberg of system development activities. (Well, I have to admit I
made the orange boxes small so that they wouldn't interfere with the
phase titles (-: ).
2) There is an overlap between safety functionality and operating
system functionality.
The picture does not tell me much and allows so many different
interpretations. Everyone would agree with the top-level picture, but it is
the details below that actually create the added knowledge and consensus.
Gotta start somewhere....

Those turquoise boxes represent the development process of all
application, middleware and, yes, operating system elements in the
safety-critical system.
The system itself is composed of (using a somewhat arbitrary mixture of
terminology):
1) newly-developed safety elements
2) newly-developed non-safety elements
3) pre-existing safety elements that have been used in a similar context
4) pre-existing safety elements that have been used in another
context (at least from the 26262 perspective), i.e. another instance
of the same product class
5) pre-existing non-safety elements
6) pre-existing safety components (hardware and software)
7) pre-existing non-safety components (hardware and software)
each of which may have a different certification or qualification route as
well as a different generic development process. The difference
between elements and components seems nebulous to me and I'd rather
call pre-existing things "off-the-shelf", whereby one might have to
differentiate whose shelf they come from.

From the previous e-mail (which admittedly considered only non-safety-
critical systems), a Linux that is currently being selected for use in
an embedded system would belong to category 7, and that is the focus
here. It may soon be the case that safety-critical applications will
use Linux, and there may come a time when safety functionality has been
brought upstream to the kernel, but neither is quite the case yet.

The safety-critical system development process starts by defining the
safety-critical system and the environment (context) within which it
operates. A hazard and risk analysis is then performed to develop the
safety requirements on the system and a corresponding functional
safety concept. A technical safety concept is developed in the system
architecture phase, which ultimately results in safety requirements on
the software architecture, and therefore on the operating system.

At this point the requirements on the operating system should be
functional requirements, for safety mechanisms or safety functions,
and / or requirements on the qualities of those functions (response
time, resource consumption, etc.). Safety functionality, or
mechanisms, include such things as monitoring, periodic testing,
diagnostic and logging functionalities, tolerance mechanisms for
residual design faults in the hardware, environmental stresses,
operator mistakes, residual software design faults, data communication
errors and overload situations; things that may already exist in the
operating system in some form. Refer to 61508-3 a) for a better list.

In other words, the safety-related requirements on the operating
system should already be functional or quality requirements that
should be comparable to other requirements on the operating system.
First, you introduce safety requirement, functional requirement and quality
requirement as terms, and I am not sure about your definition of a safety requirement.

I will try to give a definition of "two classes of requirements", requirement type A and
requirement type B.

Requirement type A:
A statement about an observable functional behavior of software, with evidence
supporting that this stated behavior holds under all circumstances.

Requirement type B:
A statement about the absence of an observable functional behavior of software
with an explanation that the existence of the functional behavior would lead to an
unintended system property.

Which of those two are safety requirements (or actually none of those two or both)?
How would you name and classify those?
I really don't know what you're driving at. You'll probably have to give an example.
Maybe I'm flying at too high a level. It doesn't help that 61508 speaks of safety functions
and 26262 speaks of safety mechanisms.

As such, neither Type A nor Type B are safety requirements. Functional behaviour is functional behaviour.
If the functional behaviour is related to the functional safety concept, it's safety functionality for me.
If the unintended system property violates the functional safety concept, it's safety-related. Otherwise
it's not. Requirements related to those behaviours or system properties could be safety requirements,
or maybe not.

In type B, it's also not clear to me whether you mean that the absence of the behaviour is observable
or the behaviour is observable and hasn't been observed.

Generally, I can agree with the concepts and terminology of 26262-4, Clause 6 (Technical
Safety Concept). Perhaps I've been a little loose with the terminology, especially by saying
safety requirement when I should have said technical safety requirement.

What I was driving at in this section is that I think there is a point in software development
where it leaves the realm of safety experts and enters the realm of safety laymen. It then
becomes general software development under quality control.
It's obvious that C is C and the code is just implemented in C, regardless of whether it's being
coded for safety-related functionality or not. The question is how much higher
that transition can be set.

I think that the interface could lie in the technical safety requirements stage (and by extension,
the plain old technical requirements stage of general software development).

I can live with the distinction you've made between types A and B requirements. I
just don't see how it's germane to defining the development interface.

I try to avoid the word safety requirement, because it often mixes requirement type
A and type B (in various flavors) and resolution always seems difficult... There are
too many interpretations of the word safety requirement and it is too often used
without a proper definition, or with mismatching definitions, in communication, especially
when requirements and elements are considered without any reference to a clearly-
defined system context.
For my purposes the responsibility for managing safety requirements lies with the
system developer, although some real DIAs probably delegate it to the supplier.

The point here is that Linux has always (up to now) been developed as
functionality. It may be possible to isolate the safety-related parts
of that functionality and, as part of the systems engineering part of
the development process, attach quality requirements to them and
validate that the requirements have been achieved. For me, this would
be the development interface for the DIA.
John, in your understanding:

Is the development process group working on building blocks/basis/methods/basic
information that can be used to attach and validate quality requirements to a subset
of functionality that was previously developed?

The question is just to ensure that I can map your explanation to other
ongoing activities, or to see whether you have something different in mind.
Another question that seems to come out of left field... but in the end caused me to connect a couple
of dots.

My understanding is that the development process group is working on defining a reference
development process (not a reference Linux development process) which is as effective as the development
processes underlying ISO 26262 and IEC 61508, and on demonstrating that the Linux development process is
equivalent to that reference process [1]. Currently the group is working on defining the current Linux development
practice in terms of the development process underlying the two abovementioned standards. This amalgamated
process would be the reference process.


My understanding is that the activities of the workgroup are limited to those listed above and that subsequent
activities will be defined on completion of these tasks.

So, no, my understanding is that they are not directly working on building blocks/basis/methods, etc. that can
be attached to anything. Their goal is to demonstrate equivalence. Anecdotal evidence seems to indicate
that the difference between previously developed and bespoke developed artefacts is too philosophical to be
considered. The same might hold for the difference between potentially safety-related and safety-unrelated
areas of the Kernel.

I partially addressed my lack of understanding of the group's activities in an e-mail on the list on the 24th of
February (Thread: Toolchain / build process), where, in particular, I addressed the role of system integrators
and suggested that the group sit down and define a couple of use cases to guide them. Nobody responded.
I guess I'm doing that now.

So, I guess the following questions emerge:
- Is the reference process QM? Or (A)SIL X?
- How are safety-related aspects going to be handled, if at all? That is, the (reference development process for) OS functionality
used by safety mechanisms and various encapsulation technologies (maybe more...)
- What measures and techniques are applicable in the current assessment of the various Linux process areas?
- Does the current Linux development process apply to all (relevant) code in the repo?
- Perhaps, does the development process play the same role in assuring the integrity of off-the-shelf
software that it does for bespoke software? (26262 seems to have that answered, but for other ...)

Aaah... after staring at your question for a while longer and looking at the text passage preceding it, I came up
with another interpretation. When I wrote that it might be possible to isolate the safety-related parts
of the functionality and attach quality requirements to them, I was thinking that the system integrator would
do that. Were you thinking that I was proposing that the workgroup do that?

Enjoy your vacation, I will be busy for the rest of this week, and I need to catch up
with writing down the previous insights of the thread...


