Re: Limitations of code quality to ensure safety of modern software


Thanks very much for your comments – agreed, the value of type-safe languages such as Rust is well known.  But that relates to only one facet of the issues at hand.

We need to deal with the current commonly accepted state of software development:

  1. An overwhelming amount of code written in C is being deployed in safety-critical systems.  No one is going to migrate all that code away from C.
  2. Embedded software developers continue to "think" C, even when writing C++ code. 
  3. There are common software practices (such as direct hardware access, incomplete fault handling, type coercion, etc.) which are deeply entrenched in C-minded software.  The mindset needs to change, not (only) the programming language.  But we will continue to live with legacy software, and legacy development methodologies, for a long time.
  4. The safety standards are focused on a waterfall-model, documentation-intensive development process which is not economical (as per your note 4 below), not scalable, and not aligned with complex software that is increasingly dependent on open source or even "SOUP" (Software Of Unknown Pedigree).
  5. Multicore, multithreaded software systems introduce additional levels of complexity, requiring more intense testing focused on dynamic analysis.
  6. Flexible software updates present a challenge for testing and deployment of the "whole" (vs the individual parts).


What can be done?  Some suggestions  (feel free to pitch in here with some more!!):

  1. Software developers – change in paradigm.  Support developers with safe libraries for development and testing.  For example, Python became the de facto programming language for data science because of the abundance of supportive libraries which have evolved in that domain.  Rust and other type-safe languages can also become standard, if there are sufficient supporting, safety-qualified libraries to make the evolution worthwhile.
  2. Safety standards – change in paradigm.   The safety standards were defined based on legacy development methodologies; they are not aligned with the fast pace and complexity of modern safety critical systems.  There is ongoing work to update the standards and help to provide the necessary framework for qualification of such safety critical software.
  3. In the security domain, it is becoming increasingly obvious that we can no longer rely exclusively on secure software development processes and test tools focusing on static analysis.  Prevention has unfortunately proven to be a losing battle against motivated hackers.  Modern paradigms (e.g., "zero trust") depend on machine learning and dynamic analysis to enable detection and confine the impact of a security breach.

For safety, we need a similar change in mindset, and frameworks for dynamic analysis of a complete system for compliance with the safety goals defined for that system.  We should continue to address the development process (requirements definition, architecture documentation, static analysis, configuration management) in an effective and realistic way, leveraging modern development, build, and test tools, without pushing to the extreme of simplistic "waterfall" development processes.  And at the same time, we should build into our CI systems strong utilities for dynamic configuration management and testing.



From: dmg <dmg@...>
Sent: Thursday, June 23, 2022 12:16 AM
To: Elana Copperman <Elana.Copperman@...>
Cc: Aggrwal, Poonam <Poonam.Aggrwal@...>; Peter.Brink@...; devel@...
Subject: Re: [ELISA Technical Community] Limitations of code quality to ensure safety of modern software


EXTERNAL EMAIL: Do not click any links or open any attachments unless you trust the sender and know the content is safe.


From a theoretical perspective, it is impossible to build a program that verifies whether an arbitrary program is bug-free – even if we define "bug-free" as nothing more than whether the program terminates or ends in an infinite loop (the halting problem).


Thus, computer scientists have focused on the properties of a program that can be verified.  In a way, this is the role of the specification of the language, and of the compiler (and in some cases the run-time library) in enforcing it.


One of the simplest verifications that can be done on a program is checking for type inconsistency: when a value is calculated, the result must be placed in a variable whose type is consistent with that result (e.g., a float is not stored in an integer).  This is probably the biggest advantage of type-safe programming languages such as Rust.


The biggest problem is that a checking system has to balance two aspects, completeness and consistency, and it cannot achieve both perfectly.  In simple terms, a checking system wants to be consistent (never accept an erroneous program), even if that means it forbids some programs that are actually correct (the construct is fine, but it looks "unsafe" to the compiler/checker).


C was built at a time when little of this was understood.  Rust, on the other hand, implements this theory of type safety.


But being type-safe does not guarantee full safety.


In summary:


1. Perfect verification of most interesting properties of a program is theoretically impossible.


2. Rust and other modern languages, like Scala, aim for type safety, which is better than nothing. C does not have type safety.


3. Because perfect verification of most interesting properties cannot be achieved (Rice's theorem, a consequence of the undecidability of the halting problem), safe languages and their run-time libraries place restrictions on programmers that languages like C do not.

 There will always be a tradeoff between the benefits of using Rust and those of using C.


4. The economics of software development work against the programmer. Writing safe code requires more thinking and training from the programmer, slowing him or her down. If I am evaluated on whether I finish a piece of code, or finish half of a safer piece of code, which one do you think I will choose? Incomplete code is more "unsafe" than "unsafe" finished code (every time I run unfinished code, it crashes :)


I believe that in time, most projects will start using Rust instead of C. But this will take time, as most programmers have not learned the basics needed to understand the functional-programming foundations of Rust.

Also, it will take time for C and C++ libraries to be ported to Rust.


If I remember correctly, Linus discussed (with others) the potential of moving to C++. His response was (I paraphrase) that C is a simpler language, easier to read and to write than C++. I think he also mentioned that more people know C than C++.


On Tue, Jun 21, 2022 at 8:46 AM Elana Copperman wrote:


But the point here is much more subtle: 

For the code submitted to this contest, I would assume that all of the aspects related to product and process quality, including design, are well covered and would be qualified by any safety standard.  But the final result is not safe.

The inherent features of C (and, to a lesser extent, C++) enable the clever developer to write qualified unsafe code.

And although the contest focuses on security issues (which, as noted by Poonam, are more highlighted), similar tricks can be implemented to subvert safety features as well as functional ones.

The type of testing which would block such code is on a different level altogether.


Bottom line: documented architecture and design, requirements, classic (primarily static) testing, and the other aspects that dominate safety standards and legacy processes are not necessarily appropriate mechanisms for ensuring the quality and safety of modern software systems, particularly those written in a language such as C.



From: Aggrwal, Poonam <Poonam.Aggrwal@...>
Sent: Tuesday, June 21, 2022 5:51 PM
To: Peter.Brink@...; Elana Copperman <Elana.Copperman@...>; devel@...
Subject: RE: [ELISA Technical Community] Limitations of code quality to ensure safety of modern software







I jumped in because I found the topic interesting.


The shortcomings of C in comparison to type-safe languages like Java and Rust are being highlighted in the security context too.


However, I am not sure that today's automobile SW stacks (AUTOSAR, Adaptive AUTOSAR) use them; they are pretty much written in C.




From: devel@... <devel@...> On Behalf Of Brink, Peter via
Sent: Tuesday, June 21, 2022 7:30 PM
To: Elana Copperman <Elana.Copperman@...>; devel@...
Subject: Re: [ELISA Technical Community] Limitations of code quality to ensure safety of modern software


Hi Elana,


Not sure why you directed this to me.  I have always advocated for product and process quality, of which code quality is just one aspect.  As you say at the end, the safety of a product might be compromised by its design, which is why I have been advocating for the quality and safety aspects mentioned above.




From: Elana Copperman <Elana.Copperman@...>
Sent: Tuesday, June 21, 2022 1:02 AM
To: Brink, Peter <Peter.Brink@...>; devel@...
Subject: Limitations of code quality to ensure safety of modern software


Hi Pete,


I don't know if this contest is still being supported, but as you can see, it highlights the limitations of C as a programming language.

Ensuring the "quality" of any C-based safety-critical system is therefore not easy, even before we get to the limitations of Linux and open source.

Unfortunately, throughout my career I have seen plenty of code examples which, although not malicious in the sense of this contest, comply with accepted development/coding/test processes – but are inherently unsafe, sometimes by design.







D M German
