Christopher Gates, Director of Product Security, Velentium | 06.04.24
In February, the White House released a report titled “Back to the Building Blocks: A Path Toward Secure and Measurable Software.”1 While it is nice to have an administration trying to improve the cybersecurity posture of our country, this report is so limited and fanciful in scope that it’s mostly irrelevant; it can only serve as aspirational, not operational.
Even the title—Back to the Building Blocks—is misleading; did we have secure products back in the day? If attacking a device, choose one that is 10 to 20 years old. I guarantee it is much easier to hack than just about any recently developed product. Back then, some of us were programming in Assembly language—the complete antithesis of “memory safe.”
It’s also unclear who created this document; there are no attestations as to who wrote or even contributed to it. Based on the language used, I doubt any representatives of operational technology (OT) developers were involved. For example, a chief information security officer is mentioned rather than a chief product security officer; the latter could have contributed valuable insights into the cybersecurity differences between IT and OT.
While it’s a positive to have the White House focusing on cybersecurity risks, I wish they had crafted a more comprehensive and actionable plan of attack. The report covers four technical topics:
- Memory safe programming languages
- Memory safe hardware
- Formal methods
- Cybersecurity quality metrics
Memory Safe Hardware
A concern with this section of the report is that the security of the nation is most threatened by operational technology, not information technology. Which would you rather lose to a hospital cyberattack: the ventilator breathing for you or the billing system?
One idea that’s been suggested is to have hardware that enforces a program’s memory access. ARM v9 implemented this but, so far, I have not seen it in actual use. The most recent ARM microcontrollers are v8; memory safe hardware isn’t available for the OT domain. However, my next-generation smartphone with a CPU based on ARM v9 has it. Further, ARM TrustZone (i.e., hardware-enforced secure partitioning of executables) has been around for over 10 years. Unfortunately, very few devices actually leverage this technology; it is primarily the basis for Samsung’s Knox virtualized secure CPU system.
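To make that concrete, here is a minimal sketch (my example, not the report’s) of the bug class that hardware-enforced memory access is meant to catch. On Armv9 silicon with the Memory Tagging Extension, a tag-aware allocator, and operating system support, both bad accesses below would fault immediately; on conventional hardware they silently corrupt memory.

```c
#include <stdlib.h>

/* Illustrative only: two classic memory-safety bugs that hardware
   memory tagging (e.g., Armv9 MTE) is designed to trap at run time. */
int main(void)
{
    char *buf = malloc(16);  /* a tag-aware allocator tags this 16-byte granule */
    if (buf == NULL)
        return 1;

    buf[16] = 'X';           /* out-of-bounds write: the adjacent granule
                                carries a different tag, so tagged hardware
                                faults here instead of corrupting memory */
    free(buf);
    buf[0] = 'Y';            /* use-after-free: the freed granule is retagged,
                                so this access faults as well */
    return 0;
}
```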
Formal Methods
While these techniques have existed for decades, they are incredibly laborious and, therefore, expensive to perform (e.g., RTCA’s DO-178B for airborne electronic systems). The report even acknowledges this: “While formal methods have been studied for decades, their deployment remains limited; further innovation in approaches to make formal methods widely accessible is vital...” In other words, there is no way the average OT or IoT device maker is going to create formal proofs for its device; it simply isn’t cost-effective.
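To give a sense of the labor involved, below is a minimal sketch of a formal specification written in ACSL, the annotation contract language used by the Frama-C verifier (my example, not one from the report). Even this trivial function needs a machine-checkable contract before a prover can discharge it; real device firmware contains thousands of such functions.

```c
/* Illustrative only: a formal contract in ACSL (Frama-C). The annotations
   state what the function requires, what it may modify, and what it
   guarantees, so a prover can verify the code against the specification. */
/*@ requires lo <= hi;
    assigns \nothing;
    ensures lo <= \result <= hi;
    ensures (lo <= v <= hi) ==> \result == v;
*/
int clamp(int v, int lo, int hi)
{
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}
```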
Cybersecurity Quality Metrics
Frankly, this portion is another trip to Fantasyland. If such a tool or metric existed, we would already be using it (and investing in the company selling it). Two kinds of vulnerabilities can be created during the development of a new OT product: design vulnerabilities and implementation vulnerabilities. Good static application security testing (SAST) tools already exist and cover a wide range of programming languages, but SAST tools only address implementation vulnerabilities. They do not address design vulnerabilities, nor can they.
Creating a tool that scans source code for bad coding practices is pretty easy compared to understanding the design choices made and how those tradeoffs can affect the security of the system.
Today, we try to discover design vulnerabilities with semi-manual tools like threat modeling, but the discipline is still in its early days. As an example, not even the best threat model considers time-domain impacts on the security of a system, such as delays in data essential to the system’s performance. Imagine glucose sensor readings that don’t reach the closed-loop insulin infusion pump in the expected timeframe: perfectly valid readings, just not delivered in a valid timeframe. A sketch of the kind of freshness check this implies follows below.
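Here is a hedged sketch of such a time-domain check in C; the names (glucose_reading_t, MAX_READING_AGE_MS) are hypothetical, not taken from any real pump firmware. The point is that this is a design decision: no source-code scanner would flag its absence.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: reject readings delivered outside a valid timeframe.
   All names and limits here are hypothetical. */
#define MAX_READING_AGE_MS 300000u  /* assumed limit: readings older than
                                       5 minutes are unsafe to dose on */

typedef struct {
    double   mg_dl;         /* glucose concentration (a perfectly valid value) */
    uint64_t timestamp_ms;  /* when the sensor captured the sample */
} glucose_reading_t;

/* A reading is actionable only if it is fresh enough to base a dose on. */
static bool reading_is_actionable(const glucose_reading_t *r, uint64_t now_ms)
{
    if (now_ms < r->timestamp_ms)   /* clock skew: distrust the reading */
        return false;
    return (now_ms - r->timestamp_ms) <= MAX_READING_AGE_MS;
}
```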
We are decades away (if this is ever possible) from having a “magic” scoring rubric that will codify the overall security posture of a device. While such a metric would be nice to have, since we are wishing for the improbable, I’d prefer world peace, racial harmony, or an endless supply of food and clean water.
Memory Safe Programming Languages (or, A Love Letter to Rust)
I constantly encounter people who are looking for, or claiming their latest product will be, the “silver bullet” for all of your security risks. Silver bullets don’t exist, and taking an action such as changing to a new programming language is not going to be one either.
Mistakenly, this report demonizes C/C++ as the original sin and states that if people used Rust instead, all would be perfectly secure. This is utter hogwash! C/C++ makes it easy to write bad software, including all the gray areas of undefined behavior. However, C/C++ can be made memory safe if the programmer carefully applies immutability and “referential transparency,” as sketched below.
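As a hedged illustration (mine, not the report’s), here is what that discipline can look like in plain C: const-qualified inputs for immutability, explicit validity checks, and a referentially transparent function that touches nothing but its arguments.

```c
#include <stddef.h>

/* Illustrative only: memory-safer C through immutability and referential
   transparency. The function mutates nothing, reads only its arguments,
   and always returns the same output for the same input. */
static double mean(const double *const samples, const size_t count)
{
    if (samples == NULL || count == 0)   /* explicit validity check */
        return 0.0;

    double sum = 0.0;
    for (size_t i = 0; i < count; i++)   /* index stays within [0, count) */
        sum += samples[i];
    return sum / (double)count;
}
```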
The report states: “…up to 70 percent of security vulnerabilities in memory unsafe languages patched and assigned a CVE designation are due to memory safety issues. When large code bases are migrated to a memory-safe language, evidence shows that memory safety vulnerabilities are nearly eliminated.”
This is based on a Microsoft blog article.2 Given that Microsoft is committed to “C-like” languages, why aren’t they converting to Rust?
Further, the latter portion—“memory safety vulnerabilities are nearly eliminated”—comes from a Google blog.3 Since Google is such a huge proponent of Rust, we have to question why memory safety vulnerabilities were not “completely eliminated” instead of just “nearly.”
There are a couple of paragraphs on the failed 1970 Apollo 13 mission, which had nothing to do with cybersecurity, so this material seems completely out of place in the report. The anonymous authors then make an odd statement: “…the near disaster was inadvertently caused by the laws of physics…” Do the authors believe the laws of physics are optional? There is nothing inadvertent about physics.
These authors also seem to think you can “test in” security, since all of the mitigations they cite are applied post-development, such as static analysis of the code. It is debatable whether static analysis even qualifies as a formal method.
In addition, why isn’t there any discussion of “Secure by Design”? Security is not “bolted on” after the product has been created; instead, it is built in through the conscious application of rules throughout the product’s pre-market and post-market phases.
Rust may eventually replace C, but it will have to get there on its own merits. Rust presents several positives: memory safety, reduced debugging, no garbage collection, and multi-threading. Unfortunately, there are negatives as well: the amount of work required to replace the huge quantity of C code developed over decades, a rather steep learning curve, no standardized version of the language against which to validate a Rust compiler, and a development model that seems better suited to a hobbyist compiler than a professional tool. In addition, Rust itself is not immune to vulnerabilities.4
Will the future even continue the decades-old development processes we currently follow (i.e., a software engineer converting poorly written textual requirements into hastily written software using software components from unvetted sources)? Or, is it more likely (than everyone converting to Rust) that AI agents will write software in a programming language that no longer matters since AI will only be writing secure code, whether in Rust or Assembly? After all, the security protections in any programming language (including Rust) are there to prevent the programmer from creating vulnerabilities. If vulnerabilities are eliminated due to the training of the AI, the coding language is irrelevant. There would be no need for any security scoring rubrics either.
When I was very young, programming was accomplished by writing Assembly mnemonics on a piece of paper and converting them, by hand, into the binary values for each opcode. Soon after, we were using high-level language compilers (C, COBOL, Fortran, APL, Simula 67, ALGOL, etc.), which provided an immense productivity gain as well as a substantial quality improvement. That’s the type of seismic shift we are about to experience; the use of AI will fix all of these issues: security, productivity, quality, and more.
While AI is not sophisticated enough yet,5 give it time: someone will train a large language model on secure coding practices in a programming language that few humans will ever again use to write a program by hand. That scenario most likely represents the future, not terabytes of C/C++ code being manually converted to Rust.
I am an evangelist for cybersecurity best practices in medical device development, but these need to be practical, workable practices available today or very soon. Grandiose plans, such as converting all development to Rust or waiting for the perfect security scoring rubric to answer all our questions, are not realistic responses to today’s concerns. We need to refocus on the activities we can accomplish today, as those tasks are challenging enough.
References
1. tinyurl.com/3bjtc39r
2. tinyurl.com/msd2bssk
3. tinyurl.com/3k7m6hf9
4. tinyurl.com/3e2pwaxv
5. tinyurl.com/yc8jshen
Christopher Gates is the director of Product Security at Velentium and the current co-chair for H-ISAC’s MDSC. He has more than 50 years of experience developing and securing medical devices and works with numerous industry-leading device manufacturers. He frequently collaborates with regulatory and standard bodies, including the CSIA, Health Sector Coordinating Council, H-ISAC, Bluetooth SIG, and FDA, to present, define, and codify tools, techniques, and processes that enable the creation of secure medical devices.