
Justin, the OEM and the automotive cybersecurity requirements: Part 3

Author: Rafael Boix Carpi

Justin is on the phone with Alex from Riscure, discussing how to address the security requirements from the OEM. After some minutes, Justin realizes he needs to align with Chris from the OEM on the assumed security context (threat model).

(Story continued from previous posts: Part 1, Part 2)

“I understand this process, but I see a gap between these recommendations and the ‘shall/must’-like phrasing of the requirements that we typically get. How do they derive requirements from the goals?” – Justin asked Alex.

“Well, it’s not an easy task, as experience in cybersecurity is needed to properly translate goals into requirements. Often one goal involves several requirements. The OEM has a cybersecurity expert (or a team) experienced in avoiding all the common pitfalls while defining the security for a product, and this expert team translated the concept into more specific requirements in the ‘shall/must’ phrasing. The requirements you received are what the OEM wants to have implemented in order to reach their security goals.” – Alex replied.

“Wait a second… You just said ‘avoiding all the pitfalls’; what are those pitfalls?”

Justin knows from experience that avoiding pitfalls can save a lot of time during product development (which is another way of saying it avoids extra costs).

Knowing the common pitfalls of integrating security into requirements makes the difference between effectiveness and extra cost in real products.

To better understand these pitfalls, Justin took our online automotive course on security requirements after the phone call. If you suspect (or know) that you are running into any of these issues, ask us for help: the sooner you find your way around these common pitfalls, the more money your company saves upfront…

Here are some of the typical automotive security pitfalls addressed in the course:

Security composability and the V model: safety engineers are used to deriving the safety integrity level of a system by composing the safety integrity levels of its components (think of combining failure probabilities). For security, however, it does not work quite the same way: different threat models can make composability difficult, if not impossible. In the V development model, teams are typically quite independent: composing security from artifacts produced by teams with different threat models may result in flawed solutions.

Product lifetime: many segments where security practices are mature (e.g. payments, content protection) have product lifetimes of around 5 years or less. However, automotive ECUs live within cars that last for more than 15 years. This imposes certain challenges when designing requirements.

Safety versus security conflicts: safety practices and processes are in the DNA of automotive companies. However, safety requirements will sometimes collide with security requirements: if you meet the safety requirement, you cannot meet the related security requirement. How do you deal with that?

Automotive ecosystem evolution rate: nowadays, self-driving cars with permanent data connections and vehicle-to-infrastructure communication capabilities are becoming common. The fast pace of evolution of the automotive ecosystem radically changes the security landscape: in a very short time we moved from isolated vehicles to fully connected, high-processing-power networks. What previously was not possible now happens: malicious attackers can kill your engine remotely. How does this impact your system?

Legacy developments: when developing a new ECU, your solution does not consist exclusively of your own code; it builds on top of drivers, code and hardware artifacts from other companies. Cybersecurity requirements sometimes demand a new software or hardware foundation for your ECU. Is it better to start from scratch, or can you adapt your existing solution?

Completeness in fulfilling security requirements: sometimes security requirements create impossible situations (e.g. maintain integrity by using an obsolete hash function such as MD5), or are hard to meet completely (e.g. ensure that the encryption algorithm is not broken during the product’s lifetime), or can be met in isolation but conflict as a group (e.g. “the module shall use AES-256 encryption” and “the firmware shall be encrypted” → shall the firmware then be encrypted with AES-256?). In all these situations, the requirements issuer and the implementer should be fully aligned on what the requirements mean and what the envisioned solution offers; see the sketch below for one way such a combined requirement could be read.
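As an illustration only, one possible reading of that last combined requirement is to protect both the confidentiality and the integrity of the firmware with a single authenticated-encryption primitive, instead of pairing AES encryption with a separate (and possibly obsolete, MD5-style) integrity check. The sketch below uses AES-256-GCM from the Python cryptography package; it is not Riscure’s or any OEM’s reference implementation, and key storage, key distribution and the actual update protocol are deliberately out of scope.

```python
# Minimal sketch: satisfy "firmware shall be encrypted with AES-256" and
# "firmware integrity shall be protected" with one AEAD primitive (AES-256-GCM).
# Illustrative only; key management and the update protocol are not covered.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography


def protect_firmware(key: bytes, firmware: bytes, metadata: bytes) -> tuple[bytes, bytes]:
    """Encrypt and integrity-protect a firmware image with AES-256-GCM."""
    assert len(key) == 32                 # 32 bytes = AES-256
    nonce = os.urandom(12)                # 96-bit nonce, must be unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, firmware, metadata)
    return nonce, ciphertext


def verify_and_decrypt(key: bytes, nonce: bytes, ciphertext: bytes, metadata: bytes) -> bytes:
    """Return the firmware; raises InvalidTag if the image or metadata was tampered with."""
    return AESGCM(key).decrypt(nonce, ciphertext, metadata)


if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # in a real ECU this would live in secure storage
    fw = b"\x00" * 1024                         # placeholder firmware image
    meta = b"ECU-42 v1.3"                       # hypothetical version info bound to the image
    nonce, blob = protect_firmware(key, fw, meta)
    assert verify_and_decrypt(key, nonce, blob, meta) == fw
```

Whether such a reading is acceptable is exactly the kind of question the requirements issuer and the implementer need to settle together before implementation starts.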

After Alex quickly explained some of the pitfalls, Justin felt better prepared for the call with the OEM. He also followed our online course on security requirements before the call: the new insights he acquired allowed him to communicate effectively with Chris from the OEM, as well as figure out a way to incorporate these security requirements into their existing safety processes. Within a couple of weeks, Justin was able to offer the OEM an implementation that was judged satisfactory.

Do you want to know how he managed this final step of meeting the OEM requirements? In the last post of this series we will see the final part of the story: how you can integrate security into your company’s processes.
