
Is software security attainable?

Author: Marc Witteman

Abstract

Software security is widely considered an increasing concern. Daily reports of data breaches and hacked products feed the perception that everything is broken. While understandable, this view is too simplistic and ultimately incorrect. In software security, assurance is typically achieved by a combination of manual and automated activities that systematically find issues that can then be fixed. Software security evaluation identifies vulnerabilities that could lead to exploitation in the field. It also produces statistics that reveal the maturity of the development team and the progress in code quality as bugs are fixed. Even though it is understood that a final product may not be free of vulnerabilities, the evaluation process can provide trust that the product will not be trivially exploited in the field.

Introduction

We are increasingly dependent on products that run on software, while at the same time, the number of computer incidents and data breaches is alarmingly high. It almost seems like every product is weak. At Riscure, we evaluate the security of connected devices and try to help developers make more secure products. While we confirm that the risks are significant, we also see the benefits of various evaluation methods, and we believe that strong software is attainable.

In this article, we share some of our observations in software security and the benefits of evaluation. Our experience covers thousands of evaluation projects in the field of firmware, operating systems, and applications. It is important to realize that perfect security does not exist, as any product can be broken given enough time and money. Ultimately, perfection is irrelevant; it is sufficient to raise attacker costs beyond what attackers are willing to spend. Therefore, the industry uses the term assurance rather than proof when approving products. Evaluation is the term for all activities (review, analysis, simulation, testing) undertaken to collect the evidence needed for assurance.

Specification

The development of a product starts with requirements and design. An overseeing body, a customer, or even the developer itself can set requirements. They can be specific, addressing known threats, or non-specific, referring to generic properties. A design is made by the developer and typically explains how the requirements are met, specifying countermeasures that mitigate threats. We know that virtually no product is free of bugs, and we want to stress the importance of threat mitigation to prevent attackers from exploiting those bugs.

An evaluator can review requirements and design and verify whether these address and mitigate the relevant threats. This work is manual and paper-based and can build trust in the developer as an organization that understands the threats and knows how to mitigate them. We use the identified design weaknesses to help a developer improve the product and/or select potentially interesting attack paths to increase the efficiency of implementation evaluation.

Implementation

During the implementation phase, the design is converted into actual code. This is an error-prone process, especially since many dangerous vulnerabilities are non-functional and therefore non-trivial to detect. Code size keeps growing (a trend often compared to Moore’s law), resulting in very large software products. Even a standard IoT device may contain more than 1 MLoC (million lines of code), while complex products can exceed 100 MLoC. We observe that new code from inexperienced development teams may initially contain up to 10 vulnerabilities per KLoC (thousand lines of code). It is clear that, without expert analysis, such products, carrying thousands of security issues, are easy victims of software hacking in the field.

So how do we find the bugs?

During the implementation phase, we typically apply white box evaluation methods, meaning that the developer gives us full access to the source code. We then apply a mix of manual and automated methods.

Manual code review is a powerful evaluation method, since a strong expert will be able to find even the most complex issues. A disadvantage is its lack of scalability. Our experts can review about 1 KLoC per day, so manually reviewing a 1 MLoC source code package would take roughly a thousand expert-days, which quickly becomes impractical and unaffordable. This constraint can be somewhat mitigated by careful selection of, and focus on, the critical code, i.e., the code that is most sensitive to exploitation. Automated code analysis is an essential add-on to deal with scale. Here we distinguish between static and dynamic code analysis.

Static code analysis automates part of what a manual reviewer does, but it is largely limited to relatively simple vulnerabilities. Ample tooling exists for this (including our True Code product). While this method is fast and scalable, it suffers from both false positives and false negatives. In the context of a security evaluation, we use experts to validate and filter the results of static code analysis.
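
To give an impression of the kind of relatively simple issue these tools report, the fragment below is a hypothetical C example (not taken from any evaluated product): an unbounded strcpy into a fixed-size stack buffer, the classic pattern that most static analyzers flag out of the box.

    /* Hypothetical example of a finding a static analyzer would report:
     * copying an attacker-controlled string into a fixed-size buffer
     * without a bounds check allows a stack-based buffer overflow. */
    #include <stdio.h>
    #include <string.h>

    void greet_user(const char *username)
    {
        char buf[16];
        strcpy(buf, username);                 /* flagged: no bounds check */
        printf("Hello, %s\n", buf);
    }

    /* A typical fix keeps the copy within the buffer bounds. */
    void greet_user_fixed(const char *username)
    {
        char buf[16];
        snprintf(buf, sizeof(buf), "%s", username);
        printf("Hello, %s\n", buf);
    }

An expert still has to judge whether such a finding is reachable with attacker-controlled data, which is why we validate and filter the tool output.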

Dynamic code analysis actually executes the code while scanning for bugs. This avoids false positives and can expose bugs in complex situations. Simulation and fuzzing are examples of these techniques. While powerful, they have the disadvantage that configuration and anomaly analysis are complex and require expert involvement.
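
As a sketch of what fuzzing can look like in practice, the fragment below shows a minimal harness in the libFuzzer style; parse_message is a hypothetical function under test. The fuzzing engine repeatedly calls the entry point with mutated inputs, and any crash, hang, or sanitizer report points at a concrete bug, which is why this technique produces few false positives.

    /* Minimal libFuzzer-style harness (sketch). parse_message() is a
     * hypothetical parser under test; the fuzzing engine supplies
     * millions of mutated inputs through this entry point. */
    #include <stdint.h>
    #include <stddef.h>

    int parse_message(const uint8_t *data, size_t size);   /* code under test */

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        parse_message(data, size);   /* a crash or memory error here is a finding */
        return 0;
    }

Such a harness is typically built with sanitizers enabled (for example clang -fsanitize=fuzzer,address), and the real expert work lies in choosing good entry points and analyzing the anomalies the fuzzer reports.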

With this mix of methods, we evaluate hundreds of products annually, find thousands of security issues, and help our customers build much stronger attack resistance. Repeated evaluation of evolving products allows us to detect trends in metrics. Using these metrics, a developer can monitor progress in security quality and signal the need for training or additional evaluation work.
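
As a simple illustration of such a metric (the figures below are invented for illustration only), bug density can be expressed as confirmed findings per KLoC and compared across evaluation rounds:

    /* Sketch: tracking bug density (findings per KLoC) across evaluation
     * rounds of an evolving product. All numbers are hypothetical. */
    #include <stdio.h>

    struct evaluation {
        const char *release;
        int findings;   /* confirmed security findings in this round */
        int kloc;       /* code size in thousands of lines */
    };

    int main(void)
    {
        struct evaluation rounds[] = {
            { "v1.0", 120, 400 },
            { "v1.1",  65, 450 },
            { "v2.0",  30, 500 },
        };

        for (size_t i = 0; i < sizeof rounds / sizeof rounds[0]; i++) {
            double density = (double)rounds[i].findings / rounds[i].kloc;
            printf("%s: %.2f findings per KLoC\n", rounds[i].release, density);
        }
        return 0;
    }

A falling density over successive releases is the kind of trend that points to a maturing development team, while a rising one signals the need for training or additional evaluation work.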

Operation

While much of our work focuses on implementation, it is important not to ignore threats during operation. First, many security issues in the field originate from bad configuration (e.g., weak passwords) or late patching. Additionally, supply chain issues, software updates, and emerging attacks can be reasons for testing during operation.

Typically, operational testing is a black box activity, evaluating the product under the conditions it faces in the field. While these tests may be less efficient than white box tests, they approximate real-life risks most closely and give a fair assessment of how the product holds up in practice. When there is a concrete concern about operational security, we often find that an attack is indeed possible. Our assessment helps to judge whether product operation can continue securely or whether other actions are needed.

Conclusion

We know that security is difficult and that developers need to make significant investments to achieve assurance. At the same time, we see that our evaluation methods make a strong contribution to product security. When we work with developers over time, we often see a gradual decline in the bug density. The level and decline of bug density can provide assurance that a product is sufficiently secure. Even though it is understood that the final product may not be free of vulnerabilities, the evaluation process can then provide trust that the product will not be easily broken in the field.
