
The challenges of Continuous Fuzzing

Author: Arjen Rouvoet

In this blog post, we discuss the step up to continuous fuzzing and the tool support that makes it possible. This is the second blog post in our series about fuzzing embedded systems. Click here to view other blog posts.

The attack surface of embedded systems is large. The software must not only be secure under friendly operational conditions but also remain resilient in a hostile environment where data may be compromised.

Verifying that your software products are secure in those circumstances is hard. Despite the development of a variety of techniques to aid verification, teams struggle to integrate these techniques into their software development process. One reason is the lack of good, integrated, and supported tooling that takes some of the burden off their shoulders.

At Riscure, we have been working to address these struggles, which we see time and time again in embedded software development teams. In this blog post, we discuss a step up from one-shot fuzzing to continuous fuzzing and the tool support that makes this possible.

From One-Shot to Continuous Fuzzing

If something requires an initial investment, then you need to make sure it pays itself back over time. This mindset also applies to keeping software secure over time. The incidental costs of just-in-time security certification are steep and can impact timely release of a product. Instead, we should distribute the costs of delivering secure code over time, by continually monitoring security properties of embedded code. This practice of trying to detect and address issues earlier in the software development process is sometimes called “shift left”.

Dynamic security analysis includes three types of costs:

  1. Acquiring analysis expertise.
  2. Acquiring tooling expertise.
  3. Harnessing of project code.

Analysis expertise is the know-how needed to be effective with an analysis technique—e.g., knowing which functions make good fuzzing targets. Tooling expertise is the more applied knowledge of configuring a particular set of tools (compilers, linkers, analyses, reporting) to produce meaningful results. And finally, harnessing is about preparing project code to run in a test environment.

Without specialized tools, these costs must all be paid up front, which stops teams from getting started with dynamic analysis techniques such as fuzzing. Riscure True Code addresses this problem in two ways. Firstly, it encapsulates expertise, and it simplifies and automates harnessing. This provides a quick, low-cost path to getting started with techniques such as fuzzing. Secondly, it increases the return on any investment in a good harness or analysis configuration by enabling reuse over time and across teams. That is, True Code helps teams transition from a one-shot, last-minute model of performing security reviews to a continuous model: harness and configure once, benchmark continuously.

The Lifecycle of a Benchmark

Security benchmarks, then, are intended to be configured once and run continuously over time. For this to work, we split the analysis into an interactive configuration workflow and a non-interactive execution workflow.

[Figure: Benchmark lifecycle]

The short loop represents the local workflow for setting up an analysis. A developer creates the benchmark configuration and executes it locally. Then they share the configuration with the team and the CI agent. The configured analysis can then be performed repeatedly and autonomously as the project code evolves.

For this to work, the execution of a benchmark must be highly reproducible, both across machines and over time. This is tricky, because embedded C typically requires modification and instrumentation before it can be fuzzed. These modifications form the analysis harness, which is part of the benchmark configuration and is designed specifically for ease of (re)use.

Anatomy of Reproducible Benchmarks

To make embedded C fuzzable, we need to:

  1. isolate the code under test,
  2. stub software and memory-mapped hardware, and
  3. build the code for the fuzzing platform.
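
To make this concrete, here is a minimal sketch of what the driver for such a benchmark might look like. This is an illustration rather than True Code's actual interface: the libFuzzer-style byte-buffer signature and the parse_image function under test are assumptions.

/* driver.c: a minimal fuzz-driver sketch (illustrative, not True Code's
 * actual API). Assumptions: a libFuzzer-style byte-buffer entrypoint,
 * and parse_image() as a hypothetical function under test. */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical prototype of the code under test; in a real project this
 * would come from the project's own headers. */
int parse_image(const uint8_t *data, size_t size);

/* The entrypoint named in the benchmark configuration ("./driver.c:driver"). */
int driver(const uint8_t *data, size_t size) {
    /* Hand the fuzzer-generated bytes to the isolated code under test.
     * Memory errors and other enabled crash triggers surface here. */
    (void)parse_image(data, size);
    return 0; /* non-crashing inputs are simply uninteresting */
}

The driver deliberately stays thin: all it does is route fuzzer input into the isolated slice of project code, leaving crash detection to the enabled triggers.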

A benchmark configuration acts as a definition of changes with respect to the project sources and build. When True Code runs a benchmark, it applies these changes to the latest sources and build, generating a minimal, instrumented version of your project’s code. This is subsequently compiled, run, and analyzed by True Code.

[Figure: Benchmark anatomy]

The benchmark.json file is the source of truth for the preparation and execution of the analysis. It looks like this:

{
  "type" : "fuzzing",
  "driver" : {
    "entrypoint" : "./driver.c:driver",
    "linking" : [ 
        "${.}/main.c",
        "${.}/lib/runtime.c"
    ]
  },
  "stubs": {
    "${.}/main.c:load_image": "./stubs/image.c",
    "${.}/main.c:save_image": "./stubs/image.c",
    "${.}/runtime.c:exit": "./stubs/rt.c",
    "${.}/logging.c:exit": "./stubs/log.c"
  },
  "build" : {
    "compileCommands" : [ 
        "${.}/compile_commands.json", 
        "./compile_commands.json"
    ],
    "CPPFLAGS": [ "-I../include/", "-DPLATFORM=x86_64" ]
  },
  "fuzzing" : {
    "stopCondition" : {
      "minutes" : 30
    },
    "triggers" : [ 
        "address",
        "ub",
        "-ub:integer:implicit-sign-change"
    ]
  }
}


The sections of the configuration represent the different stages of benchmark execution. The driver section determines the slice of the project sources that we put under test. The stubs section replaces selected globals and function implementations with mock implementations written specifically for fuzzing; stub implementations can of course be reused across benchmarks. The build section selects the project build configuration, in the form of one or more JSON compilation databases, and can configure additional flags for the preprocessor, compiler, or linker. Finally, the fuzzing section specifies how the analysis is executed, for example with a stop condition and a list of enabled/disabled crash triggers.
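
As an illustration of the stubbing pattern, the sketch below shows what a ./stubs/image.c for the load_image replacement above might contain. The signature and the global staging buffer are assumptions for illustration; the point is that the stub feeds fuzzer-controlled data back to the code under test instead of touching real storage or hardware.

/* stubs/image.c: an illustrative sketch of a stub for load_image().
 * The signature and the staging mechanism are assumptions, not True
 * Code's API. Instead of reading an image from storage or a
 * memory-mapped device, the stub returns bytes that the fuzz driver
 * staged beforehand, so the fuzzer fully controls the "loaded" image. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Staged by the fuzz driver before invoking the code under test. */
const uint8_t *g_fuzz_image;
size_t g_fuzz_image_len;

int load_image(uint8_t *dst, size_t capacity) {
    if (g_fuzz_image == NULL || g_fuzz_image_len > capacity) {
        return -1; /* mimic the real error path for missing or oversized images */
    }
    memcpy(dst, g_fuzz_image, g_fuzz_image_len);
    return (int)g_fuzz_image_len;
}

Because such a stub has no dependency on the original I/O path, the same file can be reused by any benchmark that needs to mock image loading.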

The benchmark configuration and stubs are intended to be committed to version control and shared with the team and the CI agent. This enables others to run the same analysis:

[Figure: Benchmark execution]

Conclusion

There is much more to be said about True Code's approach to continuously monitoring security using benchmarks. In future blog posts, we will look at the assistance True Code offers in constructing benchmark configurations and stubs, as well as its upcoming support for stubbing memory-mapped hardware using the hardware-abstraction layer for fuzzing.

This concludes the post for now. We have motivated the need for continuous security benchmarking, which distributes the cost of producing secure embedded C over the development cycle. With True Code benchmarks, we support this shift to the left by making dynamic analyses such as fuzzing reproducible.

Click here to read the next blog post
