Security Highlight: Stretching local attacks too far

Author: Marc Witteman

Consumer device security is typically affected by hardware security threats. As these devices operate in uncontrolled conditions, lacking physical protection, they may be subjected to local attacks in provisional setups and makeshift labs. While these local attacks can have a disastrous effect on security, it can be practically difficult for an attacker to gain physical access to their target. Therefore, researchers are interested in understanding whether these attacks can be executed at a greater distance. For instance, when a device leaks confidential side-channel information through electromagnetic emanations, this may also be measured from some distance if a sensitive antenna is used.

Researchers from Cornell and Ben Gurion universities recently came up with a novel approach. Essentially, they convert leakage from one side channel to another, which allows observation from a greater distance. Their targets were cryptographic implementations in smart cards and smartphones that would leak key material through power consumption. Since the power consumption could only be measured locally, they looked for an existing component that would transform the power signal into something that could bridge a greater distance.

On the transmission side, they selected a power LED as a light source, knowing that high-frequency signals transmit easily in light (think of the fibers used for high-speed internet). The power LED would be connected to the same power source that fed the cryptographic processor. As the leakage signal draws current from the power source, this has a small effect on the voltage available for the power LED, which may be observable from a distance. They performed two experiments. One experiment involved a smart card inserted in a reader with a power LED. The other experiment involved a smartphone connected to a power cable that also fed a Bluetooth speaker with a power LED.

On the receiving side they used a digital camera, which was adjusted to allow for a very fast frame rate (60k pictures/s). The camera was shown to be able to detect tiny light fluctuations from the power LEDs at a distance of 16 meters. Next, filtering software was used to extract the original power signal from the cryptographic device and, consequently, decode a cryptographic key. None of this requires custom hardware: such cameras could indeed be present at locations where cryptographic functions are performed, and attackers may gain access to the camera feed.
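The recovery step can be illustrated with a minimal sketch. This is not the researchers' actual pipeline; the function, the region-of-interest handling, and the synthetic flicker are our own assumptions. The idea is simply that averaging the pixel intensity over the LED region of each frame yields one sample per frame, producing a trace whose spectrum reveals periodic supply fluctuations.

```python
import numpy as np

def led_trace(frames, roi):
    """Recover a raw brightness trace from video frames (illustrative).

    frames: array of shape (n_frames, height, width), grayscale
    roi: (row_slice, col_slice) covering the power LED in the image
    Returns a zero-mean trace with one sample per frame.
    """
    rows, cols = roi
    trace = frames[:, rows, cols].mean(axis=(1, 2))  # average the LED pixels
    return trace - trace.mean()                      # remove the DC offset

# Synthetic demo: a faint 1 kHz flicker buried in noise, "filmed" at 60 kfps.
np.random.seed(0)
fs = 60_000
t = np.arange(fs) / fs
flicker = 0.01 * np.sin(2 * np.pi * 1_000 * t)           # tiny LED modulation
frames = 0.5 + flicker[:, None, None] + 0.05 * np.random.randn(fs, 8, 8)

trace = led_trace(frames, (slice(0, 8), slice(0, 8)))

# The dominant spectral peak sits at the flicker frequency.
spectrum = np.abs(np.fft.rfft(trace))
peak_hz = np.fft.rfftfreq(len(trace), 1 / fs)[spectrum.argmax()]
```

Averaging across the LED pixels is what makes the faint modulation recoverable at all: per-pixel noise shrinks with the number of pixels averaged, while the common flicker does not.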

So yes, the researchers proved that a local physical attack can be stretched into the vicinity of the target, and that the threat of local attacks should be taken more seriously. However, with the experience of 20 years of side-channel attacks in a sophisticated security lab, we can put this evidence in context. We argue that this attack is little more than a fancy demonstration without practical impact.

First of all, the transformation of a cryptographic power consumption signal into a power source fluctuation is a very rough one. While it was demonstrated that some signal remains, this signal is heavily attenuated by the normal inductance and capacitance of the power line and source. Also, the same power source feeds many more (noisy) processes than just the cryptographic process. This leads to an extreme reduction of the signal-to-noise ratio. In contrast, when an evaluation lab measures power leakage, this is done as close as possible to the consuming chip, while removing noise sources and signal-flattening components. The demonstrated setup, using a power LED, will therefore suffer from extreme information loss.
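The attenuation argument can be made concrete with a first-order low-pass model of the supply path. The component values below are purely illustrative assumptions, but they show why high-frequency leakage barely survives in the LED voltage while slow variations pass almost untouched.

```python
import math

# First-order RC low-pass model of the power supply path (assumed values).
R = 1.0       # ohms: series resistance of the supply path
C = 10e-6     # farads: decoupling capacitance near the chip
fc = 1 / (2 * math.pi * R * C)   # cutoff frequency, roughly 15.9 kHz

def attenuation_db(f):
    """Magnitude attenuation of a first-order low-pass at frequency f (Hz)."""
    return -20 * math.log10(math.sqrt(1 + (f / fc) ** 2))

slow = attenuation_db(1e3)    # a 1 kHz leakage component: nearly unattenuated
fast = attenuation_db(10e6)   # a 10 MHz leakage component: heavily suppressed
```

With these values, the 1 kHz component loses a fraction of a decibel while the 10 MHz component is suppressed by more than 50 dB, i.e., its amplitude shrinks by a factor of several hundred before it even reaches the LED.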

Secondly, recording through a video camera imposes a significant speed limitation. Even though the camera was adjusted to allow a very fast frame rate (60k frames/s), this comes nowhere close to the capabilities of professional oscilloscopes, which can measure at more than 1 billion samples/s. This means that fast signal variations, which often matter a lot, would be totally invisible with the chosen attack method.
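The bandwidth gap follows directly from the Nyquist criterion: a sampler running at rate fs can only capture signal content below fs/2. The arithmetic below compares the camera's effective rate with an assumed 1 GS/s lab oscilloscope.

```python
# Nyquist: a sampler at rate fs captures only content below fs / 2.
camera_fps = 60_000            # effective frame rate of the camera setup
scope_rate = 1_000_000_000     # 1 GS/s, a typical professional oscilloscope

camera_bandwidth = camera_fps / 2   # 30 kHz of usable bandwidth
scope_bandwidth = scope_rate / 2    # 500 MHz of usable bandwidth

ratio = scope_bandwidth / camera_bandwidth
print(f"The oscilloscope sees roughly {ratio:,.0f}x more bandwidth")
```

Anything a chip does above roughly 30 kHz, including essentially all clock-rate activity, is simply invisible to the camera, while a lab oscilloscope captures it with bandwidth to spare.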

So, how could this demonstration still succeed? Because the attackers deliberately chose cryptographic algorithms that are slow and used extremely leaky (outdated) implementations. Such implementations would be considered 'functional proof-of-concept' and would never pass any security certification. The problem demonstrated in this work has been studied for 25 years, and hundreds of publications have been written on how to avoid or mitigate it. We acknowledge that the attack is original and in theory allows an adversary to operate remotely, without local test equipment. But we conclude that such an attack may only succeed where the affected application does not warrant a reasonable level of security testing. In that sense, it is a good warning that security testing remains important.
