Design Security: Preventing Overbuilding and Cloning of FPGA Designs

Field Programmable Gate Arrays (FPGAs) and System-on-Chip FPGAs (SoC FPGAs) are extremely versatile integrated circuit components that can be configured by the user with their custom or licensed intellectual property (IP) to perform a wide range of functions. This valuable IP needs to be protected from malicious adversaries throughout its lifecycle. Both the configuration of devices on system circuit boards during the manufacturing phase and in-the-field updates are invariably done in a less secure environment than where the IP was developed. This design note describes a cryptographically assured high-security method of programming Microsemi’s IGLOO™2 FPGAs and SmartFusion™2 SoC FPGAs in these less trusted manufacturing locations, using certified Hardware Security Modules (HSMs) as part of Microsemi’s Secure Production Programming Solution (SPPS).
Design Security Threats
FPGA IP is subject to a number of security threats such as IP theft, unauthorized cloning, reverse engineering, and the insertion of a Trojan horse. In addition, the systems in which the FPGAs are mounted are subject to overbuilding. The avoidance of all these threats is collectively referred to as Design Security. During the design process, hardware description languages and software programming languages (such as Verilog and C) are converted into a form embodying the IP called the “bitstream,” which comprises configuration bits for the FPGA fabric, I/Os, and other hardware blocks, plus binary object code for any embedded CPUs that the SoC FPGA hardware can directly utilize. Both the design source code and the bitstream need to be kept confidential to avoid these threats.
The FPGA (and system) design and verification process is normally carried out by a limited number of trusted engineers and technicians who have a high stake in the project’s success, in a relatively secure location such as the company’s headquarters. For the most critical projects – such as those for national defense or homeland security – the development environment may be very highly secured and only cleared personnel used.
On the other hand, manufacture and configuration of the systems containing the FPGAs are usually done in a much less trusted location, sometimes by employees of a different company (for instance, a contract manufacturer) or even a different country, increasing the risk of insiders or adversaries with ulterior motives. Without adequate protections, it may be relatively easy for a malicious individual to capture, substitute, clone, overbuild, or otherwise compromise the intent of the legitimate IP owner(s). The motives may range from the purely economic – such as profiting from overbuilding – to the patriotic, such as spying for a nation-state to reverse engineer sensitive IP and export it, or sabotaging systems by inserting a Trojan horse or intentional flaw in the configuration bitstream.
Configuration bitstream updates to FPGAs in systems that have been fielded and are nominally in the end-customer’s hands can also be risky. The customer may be motivated to compromise the system in order to avoid obligations such as money owed, or use the system in a way the manufacturer did not intend or authorize. If not done securely, the field update process itself can introduce additional vulnerabilities.
To mitigate some of these threats, most modern FPGA tool chains can encrypt the configuration bitstream and, using internally stored persistent secret keys, the FPGA can decrypt it at the time it is loaded.
Thus, secret cryptographic keys are the cornerstone of design security in modern FPGAs. They are used to provide confidentiality to the bitstream during initial loading, as well as authentication of field update bitstreams. This raises questions about how the keys are generated, transported, stored, and used securely – much of which happens outside the FPGAs themselves.
An important class of vulnerabilities that applies to any electronic device performing secret cryptographic operations is called Side Channel Analysis (SCA). In it, the adversary may passively monitor data going in or out of the device as well as data gleaned from unintended side channels such as power consumption or low-level emitted electromagnetic fields captured while the device is performing computations using the secret information (such as a secret key), leading to Differential Power Analysis (DPA) and Differential Electro-Magnetic Analysis (DEMA). Other versions of side channel attacks include Simple Power Analysis (SPA) and Timing Analysis (TA). Exposure to this type of analysis for an FPGA can occur whenever a secret key is used to decrypt an incoming bitstream.
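To make the idea concrete, the following Python sketch shows how DPA-style correlation analysis can recover a key byte from simulated power measurements. It is a toy model only: the single-byte XOR target, the Hamming-weight leakage model, and all function names here are illustrative assumptions, not a description of any real device’s attack surface.

```python
import random

def hamming_weight(x):
    # number of set bits; a common model of data-dependent power draw
    return bin(x).count("1")

def simulate_traces(secret_key, n, noise=0.5, seed=1):
    # toy leakage: measured power ~ HW(plaintext XOR key) plus Gaussian noise
    rng = random.Random(seed)
    plaintexts = [rng.randrange(256) for _ in range(n)]
    traces = [hamming_weight(p ^ secret_key) + rng.gauss(0, noise)
              for p in plaintexts]
    return plaintexts, traces

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def dpa_recover_key(plaintexts, traces):
    # assuming power rises with Hamming weight, the key guess whose
    # predicted leakage best correlates with the traces is the real key
    return max(range(256),
               key=lambda g: pearson(
                   [hamming_weight(p ^ g) for p in plaintexts], traces))

key = 0xA7
pts, trs = simulate_traces(key, 2000)
assert dpa_recover_key(pts, trs) == key
```

The point of the exercise: no single trace reveals anything, but a few thousand noisy measurements combined statistically pinpoint the key – which is why countermeasures must break the statistical link, not just add noise.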
For Microsemi’s flash-based FPGAs, the FPGA is typically configured just a few times – for instance, during manufacturing and for field updates. It is even possible to initially configure the FPGA so that it will not accept any field updates. For competitors’ SRAM-based FPGAs, the configuration data is held in non-volatile memory outside the FPGA and needs to be loaded and decrypted on each and every power-up cycle, creating additional potential vulnerabilities for the bitstream at rest and during transit, and especially for the associated secret decryption keys, whose use is unavoidable at each power-up event.
It is believed that prior to the introduction of the SmartFusion2 and IGLOO2 FPGA families in 2013, the secret bitstream decryption keys of all FPGAs on the market were vulnerable to DPA and/or DEMA attacks that could extract the key in anywhere from a few milliseconds to, at most, several minutes. Because the bitstream key is shared across all the devices in a design project, DPA typically only needs to be done once per project.
While DPA and DEMA form the basis of some very powerful attacks, effective countermeasures have been developed to mitigate this vulnerability. Many of these countermeasures were invented and patented by engineers at Cryptography Research, Inc. (CRI, now a division of Rambus), which licenses its extensive DPA patent portfolio to companies worldwide, including (since 2010) Microsemi. Side channel attacks are so pervasive and devastating to electronic security that in 2015 over 9 billion chips were produced under CRI DPA patent licenses to mitigate this class of vulnerability: everything from bank payment cards and terminals to set-top boxes, game consoles, postage meters, electric meters… and, in the field of programmable logic devices, only Microsemi FPGAs.
DPA countermeasures can be challenging to “get right,” and some critical information may still leak from power or EM side channels if the patented techniques are not implemented correctly. To this end, CRI created an evaluation scheme called the Countermeasures Validation Program (CVP) for assessing the actual security of a device with DPA/DEMA countermeasures, accrediting a number of well-respected independent security laboratories worldwide to perform these assessments.
Microsemi’s SmartFusion2 and IGLOO2 FPGA families were assessed by Riscure’s DPA experts. The assessment included the analysis of hundreds of thousands of physical power and EM measurements. Based on their report, the DPA/DEMA resistance of all seven of the design security protocols used in these two families of FPGAs “is consistent with resistance to an attacker with high attack potential,” and were certified under the CVP scheme by CRI.
Microsemi is the only major FPGA company to have its FPGAs assessed and certified, giving Microsemi the right to use the CRI DPA Security Logo.
Without such a rigorous independent third-party assessment that the bitstream secret keys cannot be extracted from the FPGA, it is hard to have confidence that the bitstream will not be stolen, copied, or manipulated. This is especially true for SRAM FPGAs, whose bitstream is easily available in an external PROM and during loading.
In addition to countermeasures for DPA and other side channel analysis, the FPGA must have countermeasures for several other classes of attacks that can be used to extract the bitstream secret keys or otherwise subvert the design security. For example, a useful feature to provide a very high level of security for storing secret keys on-chip is a Physically Unclonable Function (PUF). Another important feature in any secure device is the ability to autonomously detect if it is under attack and “zeroize” any critical security parameters such as cryptographic keys.
Microsemi FPGAs are shipped with an X.509 public key certificate digitally signed by Microsemi. The associated Elliptic Curve Cryptography (ECC) P-384 private key is held securely inside the FPGA, protected by the aforementioned PUF. These private and public keys and certificates provide a means to securely identify devices, detect counterfeit devices and other types of supply-chain fraud, as well as a means to establish ephemeral keys during programming sessions that can be used to securely load other keys and bitstreams. Only Microsemi FPGAs ship with private keys and associated public key certificates.
An in-depth discussion of each of these attacks, security features, and countermeasures is well beyond the scope of this design note. This extended digression from production programming into DPA protection is only because DPA is such a pervasive and inexpensive attack that, if successful (and it essentially always is if countermeasures are not deployed), negates any benefit of secure production programming.
Secure Production Programming
FPGA device-level countermeasures provide security for FPGAs in fielded systems so, for example, the keys used for bitstream decryption are not compromised by DPA. This is an important element of an overall secure lifecycle of the FPGA IP and the cryptographic keys used to protect it.
It is equally important to protect the keys during initial production of the systems they are loaded into when the FPGA is initially configured with keys and design IP. As already noted, the production environment may include an insider threat element. Therefore, it would be foolish to load bitstream decryption keys to the FPGA in a production environment in a way where they could be compromised by simple monitoring or man-in-the-middle attacks. In the past, FPGAs supporting bitstream decryption loaded the decryption key in plaintext, where it could easily be compromised simply by monitoring the FPGA inputs during initial programming.
To protect keys throughout their full lifecycle, Microsemi has introduced the Secure Production Programming Solution (SPPS). SPPS allows the user to take advantage of the most secure key modes built into the FPGAs. SPPS consists of both hardware and software elements. When using SPPS, the user adds one Hardware Security Module (HSM) to the FPGA development environment, and one HSM to each production environment. SPPS includes software (SW) tools that run on the development and production computer workstations and on the HSMs themselves. These SPPS SW tools are used to configure the desired FPGA security policy, program the desired number of devices in the production setting, and keep an audit trail of what transpired.
Hardware security modules are specialized computers for high-security applications, such as those applications requiring cryptography. They are specially designed to be tamper resistant (hard to break into) and tamper evident (that is, it is easy to tell if an attempted break-in has occurred). If tampering is detected, the HSM will zeroize all the keys it is storing and production will halt: a strong incentive to not tamper with them. They usually have built-in cryptographic functions including a cryptographic-quality true random bit generator that can be used to create high quality cryptographic keys and nonces. Their security attributes are assessed by third-party security laboratories that specialize in this. The most common certification scheme for HSMs is the U.S. National Institute of Standards and Technology (NIST) Federal Information Processing Standard 140, version 2 (FIPS 140-2). FIPS 140-2 specifies four security levels from one (lowest) to four (highest). There are a few well-known commercial HSM suppliers worldwide.
Users of SPPS have all their bitstream cryptographic keys and passcodes generated within the certified security boundary of the Thales HSMs. The keys are used exclusively inside the HSM, to encrypt a bitstream, for example; keys never leave the HSM except in encrypted form for storage on the HSM’s host computer.
Details of the Secure Production Programming Solution (SPPS) are described below.
As seen in the following diagram, the user develops, verifies, and debugs their FPGA design as usual using the Libero™ SoC Design Suite and associated OEM third-party EDA tools. However, for SPPS users, when the time comes to prepare the final files that will be sent to the less trusted production environment, the bitstream is sent by the tools to the HSM unit, where the appropriate keys and passcodes are generated and the bitstream is encrypted. Also generated is an encrypted “job file” that includes the bitstream key that was used and specifies production parameters – most importantly, the number of times the bitstream is authorized to be programmed into FPGAs.
The job file is safely transmitted from the design center to the manufacturing location using any appropriate method, including over the internet (FTP) or by compact disk, because it and the bitstream are protected by encryption.
In the production setting, the programmer is presented with FPGAs to program, typically already soldered to the system boards. The programmer reads the X.509 public key certificate from the FPGA through its JTAG or SPI programming port and validates the Microsemi signature, proving the certificate is genuine. Then the programmer, with the aid of the HSM, requires the FPGA to prove it possesses the associated private key (without revealing it), thus proving the certificate belongs to that specific FPGA and has not been copied from another FPGA. Some other checks are also done, such as matching the device serial number to that recorded in the certificate.
Using the device’s private and public keys, and also fresh key pairs generated by both the FPGA and the HSM (using their respective internal true random bit generators), a shared ephemeral session key is securely established using an algorithm called Elliptic Curve Diffie-Hellman (ECDH), named after the inventors of asymmetric cryptography. This shared secret key, like the device keys it is derived from, is unique to each FPGA. It is used to securely load the user’s long-term bitstream decryption keys and passcodes into the FPGA, where they are stored in encrypted form in nonvolatile memory. Unlike prior methods, the payload keys are authenticated and encrypted during transit using the session key, so they are never exposed in plaintext outside a hardware security boundary: the HSM on one end of the link, and the FPGA on the other.
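The key agreement at the heart of this step can be sketched in Python with tiny textbook curve parameters. This is purely illustrative: the 17-element toy curve and the variable names are assumptions for readability, whereas the actual devices use the NIST P-384 curve.

```python
# toy elliptic curve y^2 = x^3 + 2x + 2 over GF(17), base point (5, 1)
# of order 19 -- illustration only; SPPS devices use NIST P-384
P_MOD, A = 17, 2
G, ORDER = (5, 1), 19

def ec_add(P, Q):
    # affine-coordinate point addition; None is the point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    s %= P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# each side picks a fresh private scalar and exchanges only public points
hsm_priv, fpga_priv = 7, 11
hsm_pub = ec_mul(hsm_priv, G)
fpga_pub = ec_mul(fpga_priv, G)

# both ends derive the same shared point without revealing their secrets
assert ec_mul(hsm_priv, fpga_pub) == ec_mul(fpga_priv, hsm_pub)
```

The property being exercised is the commutativity of scalar multiplication: a·(b·G) = b·(a·G), so both parties reach the same session secret while an eavesdropper sees only the two public points.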
Because each session is unique to a single FPGA and the transmitted data cannot be used to program another FPGA, the production HSM can easily count the number of FPGAs that have been programmed, stopping once the authorized limit in the job file has been reached. Controlling the number of FPGAs that are programmed can help prevent overbuilding of systems containing them because the overproduced systems cannot be completed by the crooked manufacturer without the ability to program the FPGAs.
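The counting logic described above can be sketched as follows. The class and method names are hypothetical; in the real system this policy is enforced inside the HSM’s certified security boundary, not in host software.

```python
class JobAuthorization:
    """Toy model of a job file's device-count limit (illustrative only)."""

    def __init__(self, authorized_count):
        self.remaining = authorized_count
        self.programmed_serials = []   # audit trail of programmed devices

    def authorize(self, device_serial):
        # refuse to release per-device programming material once the
        # limit recorded in the job file has been reached
        if self.remaining == 0:
            raise PermissionError("job limit reached: possible overbuild")
        if device_serial in self.programmed_serials:
            raise PermissionError("device already programmed under this job")
        self.remaining -= 1
        self.programmed_serials.append(device_serial)

job = JobAuthorization(authorized_count=2)
job.authorize("SN-0001")
job.authorize("SN-0002")
try:
    job.authorize("SN-0003")           # third device is refused
except PermissionError:
    pass
```

Because each programming session is cryptographically bound to one physical device, simply replaying the data released for one FPGA cannot program another, which is what makes this counter meaningful.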
The FPGA creates what is called a Certificate-of-Conformance (C-of-C) that digitally proves to the HSM that the correct bitstream has been loaded and locked against overwriting. Certified log files are created by the production HSM that show the serial numbers of all programmed devices, and the disposition of failed programming sessions (if any).
Bitstreams used for field updates are encrypted by the HSM in the design environment using the keys it saved when the systems were originally produced. Such a bitstream can only be loaded into FPGAs previously initialized with the same keys. With these keys, the bitstream – protected by encryption during transit – is authenticated and then decrypted inside the FPGA in the field.
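The authenticate-then-decrypt order matters: a tampered update must be rejected before any part of it is acted on. The sketch below shows that encrypt-then-MAC structure using an HMAC tag and a hash-based toy keystream purely for illustration; the function names are assumptions, and the devices themselves use standard ciphers such as AES rather than this toy construction.

```python
import hashlib
import hmac

def _keystream(key, n):
    # toy counter-mode keystream built from SHA-256 (illustration only)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def protect_bitstream(enc_key, mac_key, bitstream):
    # encrypt, then append an authentication tag over the ciphertext
    ct = bytes(a ^ b for a, b in
               zip(bitstream, _keystream(enc_key, len(bitstream))))
    return ct + hmac.new(mac_key, ct, hashlib.sha256).digest()

def load_bitstream(enc_key, mac_key, blob):
    ct, tag = blob[:-32], blob[-32:]
    # authenticate first; only a genuine update is ever decrypted
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("field update failed authentication")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, len(ct))))

ek, mk = b"enc-key-demo", b"mac-key-demo"
update = b"example configuration bitstream"
assert load_bitstream(ek, mk, protect_bitstream(ek, mk, update)) == update
```

Verifying the tag over the ciphertext before decrypting means a modified or substituted update is rejected without ever being interpreted, closing off a class of chosen-ciphertext tricks.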
Protecting the security of the design IP used to configure an FPGA and program an internal CPU (if one exists) requires a comprehensive combination of techniques that span the entire lifecycle of the design, from manufacturing to fielded systems. These techniques include cryptography, antitamper countermeasures (providing resistance to DPA), and secure key storage mechanisms (like the PUF) in the FPGA itself. Hardware security modules (HSMs) are used in the development and production environments to manage keys and enforce security and business policies.
Together, these elements compose Microsemi’s Secure Production Programming Solution (SPPS), a world-class system for protecting the FPGA user’s design IP and cryptographic keys at all stages. Without each of these interacting elements, design security can be compromised in some way. No other FPGA manufacturer offers SPPS or its equivalent, nor many of these FPGA countermeasures and security assurances (such as independent third-party certifications) – making Microsemi’s SmartFusion2 flash SoC FPGAs and IGLOO2 flash FPGAs, when used with SPPS, the best FPGAs on the market for protecting design IP from threats such as IP theft, cloning, reverse engineering, and malicious modification, and for preventing overbuilding of systems that contain them.