Hardware-assisted root of trust mechanisms and coreboot internals

I started working for 9elements in October 2020, and my first assignment was to get Intel CBnT working on the OCP Deltalake using coreboot firmware. Intel Converged Bootguard and TXT (CBnT) is a hardware-assisted method to set up a root of trust. In this blog post I will discuss some of the changes needed in coreboot to get this working. Setting up CBnT properly was definitely a challenge, but the work did not stop there. While Intel CBnT provides a method to verify or measure the initial start-up code, that alone is not enough. You want to trust the code you run from start, the reset vector, to end, typically a bootloader. CBnT only takes care of the start. You have to make sure that each software component trusts the assets it uses and the next program it loads. This concept is called a chain of trust. Now, in 2021, I have an assignment that involves supporting the older Intel Bootguard technology. Since Bootguard is very similar to CBnT, I'll also touch on that.

Intel Converged Bootguard and TXT: a root of trust

Intel CBnT merges the functionality provided by TXT and BtG into one Authenticated Code Module (ACM). This is a code module signed by Intel that runs on the main CPU before the traditional x86 reset vector at address 0xFFFFFFF0. The job of this ACM is to measure and/or verify the main firmware, depending on the 'profile' that has been set up in the Intel Management Engine (ME). The profile determines what happens in case of a measurement or verification error. Strong policies halt the system entirely in case of failure, while weaker ones just report errors but continue booting. The latter is often desirable for servers in a hyperscaler setup, so that the system admin can still run diagnostics on the system.

A few different components are needed in a working CBnT setup. The ACM was already mentioned. The hardware finds the ACM through a pointer in the Intel Firmware Interface Table (FIT), which itself is found via a pointer at the fixed location 0xFFFFFFC0 (a sketch of how the FIT is walked follows the list below). Other necessary components are the Key Manifest (KM) and the Boot Policy Manifest (BPM), which are also found via the FIT.

The chain of trust is started in the following way: the Intel ME has a fuse which holds the hash of the public key of the KM. For testing CBnT this can be set up with 'fake-fusing', where the hash can still be changed afterwards; on production systems the hash is permanently fused. The ACM compares that fused hash to the public key inside the KM, which is signed with the KM private key. The KM itself holds a hash of the BPM public key, which is compared to the public key stored in the BPM. The BPM is signed with the BPM private key. The role of the BPM is to define which segments of the firmware form the Initial Boot Block (IBB). The BPM contains a digest of the IBBs and as such establishes a root of trust if the digest in the BPM matches what is on the flash. More in-depth info on this can be found in the Intel Boot Guard documentation. This is what Bootguard also did. What CBnT offers on top is the TXT functionality: the IBBs are measured into PCR0 of the TPM. Other TXT functionality, like clearing memory or locking down the platform before setting up DRTM with the SINIT ACM, is also provided by the CBnT ACM. See the TCG DRTM specification for more info on this. Merging the TXT functionality makes CBnT ACMs much bigger than BtG ACMs (256K vs 32K, depending on the platform).

The KM and BPM separation assumes that there is one hardware owner, but multiple OEMs. The hardware always gets fused with the key of the owner. Each OEM might want to roll their own firmware and has its own BPM key. The owner then creates a KM with the OEM's BPM key hash inside it. The OEM can then generate their own BPM that matches the firmware they intend to use. KM and BPM also provide security version numbers that can be enforced, so the same hardware can have different OEMs during its lifetime, and the previous OEM won't be able to generate a working CBnT image for the new OEM.

The strength of a CBnT setup and the trustworthiness it provides lie in the following things:
  • the signature verification of the ACM done by the ME/Microcode (Intel)
  • ACM signing key remaining private (Intel)
  • the verification done by the ACM, which is a closed binary (at 256K, not exactly small)
  • KM keys remaining private (Manufacturer)
  • OEM keys remaining private (OEM); the KM security version number can however be updated to make OEM keys obsolete
  • the IBB continuing the chain of trust, which is what the next section is about
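
To make the FIT lookup mentioned above concrete, here is a hedged C sketch of walking the table, based on Intel's publicly documented FIT layout. The struct, macro names and the lookup helper are illustrative, not coreboot's actual fit code:

```c
#include <stddef.h>
#include <stdint.h>

#define FIT_POINTER_ADDR 0xFFFFFFC0u	/* fixed location of the FIT pointer */
#define FIT_TYPE_HEADER      0x00
#define FIT_TYPE_MICROCODE   0x01
#define FIT_TYPE_STARTUP_ACM 0x02
#define FIT_TYPE_KM          0x0B	/* Key Manifest */
#define FIT_TYPE_BPM         0x0C	/* Boot Policy Manifest */

struct fit_entry {
	uint64_t address;	/* where the component lives */
	uint8_t  size[3];	/* 24-bit size field */
	uint8_t  reserved;
	uint16_t version;
	uint8_t  type : 7;
	uint8_t  checksum_valid : 1;
	uint8_t  checksum;
} __attribute__((packed));

/* Find the first FIT entry of a given type, e.g. the Startup ACM. */
static const struct fit_entry *fit_find(uint8_t type)
{
	/* The FIT pointer field is architecturally 64 bits, but firmware
	 * mapped below 4GiB only needs the low 32 bits. */
	uint32_t fit_addr = *(const volatile uint32_t *)(uintptr_t)FIT_POINTER_ADDR;
	const struct fit_entry *fit = (const struct fit_entry *)(uintptr_t)fit_addr;
	/* Entry 0 is the '_FIT_   ' header; its size field holds the
	 * total number of entries. */
	size_t num = fit[0].size[0] | (fit[0].size[1] << 8) | (fit[0].size[2] << 16);

	for (size_t i = 1; i < num; i++) {
		if (fit[i].type == type)
			return &fit[i];
	}
	return NULL;
}
```

The Startup ACM, KM and BPM are all located through entries of this table, which is why a CBnT image is assembled around the FIT.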

Chain of trust

coreboot essentially supports two different methods for setting up a chain of trust: measured boot and verified boot.

Verified boot is conceptually easier to grasp. Each software component that runs after the bootblock is signed with a private key. The trusted bootblock holds the corresponding public key and uses it to verify the integrity of the next program. If the signature does not match the binary, the firmware can report an error to let the user know that their system cannot be trusted, or the boot process can even be halted entirely if that is desired.

Securing your firmware using measured boot works a bit differently and involves a TPM. The idea is that before a component is used, it is digested, and the hash is stored in the extend-only PCR registers of the TPM (a minimal sketch of this follows at the end of this section). When the boot process is done, the user reads out the TPM, and if the PCRs don't match what is expected, the firmware has been tampered with. A common use case is reading out the TPM remotely, independently of the host: if the TPM PCR values don't match, that system won't be allowed access to the main network. The admin can then take a look at that system, fix potential issues and allow it back online once those have been resolved.

Historically, Google first added verified boot to coreboot with their VBOOT implementation. Measured boot was later added as an optional feature of VBOOT, and can now be used independently from Google's VBOOT. Google's VBOOT was built with ChromeOS devices in mind. Those devices use widely varying hardware: different SoCs from Intel, AMD and many ARM SoC vendors. To support all those different SoCs, Google uses a common flash-based root of trust. The root of trust lies in a read-only region of the flash, a feature of some SPI flash parts, which is kept in place by holding the /WP pin low. The verification mechanism resides in the read-only region and verifies the firmware in the FW_MAIN_A/B FMAP partitions. The RO region of the flash also holds a full firmware copy for recovery, which is likewise considered trusted.

Such a VBOOT setup does not work that well with a root of trust method like CBnT/Bootguard. With Bootguard, some initial parts of the firmware are marked as IBB and the ACM verifies those. It's up to the firmware and the assets in those IBBs to continue the chain of trust and verify the next components that will be loaded. A fully trusted recovery image in the 'RO' region would need to be marked as IBB, but CBnT/Bootguard were not designed for that, as the IBBs would then be too big. Only code that is required to set up a chain of trust ought to be marked as IBB, not a full firmware in the 'RO' region. In more practical terms: if the bootblock is marked as IBB with Bootguard, the romstage that comes after it cannot be a romstage in the 'RO' FMAP region, as there is no verification on it. VBOOT needs to be modified to only load things from FW_MAIN_A/B. Just not populating the RO FMAP with a romstage is not sufficient: an attacker could take a working Bootguard image and manually add a romstage to the RO cbfs. The solution is to disable the option for a full recovery bootpath in VBOOT. A note about the future: some work is being done to have per-cbfs-file verification. This would fit the Bootguard use case much better, as it removes the need to be careful about what cannot be in the VBOOT RO region.
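
Coming back to the measured boot flow described at the top of this section, here is a minimal sketch of the measure-then-extend step, assuming hypothetical sha256() and tpm_extend_pcr() helpers (coreboot's real TPM and hashing APIs differ):

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed helpers; real implementations come from the crypto and TPM stacks. */
extern void sha256(const void *data, size_t len, uint8_t digest[32]);
extern void tpm_extend_pcr(unsigned int pcr, const uint8_t digest[32]);

/* Measure a component *before* running or parsing it. The TPM computes
 *   PCR_new = SHA256(PCR_old || digest)
 * so the final PCR value encodes every component measured, in order, and
 * cannot be rolled back to a chosen value by later code. */
static void measure_component(unsigned int pcr, const void *buf, size_t len)
{
	uint8_t digest[32];

	sha256(buf, len, digest);
	tpm_extend_pcr(pcr, digest);
}
```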
Another difficulty lies in what to mark as IBB. An obvious one is the bootblock, as that code gets executed 'first' (well, after the ACM has run). But the bootblock accesses other assets, typically quite early in the bootflow. The CPU starts in a bare state: there is no RAM! A solution is to use CPU cache as RAM. This setup is rather tricky and the details are not always public, so sometimes you are obliged to use Intel's FSP-T to set up an environment in which you can execute C code. Calling FSP-T therefore happens in assembly, and for this reason verification of the FSP-T binary cannot happen this early. Even finding FSP-T causes problems! FSP-T is a cbfs file, and to find cbfs files you have to walk the image from bottom to top until you find the proper file. This is prone to attacks: someone can modify the image/cbfs such that other, non-trusted code gets run instead. The solution is to place FSP-T at an address known at buildtime, which the bootblock code jumps to. FSP-T also needs to be trusted, so it has to be marked as IBB and verified by the ACM.

OK, now we are in a C environment, but we are not there yet. We need to set things up such that we can verify the next parts of the boot process. For that we need the public key, which lies in the "GBB" fmap partition. FMAP partitions are found via the "FMAP" fmap partition, whose location is known at buildtime. So again, both "FMAP" and "GBB" need to be marked as IBB, to be verified by the ACM. With VBOOT there is the option to do the verification in a separate stage, verstage. The same problem applies here: it's a cbfs file which can only be found at runtime. Here the solution is to link most of the verstage code into the bootblock. As it turns out, this is even a good idea for most x86 platforms using a Google VBOOT setup: you have one stage less, and therefore less code duplication. It saves some space and is likely a tiny bit faster, as less flash needs to be accessed, which is a slow operation. Other things are done in the bootblock, like setting up a console. The verbosity of the console is sometimes fetched with board-specific methods relying on other parts of the flash. So again, this needs to be fetched from a location known at buildtime and marked as an IBB, or simply avoided or done later in the boot process.

The conclusion is that all assets used before the chain of trust setup code runs (VBOOT setup or measured boot TPM setup) need to be referenced statically: searching for them cannot be done, and they need to be marked as IBB with Bootguard (see the sketch below).
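
As a hedged illustration of that conclusion, compare a runtime CBFS walk with a build-time pinned address. All names and the address below are made up for the example; the real mechanics live in coreboot's bootblock and build system:

```c
#include <stdint.h>

/* Before verification is up, locating a file like this means parsing CBFS
 * headers that an attacker controls. (Hypothetical lookup function.) */
extern void *cbfs_locate_file(const char *name);

/* Pinning FSP-T at a build-time constant that is itself covered by the
 * ACM's IBB digest avoids any untrusted parsing. */
#define FSP_T_BASE 0xFFF80000u	/* example build-time constant */

static void call_fsp_t(void)
{
	/* The real call happens in assembly before RAM exists; a C function
	 * pointer is used here only to illustrate the fixed-address jump. */
	void (*entry)(void) = (void (*)(void))(uintptr_t)FSP_T_BASE;
	entry();
}
```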

Converged security suite

CSS is an open-source project maintained by 9elements. It is written in Go, which makes it quite portable. It's a set of tools and libraries related to firmware and firmware security. One such tool is cbnt-prov. It is integrated into the coreboot build system and can properly set things up for Intel CBnT by generating a KM and BPM. It parses a coreboot image and automatically detects which segments need to be marked as IBB. It is however not just a coreboot-specific tool to glue things together for CBnT: it supports dumping information on the CBnT setup of generic UEFI images too. It can take an existing setup and turn it into a configuration file, which can be reused later on, for instance if you want to deploy the same firmware but with different keys. One last important feature is the ability to validate an existing image. We are working hard on an equivalent tool for Bootguard that will be called bg-prov, and we hope to get it ready for production soon. This is a big step forward in the usability of coreboot, as previously you were bound to proprietary tools provided by Intel that were only accessible under NDA and had usability issues, being Microsoft Windows executables. coreboot is the best open-source x86 firmware to this day, and having fully free and open-source software to cover the common use case of Intel Bootguard and CBnT makes coreboot a more attractive firmware solution. We hope that this improves its market adoption!

The Future of Open-Source Firmware on Server Systems

Open-source firmware for host processors is already quite prominent in the embedded world: a lot of systems run on u-boot, and nearly every Chromebook that is not older than 5 years runs coreboot (surprise, surprise!). In addition, Intel officially supports coreboot and their Firmware Support Package (FSP) in API mode, which is mandatory for coreboot support. So the future is looking bright for the embedded world - but what does the server market look like? Disclaimer: All information has been gathered from public documents or conference talks - none of it has been officially confirmed by any of the SoC vendors.
The answer is: It depends. It depends on whom you're talking to and which SoC vendor you are talking about.

Intel

Intel announced on the OCP Tech Week 2020 (again) that they will support FSP & coreboot on IceLake platforms and beyond - that means the new upcoming platform Sapphire Rapids SP will also be supported with coreboot and FSP. Intel made quite a transformation over the latest generations of Xeon-SP platforms. There is a proof of concept with coreboot and FSP on Skylake-SP on the two-socket server platform OCP Tioga Pass. This landed upstream in the coreboot repositories and is still maintained and functional. However, access to the FSP needed to build the platform is still under NDA, and it will not be maintained by Intel anymore.
OCP Community Logo - 9elements is supporting the OCP as an OCP Community Member
The next step was to enable coreboot on the current generation, Cooper Lake. This was done on the OCP Delta Lake server, which is a single-socket server running coreboot and FSP. Interestingly, the OCP Delta Lake is the first server to pass the OCP Open System Firmware (OCP OSF) Guidelines, which are now mandatory for new platforms to get the OCP Accepted Certification. More information on the OCP OSF initiative can be found online.
Summary: Intel roadmap for Xeon-SP platforms
On the latest OCP OSF project call from April 29th, 2021, AMI, one of the biggest closed-source IBVs, stated that it will get involved in Open System Firmware. This should give the OSF community even more confidence that Intel is holding to its open-source firmware strategy. What a bright future!

ARM

Ampere Computing is a regular guest on the OCP OSF call and presented their LinuxBoot solution at the latest OCP Tech Week 2020. Ampere's Arjun Khare also presented the latest open-source firmware efforts at FOSDEM 2021 - by nature, ARM has always been more open about firmware than the x86 SoC vendors. Ampere is definitely one of the companies to watch in the open-source firmware space. In general, ARM is picking up more and more speed in the server world and might move into the broader market.

AMD

Even though AMD is heavily involved in open-source firmware on their consumer platforms, nothing is publicly known yet about their efforts to support open-source firmware on server platforms. Our friends at 3mdeb gave a presentation at FOSDEM 2021 about the current state of OSF on AMD platforms. Bottom line: nothing is publicly known. However, AMD is hiring coreboot developers (mainly for their mobile line), and rumors go around that they're working on something. One main push could have been Ron's presentation at the Open Source Firmware Conference 2020 on booting an AMD Rome server board with open-source firmware - this caught quite some attention. Still, AMD has not made any information public - so we need to wait and see if there is more to come.

TL;DR

Intel is currently one of the top-pushing companies in the open-source firmware space. Also, the OCP's Open System Firmware initiative is redefining the boundaries for server systems - overall, we see quite some movement in the open-source firmware world. However, most of the information is still not publicly confirmed and can only be shared under NDA. We hope this changes in the future. 9elements has a good working relationship with various SoC vendors - we specialize in building open-source firmware for scalable server systems and are able to support the newest generations. We work on a regular basis with OCP and other scalable server systems. If you would like to talk about OSF on a server system - get in touch with us!

Announcing coreboot 4.14

coreboot 4.14 was released today, on May 10th, 2021.

Since 4.13 there have been 3660 new commits by 215 developers.
Of these, about 50 contributed to coreboot for the first time.
Welcome to the project!

These changes have been all over the place, so there's no
particular area to focus on when describing this release: We had
improvements to mainboards, to chipsets (including much-welcomed
work on open-source implementations of what were blobs before),
and to the overall architecture.

Thank you to all developers who made coreboot the great open source
firmware project that it is, and made our code better than ever.

New mainboards
--------------

* AMD Bilby
* AMD Majolica
* GIGABYTE GA-D510UD
* Google Blipper
* Google Brya
* Google Cherry
* Google Collis
* Google Copano
* Google Cozmo
* Google Cret
* Google Drobit
* Google Galtic
* Google Gumboz
* Google Guybrush
* Google Herobrine
* Google Homestar
* Google Katsu
* Google Kracko
* Google Lalala
* Google Makomo
* Google Mancomb
* Google Marzipan
* Google Pirika
* Google Sasuke
* Google Sasukette
* Google Spherion
* Google Storo
* Google Volet
* HP 280 G2
* Intel Alderlake-M RVP
* Intel Alderlake-M RVP with Chrome EC
* Intel Elkhartlake LPDDR4x CRB
* Intel shadowmountain
* Kontron COMe-mAL10
* MSI H81M-P33 (MS-7817 v1.2)
* Pine64 ROCKPro64
* Purism Librem 14
* System76 darp5
* System76 galp3-c
* System76 gaze15
* System76 oryp5
* System76 oryp6

Removed mainboards
------------------

* Google Boldar
* Intel Cannonlake U LPDDR4 RVP
* Intel Cannonlake Y LPDDR4 RVP

Deprecations and incompatible changes
-------------------------------------

### SAR support in VPD for Chrome OS

SAR support in VPD has been deprecated for Chrome OS platforms for > 1
year now. All new Chrome OS platforms have switched to using SAR
tables from CBFS. For the next release, coreboot is updated to align
with the Chrome OS factory changes and hence SAR support in VPD is
deprecated in [CB:51483](https://review.coreboot.org/51483). Starting
with this release, anyone building coreboot for an already released
Chrome OS platform with SAR table in VPD will have to extract the
"wifi_sar" key from VPD and add it as a file to CBFS using following
steps:
 * On DUT, read SAR value using `vpd -i RO_VPD -g wifi_sar`
 * In coreboot repo, generate CBFS SAR file using:
    `echo ${SAR_STRING} > site-local/${BOARD}-sar.hex`
 * Add to site-local/Kconfig:
    ```
     config WIFI_SAR_CBFS_FILEPATH
       string
       default "site-local/${BOARD}-sar.hex"
    ```

### CBFS stage file format change

[CB:46484](https://review.coreboot.org/46484) changed the in-flash
file format of coreboot stages to prepare for per-file signature
verification. As described in more detail in the commit message,
when manipulating stages in a CBFS, the cbfstool build must match the
coreboot image so that they're using the same format: coreboot.rom
and cbfstool must be built from coreboot sources that either both
contain this change or both do not contain this change.

Since stages are usually only handled by the coreboot build system
which builds its own cbfstool (and therefore it always matches
coreboot.rom) this shouldn't be a concern in the vast majority of
scenarios.

Significant changes
-------------------

### AMD SoC cleanup and initial Cezanne APU support

There's initial support for the AMD Cezanne APUs in the tree. This code
hasn't started as a copy of the previous generation, but was based on a
slightly modified version of the example/min86 SoC. During the cleanup
of the existing Picasso SoC code the common parts of the code were
moved to the common AMD SoC code, so that they could be used by the
Cezanne code instead of adding another slightly different copy.

### X86 bootblock layout

The static size C_ENV_BOOTBLOCK_SIZE was mostly dropped in favor of
dynamically allocating the stage size; the Kconfig option is still
available to set a fixed size and to enforce a maximum for selected
chipsets. Linker sections are now top-aligned for a reduced flash
footprint and to maintain the requirement of a near jump from the
reset vector.

### ACPI GNVS framework

SMI handlers for APM_CNT_GNVS_UPDATE were dropped; the GNVS pointer is
now passed to SMM from within SMM_MODULE_LOADER. Allocation and
initialisations of common ACPI GNVS table entries were largely moved to
one centralized implementation.

### Intel Xeon Scalable Processor support is now considered mature

Intel Xeon Scalable Processor (Xeon-SP) family [1] is designed
primarily to serve the needs of the server market.

coreboot support for Xeon-SP is in src/soc/intel/xeon_sp directory.
This release has support for SkyLake-SP (SKX-SP), which is the 2nd
generation, and for CooperLake-SP (CPX-SP), which is the 3rd and
latest generation [2] on the market.

With this release, the codebase for multiple generations of Xeon-SP
was unified and optimized:
* SKX-SP SoC code is used in the OCP TiogaPass mainboard [3]. Support
for this board is in proof-of-concept status.
* CPX-SP SoC code is used in the OCP DeltaLake mainboard. Support for
this board is in DVT (Design Validation Test) exit equivalent status.
Supported features, (performance/stability) test scopes, known issues,
and feature gaps are described in [4].


[1] https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html
[2] https://www.intel.com/content/www/us/en/products/docs/processors/xeon/3rd-gen-xeon-scalable-processors-brief.html
[3] ../mainboard/ocp/tiogapass.md
[4] ../mainboard/ocp/deltalake.md

Netboot.xyz is now Part of LinuxBoot

We recently worked on some patches to adopt netboot.xyz and integrate it into LinuxBoot - and they have now been merged.

Netboot.xyz

So - what is netboot.xyz? From their website:
netboot.xyz is a way to PXE boot various operating system installers or utilities from one place within the BIOS without the need of having to go retrieve the media to run the tool.
In our last blog article we already pointed out some development work and what motivated us - basically, we need a reliable way to install operating systems on machines sitting either somewhere in a rack not accessible to us, or which do not have any external USB ports. Our former way was to build a busybox image which downloads a disk image containing a minimal Linux operating system into RAM. Once downloaded, we would dd the image onto a hard drive - and off you go. However, that approach needed a lot of manual tooling and adjustment to the current platform we are working on - and netboot.xyz already has a process in place - so adapting this for u-root only seems logical. It's open-source, that's the idea, right?

netboot.xyz Image Generation Process

netboot.xyz already has an image processing and generation process in place which we will use to download the images from u-root.
Abstract Image Processing and Generation Process from netboot.xyz
All the assets generated by the netboot.xyz build pipelines are accumulated in one .yaml file which can be found on GitHub. This endpoints.yaml file contains kernel, initrd and squashfs locations in the following manner:
ubuntu-19.10-live-kernel:
    path: /ubuntu-core-19.10/releases/download/19.10-055f9330/
    files:
    - initrd
    - vmlinuz
    os: ubuntu
    version: '19.10'
[...]    
ubuntu-19.10-KDE-squash:
    path: /ubuntu-squash/releases/download/9854741e-b243fefb/
    files:
    - filesystem.squashfs
    os: ubuntu
    version: '19.10'
    flavor: KDE
    kernel: ubuntu-19.10-live-kernel
This endpoints.yaml file is used to build the u-root netboot.xyz menu:
netboot.xyz menu in u-root
Typing the number of the OS opens the submenu which lets you choose the version of the operating system you want to boot.
Typing in e.g. 07 will boot Debian 10 Core. Be aware - only some major distributions have been tested and verified working - everything in the Other menu can be deemed experimental and might not work properly.
netboot.xyz provides you a convenient way to boot into a live system on your machine. As we work a lot with server machines to which we do not have direct hardware access, merging netboot.xyz into u-root gives us an easy way to install an operating system on a remote machine during development. If you'd like to know more about netboot.xyz, check out their homepage. The corresponding code in u-root can be found here. If you'd like to talk with us about firmware - feel free to contact us!

Converged Security (CBnT) coreboot support and tooling

We are happy to announce that the Converged Security Suite got some major updates and arrived at release version 2.6.0. Not only does this release provide new features, it also comes with a new look! Thanks to the design team at 9elements.com, we now have our own logo.
Some time ago we renamed the TXT-Suite to Converged Security Suite in order to cover more than just Intel Trusted Execution Technology. Now we are making the first step in this direction by releasing the new cbnt-prov tool as part of the 9elements CSS. We at 9elements are passionate about security and open-source firmware - and we had the chance to enable Intel Converged Boot Guard and TXT on coreboot-based platforms. Our development platform was the new OCP Deltalake from Facebook, which was presented at the OCP Virtual Summit 2020.

Intel Converged Boot Guard and TXT

Intel introduced CBnT as an addition to the already present Intel Trusted Execution Technology and Intel Boot Guard. The plan was to merge both technologies into one - namely CBnT. Intel relies on so-called Authenticated Code Modules (ACMs), which get executed by the CPU and are signed by Intel - so that only Intel-signed ACMs can run on a very specific set of CPUs. Prior to CBnT, there were two separate ACMs for either Boot Guard or TXT. With CBnT, Intel merged both such that there are now two types of ACMs: the Startup ACM and the SINIT ACM. The Startup ACM establishes the static root of trust, whereas the SINIT ACM empowers the dynamic root of trust.

In the previous version of Intel TXT, the trust anchor was the TPM. Intel CBnT moves that trust anchor into the Intel Management Engine. In addition, the policies used to be defined in the non-volatile part of the TPM, the NVRAM. With CBnT, Intel introduced changes such that you now have two structures to configure Intel CBnT. The first structure is the Key Manifest (KM). The KM closes the gap between the Intel Management Engine and the firmware. On one hand, the KM contains a hash of the public key with which the second structure, the Boot Policy Manifest (BPM), has been signed - to validate that only BPMs signed with a certain private key are deemed valid. On the other hand, the KM itself is also signed - and the hash of the public key of the KM signing key is burned into the ME.
Chain of Trust for Intel CBnT
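
Conceptually, the chain the Startup ACM walks looks like the following hedged pseudo-C sketch. All structures and helpers are hypothetical simplifications of the real manifest formats:

```c
#include <stdbool.h>
#include <stdint.h>

struct km  { uint8_t pubkey[384]; uint8_t bpm_key_hash[32]; /* + signature */ };
struct bpm { uint8_t pubkey[384]; uint8_t ibb_digest[32];   /* + signature */ };

extern bool hash_matches_fuse(const uint8_t *pubkey); /* vs. ME-fused hash */
extern bool hash_matches(const uint8_t *hash, const uint8_t *pubkey);
extern bool sig_valid(const void *obj, const uint8_t *pubkey);
extern bool ibb_digest_matches_flash(const uint8_t *digest);

static bool verify_boot_policy(const struct km *km, const struct bpm *bpm)
{
	if (!hash_matches_fuse(km->pubkey))	/* 1. fused hash -> KM key */
		return false;
	if (!sig_valid(km, km->pubkey))		/* 2. KM signature */
		return false;
	if (!hash_matches(km->bpm_key_hash, bpm->pubkey)) /* 3. KM -> BPM key */
		return false;
	return sig_valid(bpm, bpm->pubkey) &&	/* 4. BPM signature */
	       ibb_digest_matches_flash(bpm->ibb_digest); /* 5. IBB digest */
}
```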

CBnT-Prov Tooling

The newly introduced cbnt-prov tooling can be used to generate and sign the Key Manifest and the Boot Policy Manifest for Intel CBnT. It can also generate the keys needed for signing the manifests and stitch them back into the firmware. The cbnt-prov tooling is firmware agnostic - it does not care whether you use UEFI or open-source firmware like coreboot. It works with any firmware as long as the firmware respects the Intel CBnT and FIT specifications. Extensive documentation on how to build and use the cbnt-prov tooling can be found here. We also landed a couple of patches to integrate this tooling directly into the coreboot toolchain, such that one can seamlessly build coreboot with CBnT support enabled - all needed structures can be generated automatically through the build chain, or optionally one can hand in just the binaries. This enables customers to take full control over the CBnT provisioning process with open-source code - transparent and open.

coreboot Support

As mentioned earlier, we landed a couple of patches to integrate CBnT into coreboot - not only the CBnT technology itself; the KM and BPM can also be generated through the toolchain.
CBnT Support in coreboot
In the coreboot > Security menu, one can now enable Intel CBnT support. Once enabled, the user needs to point to the Startup ACM and SINIT ACM locations, and needs to define whether the KM and BPM should be generated, optionally signed, or handed in as binaries.
CBnT Tooling support in coreboot
Once the user has defined the KM and BPM options, the coreboot toolchain will build, sign and integrate the KM and BPM structures automatically into the coreboot firmware image - easy!

Why Open-Source Tooling Matters

The Converged Boot Guard and TXT (CBnT) technology is the backbone of your firmware security. It secures what is under your control - the first code that runs on the platform, the so-called Initial Boot Block (IBB) - and builds up the chain of trust for your hardware. Based on the measurements taken by your firmware and the CBnT technology, your security model decides whether the machine is trusted or not. The structures defining those parameters are placed in the KM and the BPM. A wrongly configured KM or BPM can either brick your machine so that your infrastructure does not boot up anymore - or, even worse, can introduce security flaws which open up an attack window. Thus the owner of the machine should have full control over what is configured on the machine and, even more important, the ability to check what has been configured - to verify the correctness of the applied configuration. These goals can only be achieved through open-source tooling - giving the owner of the hardware full transparency into what has been configured, and the ability to configure the machine to their needs.

Get Involved

The tooling can be found in our repo here: https://github.com/9elements/converged-security-suite/
We are currently working on a CBnT test suite and BootGuard provisioning - so expect more updates here soon! Do you need help with your firmware project? Or want to talk about firmware security with us? Contact us!

A short journey to x86 long mode in coreboot on recent Intel platforms

While it was difficult to add initial x86_64 support to coreboot, as described in my last blog article how-to-not-add-x86_64-support-to-coreboot, it was way easier on real hardware. During the OSFC we did a small hackathon at 9elements and got x86_64 working in coreboot on recent Intel platforms. If you want to test new code that deals with low-level stuff, like enabling x86_64 mode in assembly, it's always good to test it on QEMU using KVM. It runs the code in ring 0 instead of emulating every single instruction and thus is very close to bare-metal machines.

$ qemu-system-x86_64 -M q35 -accel kvm -bios build/coreboot.rom

But, running coreboot's x86_64 code on KVM gave more magic errors than you could find in books about some famous magician with a scarf on his forehead. To summarize:
  • On recent AMD platforms it stops after entering x86_64 long mode.
  • On older Intel platforms everything works.
  • On recent Intel platforms, after entering long mode, every instruction causes a fault, and thus the instruction is emulated by the kernel, which doesn't handle FPU instructions that well...
  • On recent Intel platforms the MMU aborts walking page tables and returns the data within the page table itself when looking up some virtual addresses...
Tests on real hardware, in this case the X11SSH-TF, showed none of the problems above. After fixing integer-to-void-pointer conversions (CB:48166), (CB:48167), (CB:48177) and adding assembly code to enable long mode in Cache-As-RAM (CB:48170), it booted straight to romstage. First boot console log:

coreboot-4.13-241-g52ab788549-dirty Tue Dec 1 18:23:08 UTC 2020 bootblock starting (log level: 7)...
CPU: Intel(R) Xeon(R) CPU E3-1240 v6 @ 3.70GHz
CPU: ID 906e9, Kabylake H B0, ucode: 000000d5
CPU: AES supported, TXT supported, VT supported
MCH: device id 5918 (rev 05) is Kabylake DT 2
PCH: device id a149 (rev 31) is Skylake PCH-H C236
IGD: device id ffff (rev ff) is Unknown
FMAP: Found "FLASH" version 1.1 at 0xb10000.
FMAP: base = 0xff000000 size = 0x1000000 #areas = 4
FMAP: area COREBOOT found @ b10200 (5176832 bytes)
CBFS: Found 'fallback/romstage' @0x80 size 0xe334
BS: bootblock times (exec / console): total (unknown) / 53 ms

coreboot-4.13-241-g52ab788549-dirty Tue Dec 1 18:23:08 UTC 2020 romstage starting (log level: 7)...
pm1_sts: 0900 pm1_en: 4000 pm1_cnt: 00000000
gpe0_sts[0]: 00000000 gpe0_en[0]: 00000000
gpe0_sts[1]: 00000000 gpe0_en[1]: 00000000
gpe0_sts[2]: 00000000 gpe0_en[2]: 00000000
gpe0_sts[3]: 00000000 gpe0_en[3]: 00000000
TCO_STS: 0000 0000
GEN_PMCON: e0810200 000018c8
GBLRST_CAUSE: 00000002 00000000
prev_sleep_state 0
FMAP: area COREBOOT found @ b10200 (5176832 bytes)
CBFS: Found 'fspm.bin' @0x5fdc0 size 0x63000
POST: 0x34
FMAP: area RW_MRC_CACHE found @ b00000 (65536 bytes)
MRC: no data in 'RW_MRC_CACHE'
No memory dimm at address A2
No memory dimm at address A4
POST: 0x36
POST: 0x92

It hung at entering FSP-M which, as it's a binary blob, wasn't automatically recompiled to x86_64. A wrapper (CB:48175), written in assembly, fixed the problem by falling back to x86_32 when calling into FSP. The wrapper automatically switches back into x86_64 mode when the function returns. This is slow, but as we don't have proper blobs there's no other way around it. Memory init console log:

coreboot-4.13-242-g04129be978-dirty Tue Dec 1 18:42:20 UTC 2020 romstage starting (log level: 7)...
pm1_sts: 0900 pm1_en: 0000 pm1_cnt: 00000000
gpe0_sts[0]: 00000000 gpe0_en[0]: 00000000
gpe0_sts[1]: 00000000 gpe0_en[1]: 00000000
gpe0_sts[2]: 00000000 gpe0_en[2]: 00000000
gpe0_sts[3]: 00000000 gpe0_en[3]: 00000000
TCO_STS: 0000 0000
GEN_PMCON: e0810200 000018c8
GBLRST_CAUSE: 00000002 00000000
prev_sleep_state 0
FMAP: area COREBOOT found @ b10200 (5176832 bytes)
CBFS: Found 'fspm.bin' @0x5fdc0 size 0x63000
POST: 0x34
FMAP: area RW_MRC_CACHE found @ b00000 (65536 bytes)
MRC: no data in 'RW_MRC_CACHE'
No memory dimm at address A2
No memory dimm at address A4
POST: 0x36
POST: 0x92
POST: 0x98
FspMemoryInit returned 0x80000002
POST: 0xe3
FspMemoryInit returned an error!

The FSP was now able to run, but it returned the error Invalid parameter. This was due to the fact that FSP's config structures contain void pointers, which have a different size on x86_64 and don't match what the FSP expects. Fixing those headers is an ongoing task, but it was hacked around for now.
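
To illustrate the mismatch (the field name and structs are made up for the example; the real UPD headers are generated from the FSP sources):

```c
#include <stdint.h>

struct fspm_upd_broken {
	void *stack_base;	/* 8 bytes when coreboot is built for x86_64,
				   shifting every later field out of place */
};

struct fspm_upd_fixed {
	uint32_t stack_base;	/* always 4 bytes, matching the 32-bit blob */
};
```

With that hacked around, the boot continued. SMM stack trash console log: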

IOAPIC: Initializing IOAPIC at 0xfec00000
IOAPIC: Bootstrap Processor Local APIC = 0x00
IOAPIC: ID = 0x02
PCI: 00:1f.0 init finished in 9 msecs
POST: 0x75
POST: 0x75
PCI: 00:1f.2 init
RTC Init
Set power on after power failure.
Disabling ACPI via APMC.

coreboot-4.13-241-g52ab788549-dirty Tue Dec 1 18:23:08 UTC 2020 smm starting (log level: 7)...
SMI_STS: PM1 APM
SMI#: ACPI disabled.
canary 0xcdcdcdcd7f9ff800 != 0x7f9ff800
SMM Handler caused a stack overflow

Finally it booted into SMM, but crashed due to stack trashing. That turned out to be a false positive: the stack canary is the size of a void pointer and is written in x86_32 assembly, but checked in x86_64 C code, and thus the check failed. Writing 4 additional bytes in assembly code fixed the stack canary check and it finally booted. (CB:48215) Patch:

/* Write canary to the bottom of the stack */
movl	stack_size, %eax
subl	%eax, %ebx	/* %ebx(stack_top) - size = %ebx(stack_bottom) */
movl	%ebx, (%ebx)
+#if ENV_X86_64
+	movl	$0, 4(%ebx)
+#endif

Summarizing: it took about a day to add x86_64 support, and half of the code needed to be written in assembly. With those patches in place it should be easier to port additional platforms to x86_64, reducing the time to a few hours. I invite everyone to play with the changes, hack the code, and improve it to make this open-source project even more awesome.

Announcing coreboot 4.13

coreboot 4.13 was released on November 20th, 2020.

Since 4.12 there were 4200 new commits by over 234 developers. Of these, about 72 contributed to coreboot for the first time.

Thank you to all developers who again helped make coreboot better than ever, and a big welcome to our new contributors!

New mainboards

  • Acer G43T-AM3
  • AMD Cereme
  • Asus A88XM-E FM2+
  • Biostar TH61-ITX
  • BostenTech GBYT4
  • Clevo L140CU/L141CU
  • Dell OptiPlex 9010
  • Example Min86 (fake board)
  • Google Ambassador
  • Google Asurada
  • Google Berknip
  • Google Boldar
  • Google Boten
  • Google Burnet
  • Google Cerise
  • Google Coachz
  • Google Dalboz
  • Google Dauntless
  • Google Delbin
  • Google Dirinboz
  • Google Dooly
  • Google Drawcia
  • Google Eldrid
  • Google Elemi
  • Google Esche
  • Google Ezkinil
  • Google Faffy
  • Google Fennel
  • Google Genesis
  • Google Hayato
  • Google Lantis
  • Google Lindar
  • Google Madoo
  • Google Magolor
  • Google Metaknight
  • Google Morphius
  • Google Noibat
  • Google Pompom
  • Google Shuboz
  • Google Stern
  • Google Terrador
  • Google Todor
  • Google Trembyle
  • Google Vilboz
  • Google Voema
  • Google Volteer2
  • Google Voxel
  • Google Willow
  • Google Woomax
  • Google Wyvern
  • HP EliteBook 2560p
  • HP EliteBook Folio 9480m
  • HP ProBook 6360b
  • Intel Alderlake-P RVP
  • Kontron COMe-bSL6
  • Lenovo ThinkPad X230s
  • Open Compute Project DeltaLake
  • Prodrive Hermes
  • Purism Librem Mini
  • Purism Librem Mini v2
  • Siemens Chili
  • Supermicro X11SSH-F
  • System76 lemp9

Removed mainboards

  • Google Cheza
  • Google DragonEgg
  • Google Ripto
  • Google Sushi
  • Open Compute Project SonoraPass

Significant changes

Native refcode implementation for Bay Trail

Bay Trail no longer needs a refcode binary to function properly. The refcode was reimplemented as coreboot code, which should be functionally equivalent. Thus, coreboot only needs to run the MRC.bin to successfully boot Bay Trail.

Unusual config files to build test more code

There’s some new highly-unusual config files, whose only purpose is to coerce Jenkins into build-testing several disabled-by-default coreboot config options. This prevents them from silently decaying over time because of build failures.

Initial support for Intel Trusted eXecution Technology

coreboot now supports enabling Intel TXT. Though it’s not feature-complete yet, the code allows successfully launching tboot, a Measured Launch Environment. It was tested on Haswell using an Asrock B85M Pro4 mainboard with TPM 2.0 on LPC. Though support for other platforms is still not ready, it is being worked on. The Haswell MRC.bin needs to be patched so as to enable DPR. Given that the MRC binary cannot be redistributed, the best long-term solution is to replace it.

Hidden PCI devices

This new functionality takes advantage of the existing ‘hidden’ keyword in the devicetree. Since no existing boards were using the keyword, its usage was repurposed to make dealing with some unique PCI devices easier. The particular case here is Intel’s PMC (Power Management Controller). During the FSP-S run, the PMC device is made hidden, meaning that its config space looks as if there is no device there (Vendor ID reads as 0xFFFF_FFFF). However, the device does have fixed resources, both MMIO and I/O. These were previously recorded in different places (MMIO was typically an SA fixed resource, and I/O was treated as an LPC resource). With this change, when a device in the tree is marked as ‘hidden’, it is not probed (pci_probe_dev()) but rather assumed to exist so that its resources can be placed in a more natural location. This also adds the ability for the device to participate in SSDT generation.

Tools for generating SPDs for LP4x memory on TGL and JSL

A set of new tools gen_spd.go and gen_part_id.go are added to automate the process of generating SPDs for LP4x memory and assigning hardware strap IDs for memory parts used on TGL and JSL based boards. The SPD data obtained from memory part vendors has to be massaged to format it correctly as per JEDEC and Intel MRC expectations. These tools take a list of memory parts describing their physical attributes as per their datasheet and convert those attributes into SPD files for the platforms. More details about the tools are added in README.md.

New version of SMM loader

A new version of the SMM loader was added which accommodates platforms with over 32 CPU threads. The existing version of the SMM loader uses a 64K code/data segment, and only a limited number of CPU threads can fit into one segment (because of save state, STM, other features, etc.). This loader extends beyond the 64K segment to accommodate additional CPUs and in theory allows as many CPU threads as possible, limited only by SMRAM space rather than by the 64K segment. By default this loader version is disabled. Please see cpu/x86/Kconfig for more info.

Address Sanitizer

coreboot now has an in-built Address Sanitizer, a runtime memory debugger designed to find out-of-bounds accesses and use-after-scope bugs. It is made available on all x86 platforms in ramstage and on QEMU i440fx, Intel Apollo Lake, and Haswell in romstage. Further, it can be enabled in romstage on other x86 platforms as well. Refer to the ASan documentation for more info.

Initial support for x86_64

The x86_64 support code has been revived and enabled for QEMU. While it started as a PoC and the only supported platform is an emulator, there's interest in enabling additional platforms. It would allow accessing more than 4GiB of memory at runtime and possibly bring optimised code for faster execution times. It still needs changes in assembly, fixed integer-to-pointer conversions in C, wrappers for blobs, support for running Option ROMs, among other things.

Preparations to minimize enabling PCI bus mastering

For security reasons, bus mastering should be enabled as late as possible. In coreboot, it’s usually not necessary and payloads should only enable it for devices they use. Since not all payloads enable bus mastering properly yet, some Kconfig options were added as an intermediate step to give some sort of “backwards compatibility”, which allow enabling or disabling bus mastering by groups.

Currently available groups are:

  • PCI bridges
  • Any devices

For now, “Any devices” is enabled by default to keep the traditional behaviour, which also includes all other options. This is currently necessary, for instance, for libpayload-based payloads as the drivers don’t enable bus mastering for PCI bridges.

Exceptional cases, that may still need early bus master enabling in the future, should get their own per-reason Kconfig option. Ideally before the next release.

Early runtime configurability of the console log level

Traditionally, we didn’t allow the log level of the romstage console to be changed at runtime (e.g. via get_option()). It turned out that the technical constraints for this (no global variables in romstage) vanished long ago, though. The new behaviour is to query get_option() now from the second stage that uses the console on. In other words, if the bootblock already enables the console, the romstage log level can be changed via get_option(). Keeping the log level of the first console static ensures that we can see console output even if there’s a bug in the more involved code to query options.

Resource allocator v4

A new revision of the resource allocator, v4, is now added to coreboot, supporting multiple ranges for allocating resources. Unlike the previous allocator (v3), it does not use the topmost available window for allocation. Instead, it uses the first window within the address space that is available and satisfies the resource request. This allows utilization of the entire available address space and also allows allocation above the 4G boundary. The old resource allocator v3 is still retained for some AMD platforms that do not conform to the requirements of the allocator.
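
As a rough illustration of the difference (this is a hypothetical first-fit sketch, not coreboot's actual allocator code):

```c
#include <stddef.h>
#include <stdint.h>

struct window {
	uint64_t base;
	uint64_t size;
};

/* Return the first address-space window (in ascending order) that can hold
 * the request; align must be a power of two. Returns 0 if nothing fits. */
static uint64_t alloc_first_fit(const struct window *wins, size_t n,
				uint64_t size, uint64_t align)
{
	for (size_t i = 0; i < n; i++) {
		uint64_t base = (wins[i].base + align - 1) & ~(align - 1);

		if (base + size <= wins[i].base + wins[i].size)
			return base;	/* first window that fits wins */
	}
	return 0;
}
```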

Deprecations

PCI bus master configuration options

In order to minimize the usage of PCI bus mastering, the options we introduced in this release will be dropped in a future release again. For more details, please see Preparations to minimize enabling PCI bus mastering.

Resource allocator v3

Resource allocator v3 is retained in coreboot tree because the following platforms do not conform to the requirements of the resource allocation i.e. not all the fixed resources of the platform are provided during the read_resources() operation:

  • northbridge/amd/pi/00630F01
  • northbridge/amd/pi/00730F01
  • northbridge/amd/pi/00660F01
  • northbridge/amd/agesa/family14
  • northbridge/amd/agesa/family15tn
  • northbridge/amd/agesa/family16kb

In order to have a single unified allocator in coreboot, this notice is being added to ensure that the platforms listed above are fixed before the next release. If there is interest in maintaining support for these platforms beyond the next release, please ensure that the platforms are fixed to conform to the expectations of resource allocation.

[GSoC] libgfxinit: Add support for Bay Trail

Hello everyone. I've been working on adding Bay Trail support to libgfxinit as a GSoC project. Yes, as I don't usually talk much outside of IRC and Gerrit, I imagine this post will come as a surprise to most people. Despite the journey being way more difficult than initially foreseen, I eventually managed to get most of what I could test on Bay Trail working, with next to no spaghetti-looking code.

The commits adding Bay Trail support to libgfxinit and integration with coreboot can be retrieved with this Gerrit query. Additionally, the coreboot port for the Asrock Q1900M mainboard used for testing can be found on this Gerrit change.

I ran into several problems while working on this GSoC project, and submitted various fixes and improvements. Links to these commits can be found in later sections. Strictly speaking, these commits are not directly related to this GSoC project, but they came up while working on it.

Unfortunately, I ran into multiple setbacks, which precluded me from completing everything I had originally planned within the GSoC timeframe:

  • Since I only managed to fix some bugs last-minute, the code has not been formally verified yet. Nevertheless, formally verifying the code before it works and has been reviewed is rather pointless, since it needs to be verified again after amending it.
  • DisplayPort and integrated panel support could not be tested due to inaccessible hardware. I had to take a plane from the university campus to home in June, and it was impossible to squeeze both the Asrock Q1900M and the Asus X551MA into my luggage. I decided to bring the Asrock Q1900M, as it is more compact and easier to work with.
  • There was no time left to work on Braswell support. While Bay Trail and Braswell are somewhat related, there are many differences regarding the undocumented parts of the hardware, and there’s even less documentation.

Undeterred by any misfortunes, I am going to finish what I started, come hell or high water.

Project details

libgfxinit is a graphics initialization (aka modesetting) library for embedded environments. It currently supports only Intel hardware, more specifically the Intel Core processor line. It can query and set up most kinds of displays based on their EDID information. You can, however, also specify particular mode lines.

Support for the Intel Bay Trail platform was missing in libgfxinit. The code hasn't landed upstream yet, so one would need to fetch it from Gerrit in order to use it. This involves fetching the libgfxinit patches first, using the checkout download option on CB:42359 (the topmost change), and then cherry-picking CB:44071 and CB:44072 into coreboot. CB:44938 and CB:39658 show how to enable libgfxinit for Bay Trail mainboards. Since the available video ports are mainboard-specific, gma-mainboard.ads needs to be adjusted accordingly.

Trials and Tribulations

The hardware is cursed

Getting software to work usually takes some testing, but when said software interacts with hardware, testing becomes essential. And when said hardware is largely undocumented, testing is pretty much the only option. The Display chapters of the graphics programming manuals for these platforms lack the information that matters for libgfxinit. Even the Bay Trail documentation turned out to be incomplete, especially regarding the display PHY and PLL registers. When working on libgfxinit, I soon got Bay Trail to show something on a monitor. However, making that work reliably took much longer than I expected. This was mainly because I needed to spend at least a day or two without looking at the code to see what was wrong with it.

Said PHY and PLL registers are hidden within IOSF-SB, a sideband interconnect network accessed through a mailbox-style interface. To access a register, one not only needs to read or write the register contents, but also has to program the destination port (address of the hardware block), opcode (which type of read or write) and register offset, and then poll a busy bit until the operation is complete. Of course, this register access mechanism is not described in the graphics documentation, so the only references are existing graphics drivers. Reading someone else's code in order to understand what the documentation should say is, at best, downright painful.
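
The access pattern looks roughly like the following hedged sketch. The register names, offsets and bit layout are placeholders (the real ones have to be dug out of existing graphics drivers, as described next):

```c
#include <stdint.h>

static inline uint32_t mmio_read32(uintptr_t a)
{
	return *(volatile uint32_t *)a;
}

static inline void mmio_write32(uintptr_t a, uint32_t v)
{
	*(volatile uint32_t *)a = v;
}

#define SB_ADDR    0x2108u	/* placeholder: register offset */
#define SB_DATA    0x2104u	/* placeholder: register contents */
#define SB_REQUEST 0x2100u	/* placeholder: port, opcode, doorbell */
#define SB_BUSY    (1u << 0)

static uint32_t sb_read(uintptr_t mmio_base, uint8_t port, uint8_t opcode,
			uint32_t offset)
{
	mmio_write32(mmio_base + SB_ADDR, offset);
	/* Assemble the request: destination port, opcode, busy bit.
	 * Getting these bitfields wrong silently sends the access to the
	 * wrong port, which is exactly the bug described below. */
	mmio_write32(mmio_base + SB_REQUEST,
		     ((uint32_t)opcode << 8) | ((uint32_t)port << 16) | SB_BUSY);
	while (mmio_read32(mmio_base + SB_REQUEST) & SB_BUSY)
		;	/* poll until the sideband transaction completes */
	return mmio_read32(mmio_base + SB_DATA);
}
```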

After I managed to get something to show on the screen, I noticed that this would only work on very specific system states. In addition, manually (using the intel_reg utility) writing several undocumented registers before running gfx_test would sometimes help. I eventually figured out that most of the accesses to undocumented registers did not have any effect, because of a blunder in the IOSF accessor library I wrote: I messed up the bitfields when assembling the request register (contains the target port, opcode and some always-one bits), so the accesses would often end up going to the wrong port.

There’s always more bugs

The Bay Trail code in coreboot was only used by a single mainboard: the Google Rambi chromebook/chromebox/chromebase family. Memory initialization is done by a binary-only executable, which contains Intel’s MRC (Memory Reference Code). This binary is simply called “MRC binary” or mrc.bin (the file’s name). However, it is actually an ELF binary, and the Makefile in coreboot will place it at a different offset depending on the file extension. So, Bay Trail has a mrc.elf instead: using the mrc.bin name will place the binary at the wrong offset, and won’t work.

Once this mystery was solved, the MRC on the Asrock Q1900M would not detect any DIMMs. Turns out SMBus support in MRC is broken, so one needs to read the SPD contents into a buffer, and then pass that buffer to the MRC. CB:44092 takes care of that.

Even then, MRC would still refuse to work on the Asrock Q1900M. After some digging, I found it is because MRC checks the memory type in the SPD, and bails out if the modules are not SO-DIMMs or do not support 1.35V operation. The Asrock Q1900M uses full-size desktop UDIMMs, which may not always support 1.35V operation. CB:39568 patches the necessary values in the SPD buffer so that MRC will function as intended.

Externally-induced translocation

I don’t mind running errands or going out in general. However, I do utterly despise having to move and live elsewhere: I have to pack my computers and parts, and I have lots of them. I live in an archipelago, I don’t have my own house nor car yet, and my parents’ home and the university campus aren’t on the same island. Dad usually comes with his car at the start/end of the school year, as I need to move lots of stuff. Oh, and my parents are divorced, so my sister and I go back and forth when not abroad.

Because of the coronavirus outbreak, in-person lessons were suspended for the rest of the academic year. Most people living in the university campus (there’s a students’ residence in there) went back home almost immediately. I didn’t, because I didn’t feel like taking a plane amidst the outbreak and preferred to stay in my cozy server dorm room. However, as there were no more in-person lessons, the residence had to close, and I eventually had to leave. Moreover, Dad couldn’t come this time because he was overwhelmed by work (he was unable to work during lockdown, so everything piled up until the lockdown ended). So, I had to take a plane and leave most of my stuff in the residence, including all of my monitors with digital inputs and one of my two Bay Trail machines, which I had planned on using for GSoC.

And if that wasn't enough, I've had to pack my things again every week. This means each week only had six useful days, at best. This, plus everything else going on at home, quickly burned me out. It reached a point where I couldn't bear it any longer and had to take a two-week break from coreboot development.

Conclusion

Although there were many unforeseen hurdles and problems around every corner, I would still call this a huge success. Just like university assignments, it has been rushed down to the last minute.

[GSoC] Address Sanitizer, Wrap-up

Hello everyone. The coding period for GSoC 2020 is now officially over and it’s time for the final evaluation. I’ll use this blog post to summarize the project details, illustrate the instructions to use ASan, and discuss some ideas on what can be done further to enhance this feature.

You can find the complete list of commits I made during GSoC with this Gerrit query.


Project details

Memory safety is hard to achieve. We, as humans, are bound to make mistakes in our code. While it may be straightforward to detect memory corruption bugs in a few lines of code, it becomes quite challenging to find those bugs in a massive codebase. In such cases, 'Address Sanitizer' may prove to be useful and could help save time.

Address Sanitizer, also known as ASan, is a runtime memory debugger designed to find out-of-bounds accesses and use-after-scope bugs. Over the past couple of weeks, I’ve been working to add support for ASan to coreboot. You can read my previous blog posts (Part 1, Part 2, and Part 3) to see my progress throughout the summer.

Here is a description of the components included in the project:

GCC patch

The design of ASan in coreboot is based on its implementation in Linux kernel, also known as Kernel Address Sanitizer (KASAN). However, we can’t directly port the code from Linux.

Unlike the Linux kernel, which has a static shadow region layout, we have multiple stages in coreboot and thus require a different shadow offset address for each stage. Unfortunately, GCC currently only supports adding a static shadow offset at compile time using the -fasan-shadow-offset flag. Therefore the foremost task was to add support for a dynamic shadow offset to GCC.

We enabled GCC to determine the shadow offset address at runtime using a callback function named __asan_shadow_offset. This supersedes the need to specify this address at compile time. GCC then makes use of this shadow offset in its internal mem_to_shadow translation function to poison stack variables’ redzones.

The patch further allowed us to place the shadow region in a separate linker section. This ensured that if a platform didn't have enough memory space to hold the shadow buffer, the build would fail.
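
Putting both pieces together, the translation looks roughly like this hedged sketch; the linker symbols are illustrative and the exact offset computation lives in coreboot's ASan runtime:

```c
#include <stdint.h>

extern char _asan_shadow[];	/* start of the asan_shadow linker section */
extern char _data[];		/* start of the instrumented region */

/* GCC calls this at runtime instead of baking a constant into the binary. */
uintptr_t __asan_shadow_offset(void)
{
	/* Chosen so that mem_to_shadow(_data) lands at _asan_shadow. */
	return (uintptr_t)_asan_shadow - ((uintptr_t)_data >> 3);
}

static inline uint8_t *mem_to_shadow(uintptr_t addr)
{
	/* Standard ASan scheme: one shadow byte per 8 application bytes. */
	return (uint8_t *)((addr >> 3) + __asan_shadow_offset());
}
```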

The way the patch was introduced to GCC's code base ensures that if one compiles a piece of code with the new switch enabled, i.e. --param asan-use-shadow-offset-callback=1, but has not applied the patch itself to GCC, the compiler will throw the following error, because the newly introduced switch is unknown to an out-of-the-box GCC: invalid --param name 'asan-use-shadow-offset-callback'.

I believe this patch might also be useful to the developers who contribute to other open-source projects. Hence, I’ve put this patch on GCC’s mailing list and asked GCC’s developers to include this feature in their upcoming release.

ASan in ramstage

Since ramstage uses DRAM, it should always have enough room in memory to hold the shadow buffer, regardless of the platform. Therefore, I began by adding support for ASan in ramstage on the x86 architecture.

To reserve space in memory for the shadow region, I created a separate linker section named asan_shadow. Instead of allocating shadow memory for the whole address space, which includes drivers and hardware-mapped addresses, I only defined a shadow region for the data and heap sections.

Then I started porting KASAN library functions, tweaking them to make them suitable for coreboot.

The next task was to initialize the shadow memory at runtime. I created a function called asan_init which unpoisons the shadow memory, i.e. sets the shadow bytes corresponding to the addresses in the data and heap sections to zero.

In the case of global variables, instead of poisoning the redzones directly, the compiler inserts constructors that invoke the library function __asan_register_globals to populate the relevant shadow memory regions. So, I wrote a function named asan_ctors which calls these constructors at runtime, and added a call to it in asan_init().
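A simplified sketch of this initialization follows, reusing the assumed asan_mem_to_shadow() helper from the sketch above; the _data and _eheap boundary symbols are likewise hypothetical:

#include <stdint.h>
#include <string.h>

#define ASAN_SHADOW_SCALE_SHIFT 3

extern uint8_t _data[], _eheap[];	/* assumed bounds of the shadowed range */

int8_t *asan_mem_to_shadow(uintptr_t addr);	/* from the sketch above */
void asan_ctors(void);				/* runs the compiler-generated constructors */

/* Zeroed shadow bytes mark the corresponding memory as fully addressable. */
static void unpoison_shadow_region(const void *addr, size_t size)
{
	memset(asan_mem_to_shadow((uintptr_t)addr), 0,
	       size >> ASAN_SHADOW_SCALE_SHIFT);
}

void asan_init(void)
{
	/* Unpoison everything from the start of .data to the end of the heap. */
	unpoison_shadow_region(_data, (size_t)(_eheap - _data));

	/* Register globals so their redzones get poisoned in the shadow. */
	asan_ctors();
}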

After doing some tests, I realized that the compiler’s ASan instrumentation cannot insert asan_load or asan_store checks in memory functions like memset, memmove and memcpy, as they are written in assembly. So, I added manual checks for both the source and destination pointers using the library function check_memory_region.

ASan in romstage

Once I had ASan in ramstage working as expected, I started adding support for ASan to romstage.

It was challenging for two reasons. First, even within the same architecture, the size of the L1 cache varies across platforms, from 32KB on Braswell to 80KB on Ice Lake, so we can’t enable ASan in romstage for all platforms based on tests on a handful of devices. Second, the cache is very small compared to RAM, making it difficult to fit the asan_shadow section into the limited memory.

Thankfully, the latter issue was, to a large extent, solved by our GCC patch, which allowed us to append the asan_shadow section to the region already occupied by the coreboot program and thus make efficient use of the limited memory.

To resolve the first issue, I introduced a Kconfig option called HAVE_ASAN_IN_ROMSTAGE to indicate whether a particular platform supports ASan in romstage. This allows ASan in romstage to be enabled only on the platforms that have been tested.

Based on the hardware available to me and my mentor, I enabled ASan in romstage for the Haswell and Apollo Lake platforms, in addition to QEMU.


Project usage

Instructions on how to use ASan are included in the ASan documentation. I’ll restate them here with an example.

Suppose there is a stack-out-of-bounds error in cbfs.c that we aren’t aware of. Let’s see if ASan can help us detect it.

int cbfs_boot_region_device(struct region_device *rdev)
{
	int stack_array[5], i;
	boot_device_init();

	/* Deliberate bug: indices 1..10 are written, but only 0..4 are valid. */
	for (i = 10; i > 0; i--)
		stack_array[i] = i;

	return vboot_locate_cbfs(rdev) &&
	       fmap_locate_area_as_rdev("COREBOOT", rdev);
}

First, we have to enable ASan from the configuration menu: just select Address sanitizer support from the General setup menu. Then, build coreboot and run the image.

ASan will report the following error in the console log:

ASan: stack-out-of-bounds in 0x7f7432fd
Write of 4 bytes at addr 0x7f7c2ac8

Here, 0x7f7432fd is the address of the last good instruction before the bad access. In coreboot, stages are relocated at runtime, so we have to normalize this address to find the instruction that causes the error.

For this, let’s subtract the start address of the stage, i.e. 0x7f72c000. The difference we get is 0x000172fd. According to our console log, the error happened in ramstage, so let’s look at the section headers of ramstage from ramstage.debug.

 $ objdump -h build/cbfs/fallback/ramstage.debug

build/cbfs/fallback/ramstage.debug:     file format elf32-i386

Sections:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         00070b20  00e00000  00e00000  00001000  2**12
                  CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
  1 .ctors        0000036c  00e70b20  00e70b20  00071b20  2**2
                  CONTENTS, ALLOC, LOAD, RELOC, DATA
  2 .data         0001c8f4  00e70e8c  00e70e8c  00071e8c  2**2
                  CONTENTS, ALLOC, LOAD, RELOC, DATA
  3 .bss          00012940  00e8d780  00e8d780  0008e780  2**7
                  ALLOC
  4 .heap         00004000  00ea00c0  00ea00c0  0008e780  2**0
                  ALLOC

Here, the offset of the text segment is 0x00e00000. Adding this offset to the difference we calculated earlier gives the address 0x00e172fd.

Next, we read the contents of the symbol table and search for the function whose address is closest to, without exceeding, 0x00e172fd.

 $ nm -n build/cbfs/fallback/ramstage.debug
........
........
00e17116 t _GLOBAL__sub_I_65535_1_gfx_get_init_done
00e17129 t tohex16
00e171db T cbfs_load_and_decompress
00e1729b T cbfs_boot_region_device
00e17387 T cbfs_boot_locate
00e1740d T cbfs_boot_map_with_leak
00e174ef T cbfs_boot_map_optionrom
........

The symbol having an address closest to 0x00e172fd is cbfs_boot_region_device and its address is 0x00e1729b. This is the function in which our memory bug is present.

Now that we know the affected function, we read the assembly contents of cbfs_boot_region_device, which is present in cbfs.o, to find the faulty instruction.

 $ objdump -d build/ramstage/lib/cbfs.o
........
........
  51:   e8 fc ff ff ff          call   52 <cbfs_boot_region_device+0x52>
  56:   83 ec 0c                sub    $0xc,%esp
  59:   57                      push   %edi
  5a:   83 ef 04                sub    $0x4,%edi
  5d:   e8 fc ff ff ff          call   5e <cbfs_boot_region_device+0x5e>
  62:   83 c4 10                add    $0x10,%esp
  65:   89 5f 04                mov    %ebx,0x4(%edi)
  68:   4b                      dec    %ebx
  69:   75 eb                   jne    56 <cbfs_boot_region_device+0x56>
........

Let’s look for the last good instruction before the error happens. It is the one at offset 62 (0x00e172fd - 0x00e1729b = 0x62).

The instruction is add $0x10,%esp and it corresponds to for (i = 10; i > 0; i--) in our code. This means the very next instruction, i.e. mov %ebx,0x4(%edi), is the one that causes the error. Now, if you look at the C code of cbfs_boot_region_device() again, you’ll find that this instruction corresponds to stack_array[i] = i.

Voilà! We just caught the memory bug using ASan.
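With the culprit identified, the fix is straightforward; for instance, the loop could be rewritten to stay within bounds (using coreboot’s ARRAY_SIZE helper):

	/* Corrected loop: only the valid indices 0..4 are written. */
	for (i = ARRAY_SIZE(stack_array) - 1; i >= 0; i--)
		stack_array[i] = i;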


Future work

While my work for GSoC 2020 is complete, I think the following extensions would be useful for this project:

Heap buffer overflow

Presently, ASan doesn’t detect out-of-bounds accesses for objects allocated on the heap. Fortunately, support for these types of memory bugs can be added easily.

We just have to make sure that whenever a block of memory is allocated on the heap, the surrounding areas (redzones) are poisoned. Correspondingly, these redzones should be unpoisoned when the memory block is de-allocated.
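A rough sketch of what that could look like around a malloc-style allocator; the asan_poison/asan_unpoison helpers and the redzone size are assumptions, not coreboot code:

#include <stdint.h>
#include <stdlib.h>

#define REDZONE 16	/* bytes of poisoned padding on each side of an allocation */

/* Assumed helpers that mark shadow bytes as inaccessible/accessible. */
void asan_poison(const void *addr, size_t size);
void asan_unpoison(const void *addr, size_t size);

void *asan_malloc(size_t size)
{
	uint8_t *p = malloc(size + 2 * REDZONE);

	if (!p)
		return NULL;
	asan_poison(p, REDZONE);			/* left redzone */
	asan_unpoison(p + REDZONE, size);		/* user-visible block */
	asan_poison(p + REDZONE + size, REDZONE);	/* right redzone */
	return p + REDZONE;
}

On free, the corresponding bookkeeping would run in reverse; recording each allocation’s size somewhere is required for that and is omitted in this sketch.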

Post-processing script

Unlike Linux, coreboot doesn’t have the %pS printk format specifier to resolve a pointer to its symbolic name. Therefore, we normalize the pointer address manually, as shown above, to determine the name of the affected function, and then use it to find the instruction that causes the error.

A custom script can be written to automate this process.
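Until such a script exists, even a tiny helper that performs the same arithmetic as the walkthrough above can save some typing (illustrative only; the symbol lookup via nm remains a separate step):

/* normalize.c: convert a runtime address reported by ASan into the
 * link-time address found in the stage's .debug file. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	if (argc != 4) {
		fprintf(stderr, "usage: %s <fault-addr> <stage-base> <text-vma>\n",
			argv[0]);
		return 1;
	}
	unsigned long addr = strtoul(argv[1], NULL, 16);
	unsigned long base = strtoul(argv[2], NULL, 16);
	unsigned long text = strtoul(argv[3], NULL, 16);

	/* Same steps as above: subtract the stage base, add the .text VMA. */
	printf("0x%08lx\n", addr - base + text);
	return 0;
}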

Support for other platforms and architectures

The Jenkins builder built successfully for all x86 boards except the ones with either the Braswell SoC or the i440bx northbridge, where the cache area was full and couldn’t fit the asan_shadow section. This shows that support for ASan in romstage can easily be added to most x86 platforms. We just have to test them by selecting the HAVE_ASAN_IN_ROMSTAGE option and resolve any compilation errors.

Enabling ASan in ramstage on other architectures like ARM or RISC-V should be easy too. We just have to make sure the shadow memory is initialized as early as possible when ramstage is loaded, which can be done by calling asan_init() at the appropriate place.

Similarly, ASan in romstage can be enabled for other architectures. I have mentioned some key points in the ASan documentation that could help anyone interested in doing so.

For the platforms that don’t have enough space in the cache to hold the asan_shadow section, we would have to come up with a new translation function that uses a much more compact shadow memory encoding. Since the stack buffers are protected by the compiler, we would also have to create another GCC patch forcing it to use the new translation function on those platforms.


Acknowledgement

I’d like to thank my mentor Werner Zeh for his continued assistance during the past 13 weeks. This project certainly wouldn’t have been possible without his valuable suggestions and the knowledge he shared. I’d also like to thank Patrick Georgi for helping me with the work authorization at the beginning and for supervising my work while Werner was on vacation.

Further, I am grateful to every member of the community for assisting me whenever I got stuck, reviewing my code, reading my blogs, and sharing their feedback.


It has been an amazing journey and I look forward to contributing to coreboot in the future.

[GSoC] Address Sanitizer, Part 3

Hello again! The third and final phase of GSoC is coming to an end and I’m glad that I made it this far. In this blog post, I’d like to outline the work done in the last two weeks.


Memory functions

The compiler’s ASan instrumentation cannot insert asan_load or asan_store memory checks in memory functions like memset, memmove and memcpy, because these functions are written in assembly.

While the Linux kernel replaces these functions with variants written in C, I took a different approach.

In coreboot, the assembly instructions for these memory functions are embedded into C code using GNU’s asm extension. This gave me the opportunity to use the ASan library function check_memory_region to manually check the memory state before each of these operations. At the start of each function, I added the following code snippet:

#if (ENV_ROMSTAGE && CONFIG(ASAN_IN_ROMSTAGE)) || (ENV_RAMSTAGE && CONFIG(ASAN_IN_RAMSTAGE))
	/* The third argument tells ASan whether the access is a write. */
	check_memory_region((unsigned long)src, n, false, _RET_IP_);	/* read check */
	check_memory_region((unsigned long)dest, n, true, _RET_IP_);	/* write check */
#endif

This way, I neither had to fiddle with the assembly instructions nor replace these functions with their C variants, which might have caused performance issues. [CB:44307]
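For context, here is a simplified sketch of how such a check sits at the top of an x86 memcpy written with inline assembly; the exact assembly in coreboot may differ, this just follows the classic rep movsl/movsb pattern:

void *memcpy(void *dest, const void *src, size_t n)
{
	unsigned long d0, d1, d2;

#if (ENV_ROMSTAGE && CONFIG(ASAN_IN_ROMSTAGE)) || \
	(ENV_RAMSTAGE && CONFIG(ASAN_IN_RAMSTAGE))
	/* Validate both buffers before the assembly below touches them. */
	check_memory_region((unsigned long)src, n, false, _RET_IP_);
	check_memory_region((unsigned long)dest, n, true, _RET_IP_);
#endif

	/* Copy four bytes at a time, then the remaining 0..3 bytes. */
	asm volatile(
		"rep ; movsl\n\t"
		"movl %4, %%ecx\n\t"
		"rep ; movsb"
		: "=&c" (d0), "=&D" (d1), "=&S" (d2)
		: "0" (n >> 2), "g" (n & 3), "1" (dest), "2" (src)
		: "memory");

	return dest;
}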


Documentation

Since I finished a little earlier than what I had proposed to deliver, Werner suggested that I write documentation for ASan, and I’m happy that he did. When I read the intro of the documentation guidelines, I realized how a feature as significant as ASan might go unnoticed and unused by many if it lacks proper documentation.

In ASan documentation, I have tried my best to answer questions like how to use ASan, what kind of bugs can be detected, what devices are currently supported, and how ASan support can be added to other architectures like ARM or RISC-V.

The documentation is not final yet and I’d really appreciate inputs from the community. So, please give it a read and share your feedback. [CB:44814]


In the end, I’d like to announce that ASan patches have been merged into the coreboot source tree. You can go ahead and make use of this debugging tool to look for memory corruption bugs in your code.

[ASan patches]