Coreboot on the ASRock E3C246D4I

A new toy to play with OpenBMC

I wanted to play around with OpenBMC on a physical board, and this article led me to the ASRock E3C246D4I. It's a not overly expensive Intel Coffee Lake board featuring an Aspeed AST2500 BMC. So the first thing I did was to compile OpenBMC. My computer was in for quite a chore there: it needed to download 11G of sources and compile them. Needless to say, this takes a long time on a notebook computer and is best done overnight. I flashed the image via the SPI header next to the BMC flash, at first using some mini crocodile clips.

To set up a nice way to play around with OpenBMC I attempted to hook up an EM100Pro, a flash emulator, via the header. This did not seem to work, and I'm not sure what was going on. It looked like the real flash chip was not reliably put on HOLD by the EM100: when tracing the SPI commands with the EM100, 0xff was the response to all the (fast)read commands. I guess a fast SPI programmer will have to do for now. OpenBMC makes updates quite easy though: just copy the image-bmc to /run/initramfs/ and rebooting will launch an update, which takes a minute or so (faster than my external programmer).

So what can OpenBMC do on that board? Not much, it seemed at first. Powering the board on and off did not work, and not much else did either. The author of the port, Zev Zweiss, helped me a lot to get things working though. With a bit of manual GPIO magic, powering on and off works well. So the upstream code needs a bit of polish to get working, but using the branch from the original author of this board port fared much better: sensors and power control work fine. Fan control is not implemented yet, but I might look into that. Max fan speeds might be OK in a datacenter, but certainly not in my home office!

Coreboot

The host flash is muxed to the BMC SPI pins, so the BMC can easily (re)flash the host firmware (and is even faster at this than the host PCH, due to the high SPI frequency the BMC can use). To get that working, a few things needed to be done on the BMC. The flash is hooked up to the BMC SPI1 master bus, which needs to be declared in the FDT. U-Boot needs to set the SPI1 controller in master mode. The mux is controlled via a GPIO. Two other GPIOs also need to be configured such that the ME on the PCH does not attempt to mess with the firmware while we're flashing (the ME_RECOVERY pins). A flash controlled by a BMC is a very comfortable situation for a coreboot developer, who needs to do a dozen reflashes an hour, so hacking on coreboot with this device was bliss (as soon as I got the UART console working).

I don't have the schematics to this board, so I had to work with what the vendor AMI firmware has set up and decode it from the hardware registers. This worked well: there is a tool in util/intelp2m to generate the PCH GPIO configuration, which outputs valid C code that can be integrated directly into coreboot (an illustrative sketch of such output appears at the end of this post). I built a minimal port based on other Intel Coffee Lake boards, and after fixing a few issues like the console not working and memory init failing, it seemed to have initialised all the PCI devices more or less correctly and got to the payload!

The default payload on x86 with coreboot is SeaBIOS. It looks like this payload does not like this board very much though: it hangs in the menu and I never got to boot anything with it. TianoCore (EDK2) proved a much better match and was able to boot from my HDD attached via USB without any issues. Booting the virtual CD-ROM from the BMC also worked like a charm.

You can find the code on Gerrit. Most things, like USB, the 10G NICs, BMC IP-KVM and BMC Serial over LAN, are working with that code. What's next: get a LinuxBoot payload working and write some public documentation on how to set things up for OpenBMC and coreboot on this nice board. Maybe I can also get u-bmc working on this board? A few seconds versus a few hours in compile time does seem like a compelling argument.
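For a feel of what that generated configuration looks like, here is a short sketch in the style of intelp2m output. The pad numbers and functions below are made up for illustration and are not the board's actual settings; the macros and gpio_configure_pads() are coreboot's common Intel GPIO interface.

```c
/* Illustrative only: these pads/functions are hypothetical, not the
 * real E3C246D4I configuration decoded from the hardware registers. */
#include <soc/gpio.h>

static const struct pad_config gpio_table[] = {
	PAD_CFG_NF(GPP_C20, NONE, DEEP, NF1),	/* e.g. UART2_RXD */
	PAD_CFG_NF(GPP_C21, NONE, DEEP, NF1),	/* e.g. UART2_TXD */
	PAD_CFG_GPO(GPP_B14, 1, PLTRST),	/* e.g. a board enable line */
	PAD_CFG_GPI(GPP_D9, NONE, PLTRST),	/* e.g. a strap/status input */
};

/* Called from board code during early init. */
void mainboard_configure_gpios(void)
{
	gpio_configure_pads(gpio_table, ARRAY_SIZE(gpio_table));
}
```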

Open source cache as ram with Intel Bootguard

FSP-T in open source projects

X86 CPUs boot up in a very bare state. They execute the first instruction at the top of memory-mapped flash in 16-bit real mode. DRAM is not available (AMD Zen CPUs are the exception) and the CPU typically has no memory-addressable SRAM, a feature which is common on ARM SoCs. This makes running C code quite hard, because C requires a stack. On x86 this was solved using a technique called cache-as-RAM, or CAR. Intel calls this non-eviction mode, or NEM. You can read more about this here. Coreboot has support for setting up and tearing down CAR with two different codepaths:
  1. Using an open source implementation.
  2. Using a closed source implementation, using FSP-T (TempRamInit) and FSP-M (TempRamExit).
In coreboot the open-source implementation is the most used one. For instance, all Google ChromeOS platforms use it, so it's well tested. FSP is a proprietary binary provided by Intel that can be split up into three components: FSP-T (in charge of setting up the early execution environment), FSP-M (configures the DRAM controller) and FSP-S (further silicon init). With the FSP codepath in coreboot, you call into FSP-T using the TempRamInit API to set up the early execution environment in which you can execute C code later on. This binary sets up CAR just like coreboot does, but also does some initial hardware initialisation, like setting up the PCIe memory-mapped configuration space. On most platforms coreboot is fully able to do that early hardware init itself, so that extra initialisation in FSP-T is superfluous. After DRAM has been initialised, you want to tear down the CAR environment to start executing code in actual DRAM. Coreboot can do that using open-source code; it's typically just a few lines of assembly to disable the non-eviction mode the CPU is running in. The other option is to call FSP-M with the TempRamExit API. See the FSP v2.0 spec for more information on TempRamInit and TempRamExit. Sidenote: running FSP-T TempRamInit does not necessarily mean you need to run TempRamExit, as it is possible to just reuse the simple coreboot code. This is done on some platforms to avoid problems with TempRamExit.

It's generally a very bad idea to give up control of setting up the execution environment to external code. The most important technical reason not to do this is that coreboot needs to be in control of the caching setup. When that is not the case, you encounter all kinds of problems, because that assumption is really baked into many parts of the code. Coreboot has different stages: bootblock, romstage and ramstage, and those are actually all separate programs that have their own well-defined execution environment. If a blob or reference code sets up or changes the execution environment, it makes proper integration much harder. Before Intel started integrating FSP into coreboot, AMD had a go at integrating their reference code, called AGESA, into coreboot. Even though AGESA was provided not as a blob but as open-source code, it had very similar integration issues, for exactly this reason: it messed with the execution environment. As a matter of fact, Intel FSP v1.0 messed up the execution environment so badly that it was deemed fatally flawed, and support for FSP v1.0 was subsequently dropped from the coreboot master branch.

So for technical reasons you want to avoid using FSP-T inside coreboot at all costs. From a marketing perspective FSP-T is also a disaster: you really cannot call coreboot an open-source firmware project if even setting up the execution environment is delegated to a blob.
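To make the open-source codepath a bit more concrete, here is a heavily simplified, C-flavoured sketch of what a CAR setup does. Real implementations live in assembly (there is no stack to run C on yet) and handle many more details, including the NEM-specific enable sequence, so treat this purely as an illustration of the idea:

```c
#include <stdint.h>

/* Architectural MTRR MSRs; the Intel-specific NEM enable steps that
 * follow this in real code are omitted. */
#define MTRR_PHYS_BASE0	0x200
#define MTRR_PHYS_MASK0	0x201
#define MTRR_DEF_TYPE	0x2ff
#define MTRR_TYPE_WB	0x06
#define MTRR_ENABLE	(1 << 11)	/* in MTRR_DEF_TYPE */
#define MTRR_MASK_VALID	(1 << 11)	/* in MTRR_PHYS_MASK */

static inline void wrmsr64(uint32_t msr, uint64_t val)
{
	__asm__ volatile ("wrmsr" :: "c" (msr),
			  "a" ((uint32_t)val), "d" ((uint32_t)(val >> 32)));
}

/* Mark car_size bytes at car_base write-back cacheable, then (in the
 * real assembly) enable the cache in no-eviction mode and touch every
 * cacheline of the region. Since those lines can neither be evicted
 * nor written back, they behave like SRAM: enough for a stack and C
 * code until DRAM is up. Assumes car_size is a power of two; 36-bit
 * physical addressing is assumed for brevity. */
void car_setup_sketch(uint64_t car_base, uint64_t car_size)
{
	wrmsr64(MTRR_PHYS_BASE0, car_base | MTRR_TYPE_WB);
	wrmsr64(MTRR_PHYS_MASK0,
		(~(car_size - 1) & ((1ULL << 36) - 1)) | MTRR_MASK_VALID);
	wrmsr64(MTRR_DEF_TYPE, MTRR_ENABLE);
}
```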

Open source cache as ram with Intel Bootguard

One of the reasons why there is still code to integrate FSP-T inside coreboot is Intel Bootguard support. Here you can read more on our work with that technology. Open-source CAR did not work when the Bootguard ACM was run before reset. With Bootguard, the first instruction that is run on the main CPU is no longer the reset vector at 0xfffffff0. The Intel Management Engine (ME) validates the Authenticated Code Module, or ACM, with keys owned by Intel. The ACM code then verifies parts of the main boot firmware, in this case the coreboot bootblock, with a key owned by the OEM, a hash of which is fused inside the ME hardware. To do this verification the ACM sets up an execution environment using exactly the same method as the main firmware: NEM. The reason that open-source cache-as-RAM did not work is that the ACM has already set up NEM, so what needs to be done is to skip the NEM setup: you just want to set up a caching environment for the coreboot CAR, fill those cachelines and leave the rest of the setup as is. Bootguard-capable CPUs have a read-only MSR with a bit that indicates whether NEM setup has already been done by an ACM. When that is the case, a different codepath needs to be taken that avoids setting up NEM again (see CB:36682 and CB:54010; a sketch of the check closes this section). It looks like filling cachelines for CAR is also a bit more tricky and needs more explicit care (CB:55791). So with very little code we were able to get Bootguard working with open-source CAR! With no fspt.bin in CBFS, the bootblock runs with a working console and romstage is loaded, which means that cache-as-RAM works as intended. Console and Bootguard success: cache-as-RAM without FSP-T worked!
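The check itself is tiny. Here is a C sketch of it; coreboot actually does this in the CAR entry assembly, and while the MSR number below matches what coreboot's code calls MSR_BOOT_GUARD_SACM_INFO, treat the exact numbers as an assumption to verify against your platform documentation:

```c
#include <cpu/x86/msr.h>

/* Assumed values, matching coreboot's CAR entry code. */
#define MSR_BOOT_GUARD_SACM_INFO	0x13a
#define SACM_INFO_NEM_ENABLED		(1 << 0)

/* Returns non-zero if a Bootguard ACM already enabled NEM, in which
 * case the bootblock must skip its own NEM setup and only program the
 * CAR cache ranges and fill the cachelines. */
static int acm_set_up_nem(void)
{
	msr_t msr = rdmsr(MSR_BOOT_GUARD_SACM_INFO);
	return msr.lo & SACM_INFO_NEM_ENABLED;
}
```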

What's next?

Given that all Intel client platforms now work with open-source cache-as-RAM, including Bootguard support, there is no reason to keep FSP-T as a supported option for them. There are, however, still Intel platforms in the coreboot tree that require FSP-T: Skylake-SP, Cooperlake-SP and Denverton-NS depend on the other early hardware init that is done in FSP-T, for which there is no open-source equivalent in coreboot. This makes FSP-T mandatory on those platforms for the time being. The advantages of being in control of the execution environment are overwhelming. From personal experience working with the Cooperlake-SP platform, we regularly hit issues with FSP-T. Sometimes those were bugs inside the FSP-T code that had to be worked around; on other occasions it was coreboot making assumptions about the bootflow that were not compatible with FSP being in control of the execution environment. I can firmly say that FSP-T causes more trouble than it actually solves, so having that code open-sourced is the best strategy. We hope that by setting this good example with open-source Bootguard support, others will be incentivised not to rely on FSP-T but to pursue open-source solutions.

Wrangling the EC: Adventures in Power Sequencing

As we outlined in a previous post, the Librem 14 is the first Purism laptop to ship with our new, free software Librem-EC firmware for the laptop’s embedded controller (EC). This was a big undertaking, and as with any effort of this magnitude, issues arise in corner cases that often don’t show themselves during developmental testing, when only a small number of devices are tested. One such issue was with the power sequencing — the order and timing of all the different voltage rails and power sources/signals in the laptop.

The Problem

We were preparing to ship out the first batch of Librem 14s right as we were putting the finishing touches on the firmware (both for the embedded controller, and the main coreboot/Pureboot firmware), and everything was looking good to finally get devices flashed and into our users’ hands. But as we flashed the laptops’ firmware to prep for shipping, a small but not insignificant number of devices were failing to boot. At first we thought the issue was with our flashing process, which was new for the Librem 14, as both the EC and main firmware are flashed in sequence on a live system. But re-flashing these problematic devices externally (using a USB flash programmer and a chip clip) did not resolve the issue, so the issue wasn’t with the flashing process itself.

Initial Troubleshooting

Since our flashing process didn’t appear to be the issue, the next step was to determine whether the issue lay within the EC firmware or coreboot. As it is easier to get live debug output from the EC than from coreboot, we started there. We compiled and flashed a debug build of the EC firmware and attached the debugger. The debug console showed that the EC was booting and properly transitioning the main CPU from off (the S5 power state) to normal boot (the S0 power state). Since the laptop was in S0, coreboot should have been running, so it was time to see what was going on there. Getting debug output from coreboot is slightly more difficult than from the EC, as the most common method — serial UART output — isn’t exposed anywhere on the Librem 14’s mainboard. Luckily we have another option: the SPI flash console (developed in-house and upstreamed into coreboot), which writes the coreboot console log to a dedicated region of the firmware flash chip itself. We compiled and flashed a debug build of coreboot with the flash console enabled, attempted a boot, and then read the flash chip back to see where coreboot was dying.
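Conceptually the flash console is simple: every console byte gets appended to a dedicated FMAP region, which can be read back after the failed boot. Here is a minimal sketch with hypothetical helper names, not coreboot's exact driver API:

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed to be provided by the SPI flash driver. */
extern int spi_flash_write(uint32_t offset, size_t len, const void *buf);

static uint32_t region_base;	/* base of the console FMAP region */
static uint32_t region_size;	/* size of that region */
static uint32_t cursor;		/* next free byte within the region */
static uint8_t line[256];
static size_t line_len;

/* Buffer a line at a time: flash writes are slow, and written bytes
 * cannot be rewritten without an erase, so we only ever append. */
void flash_console_tx_byte(uint8_t c)
{
	if (line_len < sizeof(line))
		line[line_len++] = c;
	if (c != '\n' && line_len < sizeof(line))
		return;
	if (cursor + line_len <= region_size)
		spi_flash_write(region_base + cursor, line_len, line);
	cursor += line_len;
	line_len = 0;
}
```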

Diving Deeper

Reading back the flash console log showed that RAM init (performed by Intel’s FSP-M blob) was failing, but the error code was so generic it provided no clue as to the root cause of the failure. At this point, we’d need to get serial/UART output somehow in order to get more info from FSP as to the reason for the failure. Reviewing the schematics showed a promising location to get the UART output (TX) line from the CPU, and next-day delivery from a friendly internet retailer for some needed hardware meant we were in business — or so we thought. Despite verifying all of the hardware separately (and verifying coreboot was correctly configured for UART output), no debug output was received. We ordered more hardware, and spent more time verifying all the links in the chain independently, to no avail.
[Image: EC debugging via I2C using an Adafruit Trinket M0 attached to the battery connector]
As we were unable to get any debug output from coreboot, we wanted to verify that the hardware was working, which we could also do from a booted OS while running the proprietary EC and system firmware. One of the problematic devices was flashed back to “stock” configuration, and booted right up without issue (as expected), but disappointingly still failed to provide any output via the UART debug port. On a whim, we flashed coreboot on the device (with the proprietary EC firmware), and to our surprise it booted right up. This serendipitous occurrence told us that the issue was almost certainly in our Librem EC firmware, not in coreboot.

A Little Primer on Power

To give a bit of background, the power sequence to boot a modern CPU (transition from S5 to S0) is a very, very complicated beast. It requires a specific sequence and precise timing between the EC, PCH, and voltage regulators for the enablement of the power rails and “power good” signals. There’s also an additional low power state (DS5 / deep S5) in which the EC sits when it first gets power (either from the internal battery, or an external power source), before anything else happens. So we need to precisely manage the order and timing of turning on power sources (rails) and signals (“power good”) from DS5 to S5 and then to S0.
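As a rough illustration of what the EC has to orchestrate, here is a highly simplified sketch of the DS5 → S5 → S0 flow. The rail and signal names, helpers and timings are all hypothetical, and the real sequence has many more steps and constraints:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical board-support hooks; illustration only. */
extern void enable_rail(int rail);
extern bool rail_power_good(int rail);
extern void assert_signal(int sig);
extern void udelay(uint32_t us);

enum { RAIL_3V3_STANDBY, RAIL_VCC_CPU, RAIL_VDDQ_RAM };
enum { SIG_RSMRST_N, SIG_DRAM_PWROK, SIG_SYS_PWROK };

/* DS5 -> S5: bring up standby power so the PCH can leave reset. */
static void ds5_to_s5(void)
{
	enable_rail(RAIL_3V3_STANDBY);
	while (!rail_power_good(RAIL_3V3_STANDBY))
		udelay(100);
	assert_signal(SIG_RSMRST_N);
}

/* S5 -> S0: main rails in order, then the "power good" signals that
 * let the PCH bring the CPU out of reset. */
static void s5_to_s0(void)
{
	enable_rail(RAIL_VCC_CPU);
	enable_rail(RAIL_VDDQ_RAM);
	while (!rail_power_good(RAIL_VCC_CPU) ||
	       !rail_power_good(RAIL_VDDQ_RAM))
		udelay(100);
	assert_signal(SIG_DRAM_PWROK);
	assert_signal(SIG_SYS_PWROK);
}
```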

A Fresh Approach

When developing the Librem EC firmware, we didn’t start from a completely blank slate. We based our firmware on System76’s open EC firmware (we forked it, in free software development terms), and we had the source code to the proprietary EC firmware. Both of these had their own baggage though: the original code was written for a different EC chip, and our board design/layout was very different; the proprietary EC code was a spaghetti mess with headers indicating development in the 2006-2009 time frame, and no comments or documentation. Despite these hurdles, we managed to reverse engineer the power sequencing from the proprietary EC code and apply it to the Librem EC code, along with all the other customization needed for the Librem 14. And it worked perfectly on all our development machines. But now, with large-scale flashing of production devices, we were experiencing boot failures.

We knew that 1) the problem was with RAM initialization, and 2) the problem was in the EC firmware. The EC doesn’t directly control the power to the RAM, but it does tell the main CPU (or more precisely, the platform controller hub / PCH) that power to the RAM is on and stable via the aforementioned “power good” signals. These signals tell the PCH that it’s safe to turn on other hardware components, and eventually to bring the CPU out of reset and start the boot process (this is the transition from the S5 power state to S0).

Zeroing In

Figuring that these “power good” signals were a good place to start looking, we again compared the proprietary EC code to our Librem EC firmware. This reexamination revealed that although we’d matched the sequence and timing of the power rails/signals, the way certain functions were called in the Librem EC code meant that there was some variability in the timing of the enablement of two of the “power good” signals. Under certain conditions, it was possible for them to be turned on too early, before the voltage regulator had stabilized the power to the RAM. Having identified the potential root cause, we adjusted the Librem EC code to ensure those signals didn’t turn on until the voltage regulator indicated the RAM was ready, crossed our fingers, and began testing. To our collective relief, this small adjustment to the power sequencing did indeed fix the issue. Our factory images were updated, and all previously “bricked” Librem 14s were updated with the new Librem EC firmware.
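In code terms, the bug and the fix look roughly like the following sketch, reusing the hypothetical helpers from the power-sequencing sketch above. The key change is gating the signals on the regulator's own status rather than on elapsed time:

```c
/* Before (buggy): assert "power good" after a fixed delay, assuming
 * the RAM voltage regulator has stabilized by then. Usually, but not
 * always, true. */
static void dram_pwrok_fixed_delay(void)
{
	enable_rail(RAIL_VDDQ_RAM);
	udelay(10000);			/* hope 10 ms is enough */
	assert_signal(SIG_DRAM_PWROK);
}

/* After (fixed): only assert "power good" once the regulator itself
 * reports a stable output. */
static void dram_pwrok_gated(void)
{
	enable_rail(RAIL_VDDQ_RAM);
	while (!rail_power_good(RAIL_VDDQ_RAM))
		udelay(100);
	assert_signal(SIG_DRAM_PWROK);
}
```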

Hardware is Funny Like That

But the issue only affected a small percentage of Librem 14s – why were some affected and not others? That’s a damn good question, and one we wish we had a good answer for. There’s always some variation in hardware components, and even when all components are within spec, a few microseconds difference across a number of components can add up to enough to make the difference between the laptop booting or not. The good news is now that the issue has been identified and resolved, all future Librem 14s will ship with the updated EC firmware, and existing ones will receive it as an update at some point in the future (less critical here, since these devices are functioning normally).

Hardware assisted root of trust mechanism and coreboot internals

I started working for 9elements in October 2020, and my first assignment was to get Intel CBnT working on the OCP Deltalake using coreboot firmware. Intel Converged Bootguard and TXT (CBnT) is a hardware-assisted method to set up a root of trust. In this blog post I will discuss some of the changes needed in coreboot to get this working. Setting CBnT up properly was definitely a challenge, but the work did not stop there. While Intel CBnT provides a method to verify or measure the initial start-up code, that alone is not enough: you want to trust the code you run from start (the reset vector) to end (typically a bootloader), and CBnT only takes care of the start. You have to make sure that each software component trusts the assets it uses and the next program it loads. This concept is called a chain of trust. Now, in 2021, I have an assignment that involves supporting the older Intel Bootguard technology. Since Bootguard is very similar to CBnT, I'll also touch on that.

Intel Converged Bootguard and TXT: a root of trust

Intel CBnT merges the functionality provided by TXT and BtG in one Authenticated Code Module (ACM). This is a code module signed by Intel that runs on the main CPU before the traditional x86 reset vector at address 0xFFFFFFF0. The job of this ACM is to measure and/or verify the main firmware, depending on the 'profile' that has been set up in the Intel Management Engine (ME). The profile determines what happens in case of a measurement or verification error. Strong policies halt the system entirely in case of failure, while weaker ones just report errors but still continue booting. The latter is often desirable for servers in a hyperscaler setup, so that the system admin can still run diagnostics on the system.

A few different components are needed in a working CBnT setup. The ACM was already mentioned; it is found by the hardware via a pointer in the Intel Firmware Interface Table (FIT), which itself is found via a pointer at the fixed location 0xFFFFFFC0. The other necessary components are the Key Manifest (KM) and Boot Policy Manifest (BPM), which are also found via the FIT. The chain of trust is started in the following way: the Intel ME has a fuse which holds the hash of the public key of the KM. For testing CBnT this can be set up with 'fake fusing', where the hash can still be changed afterwards; on production systems the hash is fused permanently. The ACM compares that fused hash to the public key inside the KM, which is signed with the KM private key. The KM itself holds a hash of the BPM public key, which is compared to the public key stored in the BPM. The BPM is signed with the BPM private key. The role of the BPM is to define which segments of the firmware form the Initial Boot Block (IBB). The BPM contains a digest of the IBBs, and as such establishes a root of trust if the digest in the BPM matches what is on the flash. More in-depth info on this can be found in intel boot guard.

This is what Bootguard also did. What CBnT offers on top is the TXT functionality: the IBBs are measured into PCR0 of the TPM. Other TXT functionality, like clearing memory or locking down the platform before setting up DRTM with the SINIT ACM, is also provided by the CBnT ACM. See the TCG DRTM spec for more info on this. Merging the TXT functionality makes CBnT ACMs much bigger than BtG ACMs (256K vs 32K, depending on the platform).

The KM and BPM separation has in mind that there is one hardware owner but multiple OEMs. The hardware then always gets fused with the key of the owner. Each OEM might want to roll their own firmware and has its own BPM key. The owner then creates a KM with the OEM's BPM key hash inside it, and the OEM can then generate their own BPM that matches the firmware they intend to use. KM and BPM also provide security version numbers that can be enforced. So the same hardware can have different OEMs during its lifetime, and the previous OEM won't be able to generate a working CBnT image for the new OEM.

The strength of a CBnT setup and the trustworthiness it provides lie in the following things (a sketch of the verification chain follows after this list):
  • the signature verification of the ACM done by the ME/Microcode (Intel)
  • ACM signing key remaining private (Intel)
  • the verification done by the ACM, which is a closed binary (it's 256K so not so small)
  • KM keys remaining private (Manufacturer)
  • OEM keys remaining private (OEM), KM security version number can be updated to make OEM keys obsolete though
  • The IBB needs to continue the chain of trust, which is what the next section is about
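To make the fuse → KM → BPM → IBB chain described above concrete, here is a compact sketch of the checks the ACM performs. The structure layouts and crypto helpers are hypothetical stand-ins, not the real manifest formats:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed crypto primitives. */
extern void sha256(const void *buf, size_t len, uint8_t out[32]);
extern bool rsa_verify(const uint8_t *pubkey, const void *msg, size_t len,
		       const uint8_t *sig);

struct km {
	uint8_t pubkey[256];		/* its hash must match the ME fuse */
	uint8_t bpm_key_digest[32];	/* hash of the BPM public key */
	uint8_t sig[256];		/* made with the KM private key */
};

struct bpm {
	uint8_t pubkey[256];
	uint8_t ibb_digest[32];		/* expected hash of the IBBs */
	uint8_t sig[256];		/* made with the BPM private key */
};

static bool verify_boot_policy(const uint8_t fused_km_key_digest[32],
			       const struct km *km, const struct bpm *bpm,
			       const void *ibb, size_t ibb_len)
{
	uint8_t digest[32];

	/* 1. The KM public key must hash to the value fused in the ME. */
	sha256(km->pubkey, sizeof(km->pubkey), digest);
	if (memcmp(digest, fused_km_key_digest, sizeof(digest)))
		return false;
	/* 2. The KM must be signed with that key. */
	if (!rsa_verify(km->pubkey, km, offsetof(struct km, sig), km->sig))
		return false;
	/* 3. The BPM public key must match the digest in the KM. */
	sha256(bpm->pubkey, sizeof(bpm->pubkey), digest);
	if (memcmp(digest, km->bpm_key_digest, sizeof(digest)))
		return false;
	/* 4. The BPM must be signed with the BPM key. */
	if (!rsa_verify(bpm->pubkey, bpm, offsetof(struct bpm, sig),
			bpm->sig))
		return false;
	/* 5. The IBB on flash must match the digest in the BPM. */
	sha256(ibb, ibb_len, digest);
	return memcmp(digest, bpm->ibb_digest, sizeof(digest)) == 0;
}
```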

Chain of trust

Coreboot essentially supports two different methods for setting up a chain of trust: verified boot and measured boot. Verified boot is conceptually easier to grasp. Each software component that is run after the bootblock is signed with a private key. The trusted bootblock has the corresponding public key and uses it to verify the integrity of the next program. If the signature does not match the binary, the firmware can report an error to let the user know that their system cannot be trusted, or the boot process can even be fully halted if that is desired.

Securing your firmware using measured boot works a bit differently and involves a TPM. The idea with measured boot is that before a component is used, it is hashed, and the hash is stored in the extend-only PCR registers of the TPM (a sketch of this extend operation closes this section). When the boot process is done, the user reads out the TPM, and if the PCRs don't match what you expect, you know that the firmware has been tampered with. A common use case is reading out the TPM remotely, independent of the host: if the TPM PCR values don't match, that system won't be allowed access to the main network. The admin can then take a look at that system, fix potential issues and allow it back online after those have been resolved.

Historically, Google first added verified boot to coreboot with their VBOOT implementation. Measured boot was later added as an optional feature of VBOOT, and can now be used independently from it. Google's VBOOT was built with ChromeOS devices in mind. Those devices use very varied hardware: different SoCs from Intel, AMD and many ARM SoC vendors. To support all those different SoCs, Google uses a common flash-based root of trust. The root of trust lies in a read-only region of the flash, a feature of some SPI flash parts, which is kept in place by holding the /WP pin low. The verification mechanism resides in the read-only region and verifies the firmware in the FW_MAIN_A/B FMAP partitions. The RO region of the flash also holds a full firmware for recovery, and this copy of the firmware is likewise considered trusted.

Such a VBOOT setup does not work that well with a root-of-trust method like CBnT/Bootguard. With Bootguard, some initial parts of the firmware are marked as IBB and the ACM will verify those; it's up to the firmware and assets in those IBBs to continue the chain of trust and verify the next components that will be loaded. A fully trusted recovery image in the 'RO' region would need to be marked as IBB, but CBnT/Bootguard were not designed for that, as the IBBs would then be too big. Only code that is required to set up a chain of trust ought to be marked as IBB, not a full firmware in the 'RO' region. In more practical terms: if the bootblock is marked as IBB with Bootguard, the romstage that comes after it cannot be a romstage in the 'RO' FMAP region, as there is no verification on it. VBOOT needs to be modified to only load things from FW_MAIN_A/B. Just not populating the RO FMAP with a romstage is not sufficient: an attacker could take a working Bootguard image and manually add a romstage to the RO CBFS. The solution is to disable the option for a full recovery bootpath in VBOOT. A note about the future: some work is being done on per-CBFS-file verification. This would fit the Bootguard use case much better, as it removes the need to be careful about what cannot be in the VBOOT RO region.

Another difficulty lies in what to mark as IBB. An obvious one is the bootblock, as that code gets executed 'first' (well, after the ACM has run). But the bootblock accesses other assets, typically quite early in the bootflow. The CPU starts in a bare state: there is no RAM! A solution is to use CPU cache as RAM, but this setup is rather tricky and the details are not always public, so sometimes you are obliged to use Intel's FSP-T to set up an environment in which you can execute C code. Calling FSP-T therefore happens in assembly, and for this reason verification of the FSP-T binary cannot happen this early. Even finding FSP-T causes problems! FSP-T is a CBFS file, and to find CBFS files you have to walk the image from bottom to top until you find the proper file. This is prone to attacks: someone can modify the image/CBFS such that other, non-trusted code gets run instead. The solution is to place FSP-T at an address known at build time, which the bootblock code jumps to. FSP-T also needs to be trusted, so it has to be marked as IBB and verified by the ACM.

OK, now we are in a C environment, but we're not there yet. We need to set things up such that we can verify the next parts of the boot process. For that we need the public key, which lies in the "GBB" FMAP partition. FMAP partitions are found via the "FMAP" partition, whose location is known at build time. So again, both "FMAP" and "GBB" need to be marked as IBB, to be verified by the ACM. With VBOOT there is the option to do the verification in a separate stage, the verstage. Same problem here too: it's a CBFS file which can only be found at runtime. Here the solution is to link most of the verstage code into the bootblock. As it turns out, this is even a good idea for most x86 platforms using a Google VBOOT setup: you have one stage less, so less code duplication; it saves some space and is likely a tiny bit faster, as less flash needs to be accessed, which is a slow operation. Other things are done in the bootblock too, like setting up a console. The verbosity of the console is sometimes fetched with board-specific methods relying on other parts of the flash; so again, this needs to be fetched from a location known at build time and marked as an IBB, or simply avoided or done later in the boot process.

So the conclusion is that all assets used before the chain-of-trust setup code runs (VBOOT setup or measured boot TPM setup) need to be referenced statically. Searching for them cannot be done, and they need to be marked as IBB with Bootguard.
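As promised above, here is what the TPM extend operation at the heart of measured boot boils down to. Real code issues a TPM2_PCR_Extend command to the TPM instead of computing the hash itself, so this is just the concept; sha256() is an assumed crypto helper:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

extern void sha256(const void *buf, size_t len, uint8_t out[32]);

/* new PCR = H(old PCR || measurement): the register can only ever be
 * extended, never rewritten, so the PCR value encodes the whole
 * history of measurements in order. */
void pcr_extend(uint8_t pcr[32], const void *component, size_t len)
{
	uint8_t in[64];
	uint8_t measurement[32];

	sha256(component, len, measurement);
	memcpy(in, pcr, 32);
	memcpy(in + 32, measurement, 32);
	sha256(in, sizeof(in), pcr);
}
```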

Converged security suite

CSS is an open-source project maintained by 9elements. It is written in Go, which makes it quite portable. It's a set of tools and libraries related to firmware and firmware security. One such tool is cbnt-prov. It is integrated into the coreboot build system and can properly set things up for Intel CBnT by generating a KM and BPM. It parses a coreboot image and automatically detects which segments need to be marked as IBB. It is, however, not just a coreboot-specific tool to glue things together for CBnT: it supports dumping information on the CBnT setup of generic UEFI images too. It can also take an existing setup and turn it into a configuration file, which can be reused later on, for instance if you want to deploy the same firmware with different keys. One last important feature is the ability to validate an existing image. We are working hard on an equivalent tool for Bootguard that will be called bg-prov, and we hope to get it ready for production soon. This is a big step forward in the usability of coreboot, as previously you were bound to proprietary tools provided by Intel that were only accessible under NDA and had usability issues, being Microsoft Windows executables. Coreboot is the best open-source x86 firmware to this day, and having fully free and open-source software to cover the common use case of Intel Bootguard and CBnT makes coreboot a more attractive firmware solution. We hope that this improves its market adoption!

The Future of Open-Source Firmware on Server Systems

Open-source firmware for host processors is already quite prominent in the embedded world: a lot of systems run on U-Boot, and nearly every Chromebook not older than 5 years runs on coreboot (surprise, surprise!). In addition, Intel officially supports coreboot and their Firmware Support Package (FSP) in API mode, which is mandatory for coreboot support. So the future is looking bright for the embedded world - but what does the server market look like? Disclaimer: All information has been gathered from public documents or conference talks - none of it has been officially confirmed by any of the SoC vendors.
The answer is: It depends. It depends on whom you're talking to and which SoC vendor you are talking about.

Intel

Intel announced at OCP Tech Week 2020 (again) that they will support FSP & coreboot on IceLake platforms and beyond - that means the new upcoming platform Sapphire Rapids SP will also be supported with coreboot and FSP. Intel has made quite a transformation over the latest generations of Xeon-SP platforms. There is a proof of concept with coreboot and FSP for Skylake-SP on the two-socket server platform OCP Tioga Pass. This landed upstream in the coreboot repositories and is still maintained and functional. However, the FSP needed to build the platform is still under NDA and will not be maintained by Intel anymore.
[Image: OCP Community logo - 9elements is supporting the OCP as an OCP Community Member]
The next step was to enable coreboot on the current generation, Cooper Lake. This was done on the OCP Delta Lake server, which is a single-socket server running coreboot and FSP. Interestingly, the OCP Delta Lake is the first server to pass the OCP Open System Firmware (OCP OSF) guidelines, which are now mandatory for new platforms to get the OCP Accepted certification. More information on the OCP OSF initiative can be found online.
[Image: Summary of the Intel roadmap for Xeon-SP platforms]
On the latest OCP OSF project call, from April 29th, 2021, AMI, one of the biggest closed-source IBVs, stated that it would get involved in Open-System Firmware. This should give the OSF community even more confidence that Intel is holding to its open-source firmware strategy. What a bright future!

ARM

Ampere Computing is a regular guest on the OCP OSF call and presented their LinuxBoot solution at the latest OCP Tech Week 2020. Ampere's Arjun Khare also presented the latest open-source firmware efforts at FOSDEM 2021 - by nature, ARM has always been more open when it comes to firmware than the x86 SoC vendors. Ampere is definitely one of the companies to watch in the open-source firmware space. In general, ARM is picking up more and more speed in the server world and might move into the broader market.

AMD

Even though AMD is heavily involved in open-source firmware on their consumer platforms, nothing is publicly known yet about their efforts to support open-source firmware on server platforms. Our friends from 3mdeb gave a presentation at FOSDEM 2021 about the current state of OSF on AMD platforms. Bottom line: nothing is publicly known, but AMD is hiring coreboot developers (mainly for their mobile line), and rumors go around that they're working on something. One main push could have been Ron's presentation at the Open Source Firmware Conference 2020 on booting an AMD Rome server board with open-source firmware - this caught quite some attention. Still, AMD has not made any information public - so we need to wait and see if there is more to come.

TL;DR

Intel is currently one of the top-pushing companies in the open-source firmware space. The OCP's Open System Firmware initiative is also redefining the boundaries for server systems. Overall, we do see quite some movement in the open-source firmware world - however, most of the information is still not publicly confirmed and can only be shared under NDA. We hope this changes in the future. 9elements has a good working relationship with various SoC vendors - we specialize in building open-source firmware for scalable server systems and are able to support the newest generations. We work on a regular basis with OCP and other scalable server systems. If you would like to talk about OSF on a server system - get in touch with us!

Announcing coreboot 4.14

coreboot 4.14 was released today, on May 10th, 2021.

Since 4.13 there have been 3660 new commits by 215 developers.
Of these, about 50 contributed to coreboot for the first time.
Welcome to the project!

These changes have been all over the place, so there's no particular
area to focus on when describing this release: we had improvements
to mainboards, to chipsets (including much-welcomed work on open
source implementations of what were previously blobs), and to the
overall architecture.

Thank you to all developers who made coreboot the great open source
firmware project that it is, and made our code better than ever.

New mainboards
--------------

* AMD Bilby
* AMD Majolica
* GIGABYTE GA-D510UD
* Google Blipper
* Google Brya
* Google Cherry
* Google Collis
* Google Copano
* Google Cozmo
* Google Cret
* Google Drobit
* Google Galtic
* Google Gumboz
* Google Guybrush
* Google Herobrine
* Google Homestar
* Google Katsu
* Google Kracko
* Google Lalala
* Google Makomo
* Google Mancomb
* Google Marzipan
* Google Pirika
* Google Sasuke
* Google Sasukette
* Google Spherion
* Google Storo
* Google Volet
* HP 280 G2
* Intel Alderlake-M RVP
* Intel Alderlake-M RVP with Chrome EC
* Intel Elkhartlake LPDDR4x CRB
* Intel shadowmountain
* Kontron COMe-mAL10
* MSI H81M-P33 (MS-7817 v1.2)
* Pine64 ROCKPro64
* Purism Librem 14
* System76 darp5
* System76 galp3-c
* System76 gaze15
* System76 oryp5
* System76 oryp6

Removed mainboards
------------------

* Google Boldar
* Intel Cannonlake U LPDDR4 RVP
* Intel Cannonlake Y LPDDR4 RVP

Deprecations and incompatible changes
-------------------------------------

### SAR support in VPD for Chrome OS

SAR support in VPD has been deprecated for Chrome OS platforms for > 1
year now. All new Chrome OS platforms have switched to using SAR
tables from CBFS. For the next release, coreboot is updated to align
with the Chrome OS factory changes and hence SAR support in VPD is
deprecated in [CB:51483](https://review.coreboot.org/51483). Starting
with this release, anyone building coreboot for an already released
Chrome OS platform with SAR table in VPD will have to extract the
"wifi_sar" key from VPD and add it as a file to CBFS using following
steps:
 * On DUT, read SAR value using `vpd -i RO_VPD -g wifi_sar`
 * In coreboot repo, generate CBFS SAR file using:
    `echo ${SAR_STRING} > site-local/${BOARD}-sar.hex`
 * Add to site-local/Kconfig:
    ```
     config WIFI_SAR_CBFS_FILEPATH
       string
       default "site-local/${BOARD}-sar.hex"
    ```

### CBFS stage file format change

[CB:46484](https://review.coreboot.org/46484) changed the in-flash
file format of coreboot stages to prepare for per-file signature
verification. As described in the commit message in more details,
when manipulating stages in a CBFS, the cbfstool build must match the
coreboot image so that they're using the same format: coreboot.rom
and cbfstool must be built from coreboot sources that either both
contain this change or both do not contain this change.

Since stages are usually only handled by the coreboot build system
which builds its own cbfstool (and therefore it always matches
coreboot.rom) this shouldn't be a concern in the vast majority of
scenarios.

Significant changes
-------------------

### AMD SoC cleanup and initial Cezanne APU support

There's initial support for the AMD Cezanne APUs in the tree. This code
hasn't started as a copy of the previous generation, but was based on a
slightly modified version of the example/min86 SoC. During the cleanup
of the existing Picasso SoC code the common parts of the code were
moved to the common AMD SoC code, so that they could be used by the
Cezanne code instead of adding another slightly different copy.

### X86 bootblock layout

The static size C_ENV_BOOTBLOCK_SIZE was mostly dropped in favor of
dynamically allocating the stage size; the Kconfig is still available
to use as a fixed size and to enforce a maximum for selected chipsets.
Linker sections are now top-aligned for a reduced flash footprint and to
maintain the requirement of a near jump from the reset vector.

### ACPI GNVS framework

SMI handlers for APM_CNT_GNVS_UPDATE were dropped; the GNVS pointer to SMM is
now passed from within SMM_MODULE_LOADER. Allocation and initialisations
for common ACPI GNVS table entries were largely moved to one centralized
implementation.

### Intel Xeon Scalable Processor support is now considered mature

Intel Xeon Scalable Processor (Xeon-SP) family [1] is designed
primarily to serve the needs of the server market.

coreboot support for Xeon-SP is in src/soc/intel/xeon_sp directory.
This release has support for SkyLake-SP (SKX-SP), which is the 2nd
generation, and for CooperLake-SP (CPX-SP), which is the 3rd and
latest generation [2] on the market.

With this release, the codebase for multiple generations of Xeon-SP
was unified and optimized:
* SKX-SP SoC code is used in OCP TiogaPass mainboard [3]. Support for
this board is in Proof Of Concept Status.
* CPX-SP SoC code is used in OCP DeltaLake mainboard. Support for
this board is in DVT (Design Validation Test) exit equivalent status.
Supported features, (performance/stability) test scopes, known issues,
and feature gaps are described in [4].


[1] https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html
[2] https://www.intel.com/content/www/us/en/products/docs/processors/xeon/3rd-gen-xeon-scalable-processors-brief.html
[3] ../mainboard/ocp/tiogapass.md
[4] ../mainboard/ocp/deltalake.md

Netboot.xyz is now Part of LinuxBoot

We recently worked on some patches to adopt netboot.xyz and integrate it into LinuxBoot - and they have now been merged.

Netboot.xyz

So - what is netboot.xyz? From their website:
netboot.xyz is a way to PXE boot various operating system installers or utilities from one place within the BIOS without the need of having to go retrieve the media to run the tool.
In our last blog article we already pointed out some development work and what motivated us - basically, we need a reliable way to install operating systems on machines sitting either somewhere in a rack not accessible to us, or which do not have any external USB ports. Our former way was to build a busybox image which downloads a disk image containing a minimal Linux operating system into RAM. Once downloaded, we would dd the image onto a hard drive - and off you go. However, that approach needed a lot of manual tooling and adjustment for the platform we were currently working on - and netboot.xyz already has a process in place - so adapting this to u-root just seems logical. It's open source, that's the idea, right?

netboot.xyz Image Generation Process

netboot.xyz already has an image processing and generation process in place which we will use to download the images from u-root.
[Image: Abstract image processing and generation process from netboot.xyz]
All the assets generated by the netboot.xyz build pipelines are accumulated in one .yaml file, which can be found on GitHub. This endpoints.yaml file contains kernel, initrd and squashfs locations in the following manner:
ubuntu-19.10-live-kernel:
    path: /ubuntu-core-19.10/releases/download/19.10-055f9330/
    files:
    - initrd
    - vmlinuz
    os: ubuntu
    version: '19.10'
[...]    
ubuntu-19.10-KDE-squash:
    path: /ubuntu-squash/releases/download/9854741e-b243fefb/
    files:
    - filesystem.squashfs
    os: ubuntu
    version: '19.10'
    flavor: KDE
    kernel: ubuntu-19.10-live-kernel
This endpoints.yaml file is used to build the u-root netboot.xyz menu:
[Image: netboot.xyz menu in u-root]
Typing the number of the OS opens the submenu, which lets you choose the version of the operating system you want to boot.
Typing in e.g. 07 will boot Debian 10 Core. Be aware: only some major distributions have been tested and verified working - everything in the Other menu can be deemed experimental and might not work properly.
netboot.xyz provides a convenient way to boot into a live system on your machine. As we work a lot with server machines to which we do not have direct hardware access, merging netboot.xyz into u-root gives us an easy way to install an operating system on a remote machine during development. If you'd like to know more about netboot.xyz, check out their homepage. The corresponding code in u-root can be found here. If you'd like to talk with us about firmware - feel free to contact us!

A short journey to x86 long mode in coreboot on recent Intel platforms

While it was difficult to add initial x86_64 support to coreboot, as described in my last blog article how-to-not-add-x86_64-support-to-coreboot, it was way easier on real hardware. During the OSFC we did a small hackathon at 9elements and got x86_64 working in coreboot on recent Intel platforms. If you want to test new code that deals with low-level stuff, like enabling x86_64 mode in assembly, it's always good to test it on QEMU using KVM. It runs the code in ring 0 instead of emulating every single instruction, and thus is very close to bare-metal machines.

$ qemu-system-x86_64 -M q35 -accel kvm -bios build/coreboot.rom

But running coreboot's x86_64 code on KVM gave more magic errors than you could find in books about some famous magician with a scarf on his forehead. To summarize:
  • On recent AMD platforms it stops after entering x86_64 long mode.
  • On older Intel platforms everything works.
  • On recent Intel platforms, after entering long mode every instruction causes a fault, and thus the instruction is emulated by the kernel, which doesn't handle FPU instructions that well...
  • On recent Intel platforms the MMU aborts walking page tables and returns the data within the page table itself when looking up some virtual addresses...
Tests on real hardware, in this case the X11SSH-TF, showed none of the problems above. After fixing integer/void-pointer conversions (CB:48166), (CB:48167), (CB:48177) and adding assembly code to enable long mode in cache-as-RAM (CB:48170), it booted straight to romstage. First boot console log:

```
coreboot-4.13-241-g52ab788549-dirty Tue Dec 1 18:23:08 UTC 2020 bootblock starting (log level: 7)...
CPU: Intel(R) Xeon(R) CPU E3-1240 v6 @ 3.70GHz
CPU: ID 906e9, Kabylake H B0, ucode: 000000d5
CPU: AES supported, TXT supported, VT supported
MCH: device id 5918 (rev 05) is Kabylake DT 2
PCH: device id a149 (rev 31) is Skylake PCH-H C236
IGD: device id ffff (rev ff) is Unknown
FMAP: Found "FLASH" version 1.1 at 0xb10000.
FMAP: base = 0xff000000 size = 0x1000000 #areas = 4
FMAP: area COREBOOT found @ b10200 (5176832 bytes)
CBFS: Found 'fallback/romstage' @0x80 size 0xe334
BS: bootblock times (exec / console): total (unknown) / 53 ms
```

```
coreboot-4.13-241-g52ab788549-dirty Tue Dec 1 18:23:08 UTC 2020 romstage starting (log level: 7)...
pm1_sts: 0900 pm1_en: 4000 pm1_cnt: 00000000
gpe0_sts[0]: 00000000 gpe0_en[0]: 00000000
gpe0_sts[1]: 00000000 gpe0_en[1]: 00000000
gpe0_sts[2]: 00000000 gpe0_en[2]: 00000000
gpe0_sts[3]: 00000000 gpe0_en[3]: 00000000
TCO_STS: 0000 0000
GEN_PMCON: e0810200 000018c8
GBLRST_CAUSE: 00000002 00000000
prev_sleep_state 0
FMAP: area COREBOOT found @ b10200 (5176832 bytes)
CBFS: Found 'fspm.bin' @0x5fdc0 size 0x63000
POST: 0x34
FMAP: area RW_MRC_CACHE found @ b00000 (65536 bytes)
MRC: no data in 'RW_MRC_CACHE'
No memory dimm at address A2
No memory dimm at address A4
POST: 0x36
POST: 0x92
```

It hung at entering FSP-M, which, as it's a binary blob, wasn't automatically recompiled for x86_64. A wrapper (CB:48175), written in assembly, fixed the problem by falling back to x86_32 when calling into FSP. The wrapper automatically switches back into x86_64 mode when the function returns. This is slow, but as we don't have proper blobs, there's no other way around it. Memory init console log:

```
coreboot-4.13-242-g04129be978-dirty Tue Dec 1 18:42:20 UTC 2020 romstage starting (log level: 7)...
pm1_sts: 0900 pm1_en: 0000 pm1_cnt: 00000000
gpe0_sts[0]: 00000000 gpe0_en[0]: 00000000
gpe0_sts[1]: 00000000 gpe0_en[1]: 00000000
gpe0_sts[2]: 00000000 gpe0_en[2]: 00000000
gpe0_sts[3]: 00000000 gpe0_en[3]: 00000000
TCO_STS: 0000 0000
GEN_PMCON: e0810200 000018c8
GBLRST_CAUSE: 00000002 00000000
prev_sleep_state 0
FMAP: area COREBOOT found @ b10200 (5176832 bytes)
CBFS: Found 'fspm.bin' @0x5fdc0 size 0x63000
POST: 0x34
FMAP: area RW_MRC_CACHE found @ b00000 (65536 bytes)
MRC: no data in 'RW_MRC_CACHE'
No memory dimm at address A2
No memory dimm at address A4
POST: 0x36
POST: 0x92
POST: 0x98
FspMemoryInit returned 0x80000002
POST: 0xe3
FspMemoryInit returned an error!
```

The FSP was now able to run, but it returned the error Invalid parameter. This was due to the fact that FSP's config structures contain void pointers, which have a different size on x86_64 and don't match what FSP expects. Fixing those headers is an ongoing task, but it was hacked around for now. SMM stack trash console log:

```
IOAPIC: Initializing IOAPIC at 0xfec00000
IOAPIC: Bootstrap Processor Local APIC = 0x00
IOAPIC: ID = 0x02
PCI: 00:1f.0 init finished in 9 msecs
POST: 0x75
POST: 0x75
PCI: 00:1f.2 init
RTC Init
Set power on after power failure.
Disabling ACPI via APMC.
```

```
coreboot-4.13-241-g52ab788549-dirty Tue Dec 1 18:23:08 UTC 2020 smm starting (log level: 7)...
SMI_STS: PM1 APM
SMI#: ACPI disabled.
canary 0xcdcdcdcd7f9ff800 != 0x7f9ff800
SMM Handler caused a stack overflow
```

Finally it booted into SMM, but crashed due to stack trashing. That turned out to be a false positive: the stack canary is the size of a void pointer and is written in x86_32 assembly, but checked in x86_64 C code, so the check failed. Writing 4 additional bytes in assembly fixed the stack canary check and it finally booted. (CB:48215) patch:

```
 /* Write canary to the bottom of the stack */
 movl	stack_size, %eax
 subl	%eax, %ebx	/* %ebx(stack_top) - size = %ebx(stack_bottom) */
 movl	%ebx, (%ebx)
+#if ENV_X86_64
+	movl	$0, 4(%ebx)
+#endif
```

Summarizing: it took about a day to add x86_64 support, and half of the code needed to be written in assembly. With those patches in place it should be easier to port additional platforms to x86_64, reducing the time to a few hours. I invite everyone to play with the changes, hack the code and improve it to make this open source project even more awesome.

Announcing coreboot 4.13

coreboot 4.13 was released on November 20th, 2020.

Since 4.12 there were 4200 new commits by over 234 developers. Of these, about 72 contributed to coreboot for the first time.

Thank you to all developers who again helped make coreboot better than ever, and a big welcome to our new contributors!

New mainboards

  • Acer G43T-AM3
  • AMD Cereme
  • Asus A88XM-E FM2+
  • Biostar TH61-ITX
  • BostenTech GBYT4
  • Clevo L140CU/L141CU
  • Dell OptiPlex 9010
  • Example Min86 (fake board)
  • Google Ambassador
  • Google Asurada
  • Google Berknip
  • Google Boldar
  • Google Boten
  • Google Burnet
  • Google Cerise
  • Google Coachz
  • Google Dalboz
  • Google Dauntless
  • Google Delbin
  • Google Dirinboz
  • Google Dooly
  • Google Drawcia
  • Google Eldrid
  • Google Elemi
  • Google Esche
  • Google Ezkinil
  • Google Faffy
  • Google Fennel
  • Google Genesis
  • Google Hayato
  • Google Lantis
  • Google Lindar
  • Google Madoo
  • Google Magolor
  • Google Metaknight
  • Google Morphius
  • Google Noibat
  • Google Pompom
  • Google Shuboz
  • Google Stern
  • Google Terrador
  • Google Todor
  • Google Trembyle
  • Google Vilboz
  • Google Voema
  • Google Volteer2
  • Google Voxel
  • Google Willow
  • Google Woomax
  • Google Wyvern
  • HP EliteBook 2560p
  • HP EliteBook Folio 9480m
  • HP ProBook 6360b
  • Intel Alderlake-P RVP
  • Kontron COMe-bSL6
  • Lenovo ThinkPad X230s
  • Open Compute Project DeltaLake
  • Prodrive Hermes
  • Purism Librem Mini
  • Purism Librem Mini v2
  • Siemens Chili
  • Supermicro X11SSH-F
  • System76 lemp9

Removed mainboards

  • Google Cheza
  • Google DragonEgg
  • Google Ripto
  • Google Sushi
  • Open Compute Project SonoraPass

Significant changes

Native refcode implementation for Bay Trail

Bay Trail no longer needs a refcode binary to function properly. The refcode was reimplemented as coreboot code, which should be functionally equivalent. Thus, coreboot only needs to run the MRC.bin to successfully boot Bay Trail.

Unusual config files to build test more code

There’s some new highly-unusual config files, whose only purpose is to coerce Jenkins into build-testing several disabled-by-default coreboot config options. This prevents them from silently decaying over time because of build failures.

Initial support for Intel Trusted eXecution Technology

coreboot now supports enabling Intel TXT. Though it’s not feature-complete yet, the code allows successfully launching tboot, a Measured Launch Environment. It was tested on Haswell using an Asrock B85M Pro4 mainboard with TPM 2.0 on LPC. Though support for other platforms is still not ready, it is being worked on. The Haswell MRC.bin needs to be patched so as to enable DPR. Given that the MRC binary cannot be redistributed, the best long-term solution is to replace it.

Hidden PCI devices

This new functionality takes advantage of the existing ‘hidden’ keyword in the devicetree. Since no existing boards were using the keyword, its usage was repurposed to make dealing with some unique PCI devices easier. The particular case here is Intel’s PMC (Power Management Controller). During the FSP-S run, the PMC device is made hidden, meaning that its config space looks as if there is no device there (Vendor ID reads as 0xFFFF_FFFF). However, the device does have fixed resources, both MMIO and I/O. These were previously recorded in different places (MMIO was typically an SA fixed resource, and I/O was treated as an LPC resource). With this change, when a device in the tree is marked as ‘hidden’, it is not probed (pci_probe_dev()) but rather assumed to exist so that its resources can be placed in a more natural location. This also adds the ability for the device to participate in SSDT generation.

Tools for generating SPDs for LP4x memory on TGL and JSL

A set of new tools gen_spd.go and gen_part_id.go are added to automate the process of generating SPDs for LP4x memory and assigning hardware strap IDs for memory parts used on TGL and JSL based boards. The SPD data obtained from memory part vendors has to be massaged to format it correctly as per JEDEC and Intel MRC expectations. These tools take a list of memory parts describing their physical attributes as per their datasheet and convert those attributes into SPD files for the platforms. More details about the tools are added in README.md.

New version of SMM loader

A new version of the SMM loader which accommodates platforms with over 32 CPU threads. The existing version of SMM loader uses a 64K code/data segment and only a limited number of CPU threads can fit into one segment (because of save state, STM, other features, etc). This loader extends beyond the 64K segment to accommodate additional CPUs and in theory allows as many CPU threads as possible limited only by SMRAM space and not by 64K. By default this loader version is disabled. Please see cpu/x86/Kconfig for more info.

Address Sanitizer

coreboot now has an in-built Address Sanitizer, a runtime memory debugger designed to find out-of-bounds accesses and use-after-scope bugs. It is made available on all x86 platforms in ramstage, and on QEMU i440fx, Intel Apollo Lake, and Haswell in romstage. Further, it can be enabled in romstage on other x86 platforms as well. Refer to the ASan documentation for more info.

Initial support for x86_64

The x86_64 code support has been revived and enabled for QEMU. While it started as a PoC and the only supported platform is an emulator, there’s interest in enabling additional platforms. It would allow access to more than 4 GiB of memory at runtime and possibly bring optimised code for faster execution times. It still needs changes in assembly, fixed integer-to-pointer conversions in C, wrappers for blobs, support for running Option ROMs, among other things.

Preparations to minimize enabling PCI bus mastering

For security reasons, bus mastering should be enabled as late as possible. In coreboot, it’s usually not necessary and payloads should only enable it for devices they use. Since not all payloads enable bus mastering properly yet, some Kconfig options were added as an intermediate step to give some sort of “backwards compatibility”, which allow enabling or disabling bus mastering by groups.

Currently available groups are:

  • PCI bridges
  • Any devices

For now, “Any devices” is enabled by default to keep the traditional behaviour, which also includes all other options. This is currently necessary, for instance, for libpayload-based payloads as the drivers don’t enable bus mastering for PCI bridges.

Exceptional cases, that may still need early bus master enabling in the future, should get their own per-reason Kconfig option. Ideally before the next release.
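For context, enabling bus mastering for a device is a single bit in its PCI command register, which is why payloads can easily do it themselves for the devices they actually drive. A sketch in the style of libpayload's config accessors; treat the header and function names as assumptions:

```c
#include <stdint.h>
#include <pci.h>	/* assumed libpayload-style PCI accessors */

#define PCI_COMMAND		0x04
#define PCI_COMMAND_MASTER	(1 << 2)

/* The payload enables bus mastering only for the device it actually
 * uses, instead of firmware enabling it for everything up front. */
static void enable_bus_master(pcidev_t dev)
{
	uint16_t cmd = pci_read_config16(dev, PCI_COMMAND);
	pci_write_config16(dev, PCI_COMMAND, cmd | PCI_COMMAND_MASTER);
}
```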

Early runtime configurability of the console log level

Traditionally, we didn’t allow the log level of the romstage console to be changed at runtime (e.g. via get_option()). It turned out that the technical constraints for this (no global variables in romstage) vanished long ago, though. The new behaviour is to query get_option() now from the second stage that uses the console on. In other words, if the bootblock already enables the console, the romstage log level can be changed via get_option(). Keeping the log level of the first console static ensures that we can see console output even if there’s a bug in the more involved code to query options.
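A sketch of what that query looks like; "debug_level" is the conventional CMOS option name in coreboot, while the surrounding logic is simplified here:

```c
#include <option.h>

static int console_loglevel(void)
{
	unsigned int level = CONFIG_DEFAULT_CONSOLE_LOGLEVEL;

	/* From the second console-enabled stage on, the option backend
	 * (e.g. CMOS) may override the build-time default; on failure
	 * the default is left untouched. */
	get_option(&level, "debug_level");
	return level;
}
```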

Resource allocator v4

A new revision of the resource allocator, v4, has been added to coreboot. It supports multiple ranges for allocating resources: unlike the previous allocator (v3), it does not use the topmost available window, but rather the first window within the address space that satisfies the resource request. This allows utilizing the entire available address space and also allows allocation above the 4G boundary. The old resource allocator v3 is still retained for some AMD platforms that do not conform to the new allocator’s requirements.
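
The difference to v3 boils down to first-fit instead of top-down allocation; a self-contained sketch of the idea (not the actual allocator code):

    #include <stdint.h>
    #include <stddef.h>

    struct window { uint64_t base, limit; };    /* inclusive limit */

    /* v4-style first fit: walk the free windows in address order and
     * take the first one that can hold an allocation of `size` bytes
     * aligned to `align` (a power of two). The result may legitimately
     * lie above the 4G boundary. Returns 0 if nothing fits. */
    static uint64_t alloc_first_fit(const struct window *w, size_t n,
                                    uint64_t size, uint64_t align)
    {
        for (size_t i = 0; i < n; i++) {
            uint64_t base = (w[i].base + align - 1) & ~(align - 1);
            if (base >= w[i].base && base + size - 1 <= w[i].limit)
                return base;
        }
        return 0;
    }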

Deprecations

PCI bus master configuration options

In order to minimize the usage of PCI bus mastering, the options introduced in this release will be dropped again in a future release. For more details, please see Preparations to minimize enabling PCI bus mastering.

Resource allocator v3

Resource allocator v3 is retained in the coreboot tree because the following platforms do not conform to the requirements of resource allocation, i.e. not all fixed resources of the platform are provided during the read_resources() operation:

  • northbridge/amd/pi/00630F01
  • northbridge/amd/pi/00730F01
  • northbridge/amd/pi/00660F01
  • northbridge/amd/agesa/family14
  • northbridge/amd/agesa/family15tn
  • northbridge/amd/agesa/family16kb

In order to have a single unified allocator in coreboot, this notice is being added to ensure that the platforms listed above are fixed before the next release. If there is interest in maintaining support for these platforms beyond the next release, please ensure that the platforms are fixed to conform to the expectations of resource allocation.

[GSoC] libgfxinit: Add support for Bay Trail

Hello everyone. I’ve been working on adding Bay Trail support to libgfxinit as a GSoC project. Yes, as I don’t usually talk much outside of IRC and Gerrit, I imagine this post will come as a surprise to most people. Despite the journey being way more difficult than initially foreseen, I eventually managed to get most of what I could test on Bay Trail working, with next to no spaghetti-looking code.

The commits adding Bay Trail support to libgfxinit and integration with coreboot can be retrieved with this Gerrit query. Additionally, the coreboot port for the Asrock Q1900M mainboard used for testing can be found on this Gerrit change.

I ran into several problems while working on this GSoC project and submitted various fixes and improvements along the way; links to these commits can be found in later sections. Strictly speaking, these commits are not directly part of the GSoC project, but they grew out of it.

Unfortunately, I ran into multiple setbacks, which precluded me from completing everything I had originally planned within the GSoC timeframe:

  • Since I only managed to fix some bugs at the last minute, the code has not been formally verified yet. Nevertheless, formally verifying the code before it works and has been reviewed is rather pointless, since it would need to be verified again after amending it.
  • DisplayPort and integrated panel support could not be tested due to inaccessible hardware. I had to take a plane from the university campus to home in June, and it was impossible to squeeze both the Asrock Q1900M and the Asus X551MA into my luggage. I decided to bring the Asrock Q1900M, as it is more compact and easier to work with.
  • There was no time left to work on Braswell support. While Bay Trail and Braswell are somewhat related, there are many differences regarding the undocumented parts of the hardware, and there’s even less documentation.

Undeterred by any misfortunes, I am going to finish what I started, come hell or high water.

Project details

libgfxinit is a graphics initialization (aka modesetting) library for embedded environments. It currently supports only Intel hardware, more specifically the Intel Core processor line. It can query and set up most kinds of displays based on their EDID information. You can, however, also specify particular mode lines.

Support for the Intel Bay Trail platform was missing in libgfxinit. The code hasn’t landed upstream yet, so one needs to fetch it from Gerrit in order to use it. This involves fetching the libgfxinit patches first, using the checkout download option on CB:42359 (the topmost change), and then cherry-picking CB:44071 and CB:44072 into coreboot, as sketched below. CB:44938 and CB:39658 show how to enable libgfxinit for Bay Trail mainboards. Since the set of available video ports is mainboard-specific, gma-mainboard.ads needs to be adjusted accordingly.
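
The Gerrit web UI generates the exact commands, but the sequence looks roughly like this ("N" stands for the current patchset number and <repo> for the matching project on review.coreboot.org; both are filled in by the “download” menu on each change):

    git fetch https://review.coreboot.org/<repo> refs/changes/59/42359/N
    git checkout FETCH_HEAD      # libgfxinit patch stack (CB:42359)
    git fetch https://review.coreboot.org/<repo> refs/changes/71/44071/N
    git cherry-pick FETCH_HEAD   # coreboot integration (CB:44071)
    git fetch https://review.coreboot.org/<repo> refs/changes/72/44072/N
    git cherry-pick FETCH_HEAD   # coreboot integration (CB:44072)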

Trials and Tribulations

The hardware is cursed

Getting software to work usually takes some testing, but when said software interacts with hardware, testing becomes essential. And when said hardware is largely undocumented, testing is pretty much the only option. The Display chapters of the graphics programming manuals for these platforms lack the information that matters for libgfxinit. Even the Bay Trail documentation turned out to be incomplete, especially regarding the display PHY and PLL registers. When working on libgfxinit, I soon got Bay Trail to show something on a monitor. However, making that work reliably took much longer than I expected. This was mainly because I needed to spend at least a day or two without looking at the code to see what was wrong with it.

Said PHY and PLL registers are hidden within IOSF-SB, a sideband interconnect network accessed through a mailbox-style interface. To access a register, one not only needs to read or write the register contents, but also to program the destination port (the address of the hardware block), the opcode (which type of read or write) and the register offset, and then poll a busy bit until the operation completes. Of course, this register access mechanism is not described in the graphics documentation, so the only references are existing graphics drivers. Reading someone else’s code in order to understand what the documentation should say is, at best, downright painful.
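
As a rough sketch of what such a mailbox access looks like (the MMIO offsets and bitfield layout are modelled on what the Linux i915 driver does for the Bay Trail sideband; treat them as illustrative, not authoritative):

    #include <stdint.h>

    #define SB_REQUEST 0x182100    /* busy bit, port, opcode, byte enables */
    #define SB_DATA    0x182104
    #define SB_ADDR    0x182108
    #define SB_BUSY    (1u << 0)

    /* Assumed MMIO helpers into the GPU's MMIO BAR. */
    extern uint32_t mmio_read32(uint32_t off);
    extern void mmio_write32(uint32_t off, uint32_t val);

    static uint32_t sideband_read(uint8_t port, uint8_t opcode, uint32_t reg)
    {
        while (mmio_read32(SB_REQUEST) & SB_BUSY)
            ;                          /* wait until the mailbox is idle */
        mmio_write32(SB_ADDR, reg);    /* register offset inside the unit */
        mmio_write32(SB_REQUEST, SB_BUSY | ((uint32_t)opcode << 16) |
                     ((uint32_t)port << 8) | (0xf << 4) /* byte enables */);
        while (mmio_read32(SB_REQUEST) & SB_BUSY)
            ;                          /* poll the busy bit for completion */
        return mmio_read32(SB_DATA);
    }

Getting any one of those bitfields wrong silently routes the access to the wrong port, which is exactly the blunder described below.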

After I managed to get something to show on the screen, I noticed that this would only work in very specific system states. In addition, manually writing several undocumented registers (using the intel_reg utility) before running gfx_test would sometimes help. I eventually figured out that most of the accesses to undocumented registers did not have any effect because of a blunder in the IOSF accessor library I wrote: I messed up the bitfields when assembling the request register (which contains the target port, the opcode and some always-one bits), so the accesses would often end up going to the wrong port.

There’s always more bugs

The Bay Trail code in coreboot was only used by a single mainboard: the Google Rambi chromebook/chromebox/chromebase family. Memory initialization is done by a binary-only executable containing Intel’s MRC (Memory Reference Code). This binary is usually called the “MRC binary” or mrc.bin (the file’s name). However, on Bay Trail it is actually an ELF binary, and coreboot’s Makefile places it at a different offset depending on the file extension. So, Bay Trail has an mrc.elf instead: using the mrc.bin name places the binary at the wrong offset, and it won’t work.
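
For a board port, the practical upshot is just pointing the build at the blob under its ELF name; a hypothetical config snippet (the MRC_FILE option name and the path are assumptions for illustration):

    # Note the .elf extension, which keeps the Makefile's placement
    # logic on the correct path:
    CONFIG_MRC_FILE="3rdparty/blobs/mainboard/asrock/q1900m/mrc.elf"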

Once this mystery was solved, MRC would still not detect any DIMMs on the Asrock Q1900M. It turns out that SMBus support in MRC is broken, so one needs to read the SPD contents into a buffer and then pass that buffer to MRC. CB:44092 takes care of that.

Even then, MRC would still refuse to work on the Asrock Q1900M. After some digging, I found that it checks the memory type in the SPD and bails out if the modules are not SO-DIMMs or do not support 1.35V operation. The Asrock Q1900M uses full-size desktop UDIMMs, which may not always support 1.35V operation. CB:39568 patches the necessary values in the SPD buffer so that MRC will function as intended.
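
Conceptually, the workaround amounts to the following sketch (byte offsets per the DDR3 SPD specification; the function itself is made up for illustration):

    #include <stdint.h>

    #define SPD_MODULE_TYPE    3       /* key byte: module type */
    #define SPD_NOMINAL_VDD    6       /* voltage support flags */
    #define MODULE_TYPE_SODIMM 0x03
    #define VDD_135_OPERABLE   (1 << 1)

    /* Make a desktop UDIMM's SPD palatable to Bay Trail MRC: claim it
     * is an SO-DIMM that can operate at 1.35V, then hand MRC this
     * buffer instead of letting it (unsuccessfully) use SMBus. */
    static void fixup_spd_for_mrc(uint8_t spd[256])
    {
        spd[SPD_MODULE_TYPE] = MODULE_TYPE_SODIMM;
        spd[SPD_NOMINAL_VDD] |= VDD_135_OPERABLE;
    }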

Externally-induced translocation

I don’t mind running errands or going out in general. However, I do utterly despise having to move and live elsewhere: I have to pack my computers and parts, and I have lots of them. I live in an archipelago; I don’t have my own house or car yet, and my parents’ home and the university campus aren’t on the same island. Dad usually comes with his car at the start and end of the school year, as I need to move lots of stuff. Oh, and my parents are divorced, so my sister and I go back and forth when not abroad.

Because of the coronavirus outbreak, in-person lessons were suspended for the rest of the academic year. Most people living on the university campus (there’s a students’ residence there) went back home almost immediately. I didn’t, because I didn’t feel like taking a plane amidst the outbreak and preferred to stay in my cozy, server-filled dorm room. However, as there were no more in-person lessons, the residence had to close, and I eventually had to leave. Moreover, Dad couldn’t come this time because he was overwhelmed with work (he was unable to work during the lockdown, so everything piled up until it ended). So, I had to take a plane and leave most of my stuff in the residence, including all of my monitors with digital inputs and one of my two Bay Trail machines, which I had planned on using for GSoC.

And if that wasn’t enough, I’ve had to pack my things again every week, which means each week only had six useful days at best. This, plus everything else going on at home, quickly burned me out. It reached a point where I couldn’t bear it any longer and had to take a two-week break from coreboot development.

Conclusion

Although there were many unforeseen hurdles and problems around every corner, I would still call this a huge success. Just like my university assignments, it was rushed right down to the last minute.