Announcing coreboot 4.13

coreboot 4.13 was released on November 20th, 2020.

Since 4.12 there were 4200 new commits by over 234 developers. Of these, about 72 contributed to coreboot for the first time.

Thank you to all the developers who once again helped make coreboot better than ever, and a big welcome to our new contributors!

New mainboards

  • Acer G43T-AM3
  • AMD Cereme
  • Asus A88XM-E FM2+
  • Biostar TH61-ITX
  • BostenTech GBYT4
  • Clevo L140CU/L141CU
  • Dell OptiPlex 9010
  • Example Min86 (fake board)
  • Google Ambassador
  • Google Asurada
  • Google Berknip
  • Google Boldar
  • Google Boten
  • Google Burnet
  • Google Cerise
  • Google Coachz
  • Google Dalboz
  • Google Dauntless
  • Google Delbin
  • Google Dirinboz
  • Google Dooly
  • Google Drawcia
  • Google Eldrid
  • Google Elemi
  • Google Esche
  • Google Ezkinil
  • Google Faffy
  • Google Fennel
  • Google Genesis
  • Google Hayato
  • Google Lantis
  • Google Lindar
  • Google Madoo
  • Google Magolor
  • Google Metaknight
  • Google Morphius
  • Google Noibat
  • Google Pompom
  • Google Shuboz
  • Google Stern
  • Google Terrador
  • Google Todor
  • Google Trembyle
  • Google Vilboz
  • Google Voema
  • Google Volteer2
  • Google Voxel
  • Google Willow
  • Google Woomax
  • Google Wyvern
  • HP EliteBook 2560p
  • HP EliteBook Folio 9480m
  • HP ProBook 6360b
  • Intel Alderlake-P RVP
  • Kontron COMe-bSL6
  • Lenovo ThinkPad X230s
  • Open Compute Project DeltaLake
  • Prodrive Hermes
  • Purism Librem Mini
  • Purism Librem Mini v2
  • Siemens Chili
  • Supermicro X11SSH-F
  • System76 lemp9

Removed mainboards

  • Google Cheza
  • Google DragonEgg
  • Google Ripto
  • Google Sushi
  • Open Compute Project SonoraPass

Significant changes

Native refcode implementation for Bay Trail

Bay Trail no longer needs a refcode binary to function properly. The refcode was reimplemented as coreboot code, which should be functionally equivalent. Thus, coreboot only needs to run the MRC.bin to successfully boot Bay Trail.

Unusual config files to build test more code

There are some new, highly unusual config files whose only purpose is to coerce Jenkins into build-testing several disabled-by-default coreboot config options. This prevents those options from silently decaying over time because of build failures.

Initial support for Intel Trusted eXecution Technology

coreboot now supports enabling Intel TXT. Though it’s not feature-complete yet, the code allows successfully launching tboot, a Measured Launch Environment. It was tested on Haswell using an Asrock B85M Pro4 mainboard with a TPM 2.0 on LPC. Support for other platforms is not ready yet, but it is being worked on. The Haswell MRC.bin needs to be patched to enable the DPR (DMA Protected Range). Given that the MRC binary cannot be redistributed, the best long-term solution is to replace it.

Hidden PCI devices

This new functionality takes advantage of the existing ‘hidden’ keyword in the devicetree. Since no existing boards were using the keyword, its usage was repurposed to make dealing with some unique PCI devices easier. The particular case here is Intel’s PMC (Power Management Controller). During the FSP-S run, the PMC device is made hidden, meaning that its config space looks as if there is no device there (the Vendor/Device ID dword reads back as 0xFFFFFFFF). However, the device does have fixed resources, both MMIO and I/O. These were previously recorded in different places (MMIO was typically an SA fixed resource, and I/O was treated as an LPC resource). With this change, when a device in the tree is marked as ‘hidden’, it is not probed (pci_probe_dev()) but rather assumed to exist, so that its resources can be placed in a more natural location. This also adds the ability for the device to participate in SSDT generation.
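As a rough sketch of the idea (not the actual PMC driver code; the device path, resource indices, base addresses and sizes below are placeholders), a hidden device can report its fixed resources from its read_resources() callback so that the allocator and SSDT generator know about them:

    /* devicetree.cb (example path, marking the device as hidden):
     *   device pci 1f.2 hidden end
     */
    #include <device/device.h>
    #include <device/pci.h>

    static void hidden_dev_read_resources(struct device *dev)
    {
            struct resource *res;

            /* Fixed MMIO window (placeholder base/size). */
            res = new_resource(dev, 0);
            res->base = 0xfe000000;
            res->size = 64 * 1024;
            res->flags = IORESOURCE_MEM | IORESOURCE_FIXED |
                         IORESOURCE_ASSIGNED | IORESOURCE_STORED;

            /* Fixed I/O range (placeholder base/size). */
            res = new_resource(dev, 1);
            res->base = 0x1800;
            res->size = 0x100;
            res->flags = IORESOURCE_IO | IORESOURCE_FIXED |
                         IORESOURCE_ASSIGNED | IORESOURCE_STORED;
    }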

Tools for generating SPDs for LP4x memory on TGL and JSL

A set of new tools, gen_spd.go and gen_part_id.go, has been added to automate the process of generating SPDs for LP4x memory and assigning hardware strap IDs for the memory parts used on TGL (Tiger Lake) and JSL (Jasper Lake) based boards. The SPD data obtained from memory part vendors has to be massaged into the format expected by JEDEC and the Intel MRC. These tools take a list of memory parts describing their physical attributes as per their datasheets and convert those attributes into SPD files for the platforms. More details about the tools can be found in the accompanying README.md.

New version of SMM loader

A new version of the SMM loader accommodates platforms with more than 32 CPU threads. The existing version of the SMM loader uses a 64K code/data segment, and only a limited number of CPU threads can fit into one segment (because of the save state, STM, and other features). The new loader extends beyond the 64K segment to accommodate additional CPUs and, in theory, allows as many CPU threads as SMRAM space permits rather than being limited by the 64K segment. By default this loader version is disabled. Please see cpu/x86/Kconfig for more info.

Address Sanitizer

coreboot now has a built-in Address Sanitizer, a runtime memory debugger designed to find out-of-bounds accesses and use-after-scope bugs. It is available on all x86 platforms in ramstage, and on QEMU i440fx, Intel Apollo Lake, and Haswell in romstage. It can also be enabled in romstage on other x86 platforms. Refer to the ASan documentation for more information.
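As a contrived illustration of the class of bug ASan catches (hypothetical code, not from the coreboot tree), an off-by-one write like the following would be reported at runtime instead of silently corrupting neighbouring stack data:

    #include <console/console.h>

    static void fill_table(void)
    {
            int table[4];

            /* Off-by-one: valid indices are 0..3, so the last iteration
             * writes past the end of the array. With the address
             * sanitizer enabled, this store is detected and reported. */
            for (int i = 0; i <= 4; i++)
                    table[i] = i;

            printk(BIOS_DEBUG, "table[0] = %d\n", table[0]);
    }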

Initial support for x86_64

The x86_64 code support has been revived and enabled for QEMU. While it started as a proof of concept and the only supported platform so far is an emulator, there’s interest in enabling additional platforms. It would allow access to more than 4 GiB of memory at runtime and possibly bring optimised code for faster execution times. It still needs changes in assembly code, fixes for integer-to-pointer conversions in C, wrappers for blobs, support for running Option ROMs, among other things.
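One recurring class of fixes is integer-to-pointer conversions that assume a 32-bit address space. A hypothetical before/after (not taken from the actual patches) looks like this:

    #include <stdint.h>

    /* 32-bit-only habit: a physical address stored in a u32 cannot
     * represent memory above 4 GiB once the code runs in long mode. */
    void *phys_to_virt_broken(uint32_t addr)
    {
            return (void *)addr;
    }

    /* Portable version: uintptr_t is wide enough for a pointer on both
     * x86_32 and x86_64 builds. */
    void *phys_to_virt_fixed(uintptr_t addr)
    {
            return (void *)addr;
    }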

Preparations to minimize enabling PCI bus mastering

For security reasons, bus mastering should be enabled as late as possible. In coreboot, it’s usually not necessary, and payloads should only enable it for the devices they actually use. Since not all payloads enable bus mastering properly yet, some Kconfig options were added as an intermediate step to provide some sort of “backwards compatibility”: they allow enabling or disabling bus mastering by device group.

Currently available groups are:

  • PCI bridges
  • Any devices

For now, “Any devices” is enabled by default to keep the traditional behaviour; it also covers all the other groups. This is currently necessary, for instance, for libpayload-based payloads, as the drivers don’t enable bus mastering for PCI bridges.

Exceptional cases that may still need early bus-master enabling in the future should get their own per-reason Kconfig option, ideally before the next release.
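For drivers and payloads that do program a device, enabling bus mastering for just that device is a single read-modify-write of the PCI command register. A minimal sketch using coreboot’s PCI config accessors (payload environments have equivalent helpers):

    #include <device/pci_ops.h>
    #include <device/pci_def.h>

    /* Enable bus mastering (and memory decoding) only for the one device
     * this driver actually uses, instead of for a whole group. */
    static void driver_enable_dma(const struct device *dev)
    {
            pci_or_config16(dev, PCI_COMMAND,
                            PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY);
    }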

Early runtime configurability of the console log level

Traditionally, we didn’t allow the log level of the romstage console to be changed at runtime (e.g. via get_option()). It turned out that the technical constraint behind this (no global variables in romstage) vanished long ago. The new behaviour is to query get_option() from the second console-enabled stage onwards. In other words, if the bootblock already enables the console, the romstage log level can be changed via get_option(). Keeping the log level of the first console static ensures that we can see console output even if there’s a bug in the more involved code that queries options.
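In practice, a stage that is not the first console user can now do something along these lines (a sketch; it assumes the CMOS option backend provides a "debug_level" option and uses the get_option() interface as present in this release):

    #include <option.h>
    #include <types.h>

    /* Pick the log level from the option backend if available, falling
     * back to the compile-time default otherwise. */
    static int query_console_loglevel(void)
    {
            unsigned char cmos_level;

            if (get_option(&cmos_level, "debug_level") == CB_SUCCESS)
                    return cmos_level;

            return CONFIG_DEFAULT_CONSOLE_LOGLEVEL;
    }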

Resource allocator v4

A new revision of the resource allocator, v4, has been added to coreboot; it supports multiple ranges for allocating resources. Unlike the previous allocator (v3), it does not use the topmost available window for allocation. Instead, it uses the first window within the address space that is available and satisfies the resource request. This allows utilization of the entire available address space and also allows allocation above the 4G boundary. The old resource allocator v3 is still retained for some AMD platforms that do not conform to the requirements of the new allocator.
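Conceptually (a simplified illustration, not the actual allocator code), the window selection changes from “use the topmost window” to “use the first window, in address order, that satisfies the size and alignment of the request”:

    #include <stddef.h>
    #include <stdint.h>

    struct window {
            uint64_t base;
            uint64_t size;
    };

    static uint64_t align_up(uint64_t x, uint64_t align)
    {
            return (x + align - 1) & ~(align - 1);
    }

    /* Windows are assumed to be sorted by base address. Return the first
     * one that can hold an allocation of the given size and alignment,
     * which may well lie above the 4G boundary. */
    static const struct window *pick_window(const struct window *windows,
                                            size_t count, uint64_t size,
                                            uint64_t align)
    {
            for (size_t i = 0; i < count; i++) {
                    uint64_t base = align_up(windows[i].base, align);

                    if (base + size <= windows[i].base + windows[i].size)
                            return &windows[i];
            }

            return NULL; /* request cannot be satisfied */
    }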

Deprecations

PCI bus master configuration options

In order to minimize the usage of PCI bus mastering, the options introduced in this release will be dropped again in a future release. For more details, please see “Preparations to minimize enabling PCI bus mastering” above.

Resource allocator v3

Resource allocator v3 is retained in the coreboot tree because the following platforms do not conform to the requirements of the resource allocator, i.e. not all fixed resources of the platform are provided during the read_resources() operation:

  • northbridge/amd/pi/00630F01
  • northbridge/amd/pi/00730F01
  • northbridge/amd/pi/00660F01
  • northbridge/amd/agesa/family14
  • northbridge/amd/agesa/family15tn
  • northbridge/amd/agesa/family16kb

In order to have a single unified allocator in coreboot, this notice is being added to ensure that the platforms listed above are fixed before the next release. If there is interest in maintaining support for these platforms beyond the next release, please ensure that the platforms are fixed to conform to the expectations of resource allocation.

[GSoC] libgfxinit: Add support for Bay Trail

Hello everyone. I’ve been working on adding Bay Trail support to libgfxinit as a GSoC project. Yes, since I don’t usually talk much outside of IRC and Gerrit, I imagine this post will come as a surprise to most people. Despite the journey being way more difficult than initially foreseen, I eventually managed to get most of what I could test on Bay Trail working, with next to no spaghetti-looking code.

The commits adding Bay Trail support to libgfxinit and integration with coreboot can be retrieved with this Gerrit query. Additionally, the coreboot port for the Asrock Q1900M mainboard used for testing can be found on this Gerrit change.

I ran into several problems while working on this GSoC project, and submitted various fixes and improvements along the way. Links to these commits can be found in later sections. Strictly speaking, these commits are not directly part of this GSoC project, but they grew out of the GSoC work.

Unfortunately, I ran into multiple setbacks, which precluded me from completing everything I had originally planned within the GSoC timeframe:

  • Since I only managed to fix some bugs last-minute, the code has not been formally verified yet. Nevertheless, formally verifying the code before it works and has been reviewed is rather pointless, since it needs to be verified again after amending it.
  • DisplayPort and integrated panel support could not be tested due to inaccessible hardware. I had to take a plane from the university campus back home in June, and it was impossible to squeeze both the Asrock Q1900M and the Asus X551MA into my luggage. I decided to bring the Asrock Q1900M, as it is more compact and easier to work with.
  • There was no time left to work on Braswell support. While Bay Trail and Braswell are somewhat related, there are many differences regarding the undocumented parts of the hardware, and there’s even less documentation.

Undeterred by any misfortunes, I am going to finish what I started, come hell or high water.

Project details

libgfxinit is a graphics initialization (aka modesetting) library for embedded environments. It currently supports only Intel hardware, more specifically the Intel Core processor line. It can query and set up most kinds of displays based on their EDID information. You can, however, also specify particular mode lines.

Support for the Intel Bay Trail platform was missing in libgfxinit. The code hasn’t landed upstream yet, so one needs to fetch it from Gerrit in order to use it. This involves fetching the libgfxinit patches first, using the checkout download option on CB:42359 (the topmost change), and then cherry-picking CB:44071 and CB:44072 into coreboot. CB:44938 and CB:39658 show how to enable libgfxinit for Bay Trail mainboards. Since the available video ports are mainboard-specific, gma-mainboard.ads needs to be adjusted accordingly.

Trials and Tribulations

The hardware is cursed

Getting software to work usually takes some testing, but when said software interacts with hardware, testing becomes essential. And when said hardware is largely undocumented, testing is pretty much the only option. The Display chapters of the graphics programming manuals for these platforms lack the information that matters for libgfxinit. Even the Bay Trail documentation turned out to be incomplete, especially regarding the display PHY and PLL registers. When working on libgfxinit, I soon got Bay Trail to show something on a monitor. However, making that work reliably took much longer than I expected. This was mainly because I needed to spend at least a day or two without looking at the code to see what was wrong with it.

Said PHY and PLL registers are hidden within IOSF-SB, a sideband interconnect network accessed through a mailbox-style interface. To access a register, one not only needs to read or write the register contents, but also has to program the destination port (the address of the hardware block), the opcode (which type of read or write) and the register offset, and then poll a busy bit until the operation is complete. Of course, this register access mechanism is not described in the graphics documentation, so the only references are existing graphics drivers. Reading someone else’s code in order to understand what the documentation should say is, at best, downright painful.
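To give an idea of what such an access looks like, here is a heavily simplified sketch of a mailbox-style sideband read. The register offsets, bit positions and busy-bit convention are placeholders for illustration only, not the documented Bay Trail layout:

    #include <stdint.h>

    #define IOSF_CTRL   0x00    /* opcode, port, offset, busy bit */
    #define IOSF_DATA   0x04    /* read/write data */
    #define IOSF_BUSY   (1u << 0)

    /* Placeholder MMIO base for the mailbox registers. */
    static volatile uint32_t *const iosf = (volatile uint32_t *)0xfed00000;

    static uint32_t iosf_read(uint8_t port, uint8_t opcode, uint8_t offset)
    {
            /* Program the destination port, opcode and register offset,
             * then kick off the transaction. */
            iosf[IOSF_CTRL / 4] = ((uint32_t)opcode << 24) |
                                  ((uint32_t)port << 16) |
                                  ((uint32_t)offset << 8) |
                                  IOSF_BUSY;

            /* Poll the busy bit until the operation completes. */
            while (iosf[IOSF_CTRL / 4] & IOSF_BUSY)
                    ;

            /* Fetch the result from the data register. */
            return iosf[IOSF_DATA / 4];
    }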

After I managed to get something to show on the screen, I noticed that it would only work in very specific system states. In addition, manually writing several undocumented registers (using the intel_reg utility) before running gfx_test would sometimes help. I eventually figured out that most of the accesses to undocumented registers did not have any effect, because of a blunder in the IOSF accessor library I wrote: I messed up the bitfields when assembling the request register (which contains the target port, the opcode and some always-one bits), so the accesses would often end up going to the wrong port.

There’s always more bugs

The Bay Trail code in coreboot was only used by a single mainboard family: the Google Rambi chromebooks/chromeboxes/chromebases. Memory initialization is done by a binary-only executable, which contains Intel’s MRC (Memory Reference Code). This binary is simply called “MRC binary” or mrc.bin (the file’s name). However, it is actually an ELF binary, and the Makefile in coreboot will place it at a different offset depending on the file extension. So, Bay Trail has an mrc.elf instead: using the mrc.bin name would place the binary at the wrong offset and wouldn’t work.

Once this mystery was solved, the MRC on the Asrock Q1900M would not detect any DIMMs. It turns out that SMBus support in the MRC is broken, so one needs to read the SPD contents into a buffer and then pass that buffer to the MRC. CB:44092 takes care of that.

Even then, the MRC would still refuse to work on the Asrock Q1900M. After some digging, I found that this is because it checks the memory type in the SPD and bails out if the modules are not SO-DIMMs or do not support 1.35V operation. The Asrock Q1900M uses full-size desktop UDIMMs, which may not always support 1.35V operation. CB:39568 patches the necessary values in the SPD buffer so that the MRC functions as intended.

Externally-induced translocation

I don’t mind running errands or going out in general. However, I do utterly despise having to move and live elsewhere: I have to pack my computers and parts, and I have lots of them. I live in an archipelago, I don’t have my own house or car yet, and my parents’ home and the university campus aren’t on the same island. Dad usually comes with his car at the start/end of the school year, as I need to move lots of stuff. Oh, and my parents are divorced, so my sister and I go back and forth when not abroad.

Because of the coronavirus outbreak, in-person lessons were suspended for the rest of the academic year. Most people living in the university campus (there’s a students’ residence in there) went back home almost immediately. I didn’t, because I didn’t feel like taking a plane amidst the outbreak and preferred to stay in my cozy server dorm room. However, as there were no more in-person lessons, the residence had to close, and I eventually had to leave. Moreover, Dad couldn’t come this time because he was overwhelmed by work (he was unable to work during lockdown, so everything piled up until the lockdown ended). So, I had to take a plane and leave most of my stuff in the residence, including all of my monitors with digital inputs and one of my two Bay Trail machines, which I had planned on using for GSoC.

And if that wasn’t enough, I’ve had to pack my things again, every week. This means each week only had six useful days, at best. This, plus everything else going on at home, quickly burned me out. It reached a point where I couldn’t bear it any longer and had to take a two-week break from coreboot development.

Conclusion

Although there were many unforeseen hurdles and problems around every corner, I would still call this a huge success. Just like university assignments, it was rushed right up until the last minute.