Hello everyone, I am Abdullah and I am one of the GSoC students for 2019. My project is to implement kernel address sanitizer (KASAN) in coreboot. My mentor for this project is Werner Zeh. He has been extremely helpful throughout by guiding me whenever I got stuck, answering my questions and reviewing my patches. I want to thank him for his support.
KASan:
KASan is a dynamic memory error detector designed by developers at Google. KASan can find use-after-free and out-of-bounds errors, so implementing it in coreboot will help detect these errors in coreboot itself. KASan works by keeping a shadow memory that records whether each byte is safe to access: every eight bytes of accessible memory are tracked by one shadow byte, which means the shadow memory occupies 1/8 of the total accessible memory. Compile-time instrumentation is used to insert a check before each memory access, and if an illegitimate memory access is found, KASan reports it. This feature can help detect hard-to-find errors like stack overflows and overwritten pointers that are used for memory access at runtime.
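To make the shadow mapping concrete, here is a rough sketch of the check that the compiler-inserted instrumentation performs before a one-byte access. The names and the kasan_shadow_offset variable are mine for illustration, not coreboot's (or the Linux kernel's) actual implementation:
#include <stdint.h>
#include <stdbool.h>
/* Eight bytes of real memory share one shadow byte. */
#define KASAN_SHADOW_SCALE_SHIFT 3
/* Hypothetical base of the shadow region; picking this per stage is one of
 * the challenges described below. */
extern uintptr_t kasan_shadow_offset;
static inline int8_t *kasan_shadow_byte(const void *addr)
{
        return (int8_t *)(((uintptr_t)addr >> KASAN_SHADOW_SCALE_SHIFT)
                          + kasan_shadow_offset);
}
/* Conceptual check for a 1-byte access: shadow == 0 means the whole 8-byte
 * granule is addressable, a small positive value means only the first N
 * bytes are, and negative values mark redzones or freed memory. */
static inline bool kasan_byte_accessible(const void *addr)
{
        int8_t shadow = *kasan_shadow_byte(addr);
        if (shadow == 0)
                return true;
        return (int8_t)((uintptr_t)addr & 0x7) < shadow;
}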
Challenges:
KASan in coreboot is not as straightforward as in the Linux kernel, and there are several issues specific to this context. The coreboot implementation varies across architectures, and coreboot consists of stages that differ in source language and in where their code and heap live. This means that neither the accessible memory nor the shadow memory location is fixed. In some cases memory is so limited that there is no space for shadow memory at all (in the bootblock, or at times even in romstage). These are the main problems we have to deal with when adding KASan to coreboot.
Progress:
We started off by implementing KASan in the relatively easier stage, i.e. ramstage. Ramstage has the benefit of running with DRAM available. This means we have enough memory for our shadow region, and we are even free to place the ramstage at the same address across several platforms. The latter is not possible, for example, for stages that execute out of cache. The ramstage code is still under review and I will share it once it is done.
Hi everyone. As stated in my previous blogpost, I have been working on a FS loader for Intel Flash Descriptor (IFD) images. The IFD is used on Intel x86 platforms to define various regions in the SPI flash. These may include the Intel ME firmware region, BIOS region, Gigabit ethernet firmware region, etc. The IFD also defines read/write permissions for each flash region, and it may also contain various configurable chipset parameters (PCH straps). Additional information about the firmware descriptor can be found in this helpful post by plutomaniac on the Win-Raid forum, as well as these slides from Open Security Training.
As a filesystem loader, the IFD loader exposes the flash regions as files. FLMAP0 in the descriptor map and the component/region sections are parsed to determine the base and limit addresses of each region; both IFD v1 and v2 (the latter used since Skylake) are supported. Ghidra supports nested filesystem loaders, so the FMAP and CBFS loaders that I’ve previously written can be used for parsing the BIOS region.
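For a rough idea of what that parsing involves, here is a C-flavored sketch purely for illustration (the loader itself is built on Ghidra's FS loader APIs, and the masks shown are the wider IFDv2 ones; v1 fields are narrower). Each flash region register packs a base and a limit in 4 KiB units:
#include <stdint.h>
struct flash_region {
        uint32_t base;
        uint32_t limit;
        int used;
};
/* Decode one IFDv2 FLREG entry: base in bits 14:0, limit in bits 30:16,
 * both in 4 KiB units. A base above the limit marks the region as unused. */
static struct flash_region decode_flreg_v2(uint32_t flreg)
{
        struct flash_region r;
        r.base = (flreg & 0x7fff) << 12;
        r.limit = (((flreg >> 16) & 0x7fff) << 12) | 0xfff;
        r.used = r.base <= r.limit;
        return r;
}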
If you encounter any issues with the IFD FS loader, please feel free to submit an issue report in the GitHub repository.
Plans for this week
I have started working on a filesystem loader for UEFI firmware volumes. In conjunction with the IFD loader, this will allow UEFI firmware images to be imported for analysis in Ghidra (behaving somewhat similar to the excellent UEFITool).
Hello again, I’m Asami. I’ve just finished 4 weeks as a GSoC student. I’m currently debugging the implementation of my main project, which is adding QEMU/AArch64 support. Right now I see no output at all when I start QEMU with a coreboot.rom that contains my implementation, which means something goes wrong before hardware initialization finishes. In this article, I’m going to talk about what I found while debugging the bootblock for ARMv8.
Code Path of Bootblock Stage
The bootblock is executed just after CPU reset and is written almost entirely in assembly language. Its main task is to set up a C environment. The basic code path for ARMv8, from the beginning of the bootblock to the romstage, is:
_start() at src/arch/arm64/armv8/bootblock.S
arm64_init_cpu() at src/arch/arm64/armv8/cpu.S
main() at src/lib/bootblock.c
run_romstage() at src/lib/prog_loaders.c
prog_run() at src/lib/prog_ops.c
arch_prog_run() at src/arch/arm64/boot.c
main() at src/arch/arm64/romstage.c // The entry point of the romstage
You can use your own _start function instead of the common _start function by setting CONFIG_BOOTBLOCK_CUSTOM=y and adding bootblock-y += bootblock_custom.S, where bootblock_custom.S is your custom assembly file.
The Reason Why Execution Stopped inside arm64_init_cpu()
I found that execution stopped inside the arm64_init_cpu function for some reason. The problematic line is mrs x22, sctlr_el3. The MRS instruction reads a system register and stores its value into a general-purpose register, so this line stores the value of SCTLR_EL3 in the X22 register.
According to the “ARM Architecture Reference Manual ARMv8, for ARMv8-A architecture profile”, the purpose of SCTLR_EL3 is
Provides top level control of the system, including its memory system, at EL3. This register is part of the Other system control registers functional group.
Also, SCTLR_EL3 is accessible only from EL3. EL3 is the highest privilege level, the one that low-level firmware, including the secure monitor, runs at.
Next, I checked the current exception level via mrs x0, CurrentEL. CurrentEL is a register that holds the current exception level. The value I read back was 0x04, which means the program was running at EL1. EL1 is the level an operating system kernel typically runs at and is usually described as privileged. I didn’t have the right to access SCTLR_EL3, and that’s why execution stopped.
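For reference, the same check can be done from C roughly like this (the helper name is mine; coreboot has its own accessors for system registers). CurrentEL keeps the exception level in bits [3:2], so the 0x04 I read decodes to EL1:
#include <stdint.h>
/* Illustrative helper: read CurrentEL and extract bits [3:2].
 * 0 = EL0, 1 = EL1, 2 = EL2, 3 = EL3. */
static inline unsigned int read_current_el(void)
{
        uint64_t el;
        __asm__ __volatile__("mrs %0, CurrentEL" : "=r"(el));
        return (el >> 2) & 0x3;
}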
Ideas to Solve EL3 Issue
I considered 2 solutions:
Use only EL1 registers
Run QEMU in EL3
Firstly, I tried to use only EL1 registers. I replaced arm64_init_cpu with arm64_init_cpu_el1, a new function I created, and then replaced SCTLR_EL3 with SCTLR_EL1 and TLBI ALLE3 with TLBI VMALLE1. It seemed to work, but there was still no output.
Secondly, I tried to run QEMU in EL3, which is enabled via the -machine flag: QEMU can run the guest at EL2 with -machine virtualization=on and at EL3 with -machine secure=on. A command along the following lines works well for me.
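Something like this (the CPU model, memory size and ROM path here are placeholders, not necessarily the exact options I used):
$ qemu-system-aarch64 -machine virt,secure=on,virtualization=on -cpu cortex-a53 -m 2G -nographic -bios build/coreboot.rom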
Hello again! If you recall from my last post, the schedule this week is to fix the issues in northbridge/via and southbridge. However, Coverity is going through a major internal upgrade, and so the issue tracker has been offline all week. Luckily, though, I was able to fix most of these issues last week, so assuming the upgrade finishes soon I won’t be behind schedule. In the meantime, I decided to try flashing coreboot onto my T500, since the last component I was waiting for arrived last week. Here is a mini-guide to my (sometimes harrowing) flashing experience.
Supplies
ThinkPad T500
BeagleBone Black
5V 2A power adapter for the BBB
Jumper Wires
Pomona 5252 Test Clip
Atheros AR9462 Wireless Card
Updating the EC
It is generally recommended to update the embedded controller firmware before flashing coreboot, which can only be done during a Lenovo BIOS update. (Unlike Chromebooks, ThinkPads unfortunately do not have open-source ECs.) I was able to find a copy of the latest BIOS on the Lenovo EOL Portal, and attempted to perform an update … which froze and crashed halfway through. Uh oh. This is OK; as long as I don’t restart the computer I can just try flashing it again, right? Wrong! The next time I tried it, Windows ran into a fatal error and decided to force a restart for me (gah!). Upon booting it up again, I was met with absolutely nothing, because the screen wouldn’t even turn on. More than a little concerned that I had bricked it, I searched through online forums until I stumbled across the Crisis Recovery tool. Apparently, old ThinkPads have a method to force-update the BIOS from an external USB stick or floppy (if you have one of those lying around). The recovery tool had to be run in Windows XP Service Pack 3 emulation mode, and seemed to format the USB stick correctly. My ThinkPad wasn’t so impressed, and obstinately refused to recognize the stick. As a last hope, I asked on IRC what to do, and Nico Huber informed me that the ThinkPad was likely not dead, and that I could just proceed with flashing coreboot anyway. Well, here goes nothing.
Building Coreboot
So we’re going to flash coreboot, but what options do I pick when compiling it? I scoured the internet for tutorials on flashing coreboot onto a T500 and other related ThinkPads, but they all recommended different options, sometimes contradictory ones. Hmmmm. Once again going back to IRC, Angel Pons helped me configure a very minimal build.
General setup ---> [*] Use CMOS for configuration values
---> [*] Allow use of binary-only repository
Mainboard ---> Mainboard vendor ---> Lenovo
---> Mainboard model ---> ThinkPad T500
Devices ---> Display ---> Linear "high-resolution" framebuffer
Now, the T500 is a very special laptop, in that it can run coreboot without any binary blobs at all. However, I decided to enable microcode updates anyway, since they provide important stability improvements (like not crashing). This laptop also comes with an Intel ME which can be completely wiped, but I decided to leave that for later. (Now that I know coreboot works, there will be a follow-up post in several weeks when I do that.)
Disassembly and Flashing
Like most laptops, the T500 has its flash IC locked from the factory, so an initial external flash is required to install coreboot (afterwards, subsequent flashes can be done internally). This means disassembling the laptop to access the SOIC-16, which is buried in the bowels of the T500 case and requires a complete tear-down to reach. The Libreboot T500 page gives you a feel for the amount of work required to extract the motherboard; I referred to it extensively, along with the hardware maintenance manual.
With the motherboard extracted from the case, the next step is to attach the Pomona 5252 to the SOIC-16 and jumper it to the BBB, which was all made very easy by this X200 guide. Somewhat blithely following the previous guide, I set up an old ATX PSU to provide 3.3v to the flash chip. However, whenever I connected it to the test clip, it would always power itself off. Strange. Going back to IRC, Nico informed me that this is in fact A VERY BAD AND DANGEROUS THING TO DO. THE INTERNET IS LYING – DO NOT USE AN ATX PSU, YOU COULD FRY YOUR MOTHERBOARD! Oops. After puzzling over how to provide enough power to the chip without the PSU, Patrick Rudolph chimed in that a) the T500 motherboard is basically indestructible (whew!), and b) the flasher itself should be able to provide enough power. Hooking the 3.3v cable into the BBB instead, I tried reading the flash chip.
$ flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512
(a bunch of output that I forgot to write down)
It works! Even with a bricked Lenovo BIOS, it is still recommended to keep a backup, so next we read the old factory ROM.
Do this three times with three distinct images, and compare their SHAsums to make sure they are all identical (otherwise the connection might be faulty). If they all match, keep one as a backup.
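Concretely, that looks something like this (the file names are mine, and you may need to add -c <chipname> if flashrom detects more than one matching chip):
$ flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512 -r factory1.rom
$ flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512 -r factory2.rom
$ flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512 -r factory3.rom
$ sha1sum factory1.rom factory2.rom factory3.rom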
Note that because I left the ME as-is, it is important to only flash the BIOS region, not the entire chip.
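flashrom can take the flash layout from the descriptor itself and write only that region, along the lines of the command below (illustrative, not necessarily my exact invocation; -N/--noverify-all can help if other regions are not readable):
$ flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512 --ifd -i bios -w build/coreboot.rom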
Reassembly and Testing
Sadly, no instant gratification here – I had to reassemble half the laptop before I could test booting it up. However, after doing so and gingerly pressing the power button, I was greeted by the lovely SeaBIOS boot menu. It actually worked! Huzzah! Finishing reassembly, I replaced the factory Intel wireless card with an Atheros AR9462, which can run without any binary firmware. After installing Debian, I now have a laptop running completely free and open source software, all the way from the BIOS up (well, except for the ME, but I’ll fix that later).
For the final icing on the cake, here is a fresh board status report for the T500. Many thanks to everyone who helped me in this process.
During the previous week, I worked on additional filesystem loaders to support parsing Flash Map (FMAP) images and the coreboot file system (CBFS). As of this week, these FS loaders are mostly complete, and can be used to import raw binaries within compiled coreboot ROMs. Support for CBFS file compression (with either LZMA or LZ4) is also implemented; compressed files will be automatically extracted. Here are some screenshots of the new FS loaders:
While these might not be the most useful FS loaders (as FMAP and CBFS are mainly used by coreboot itself), I gained additional familiarity with Ghidra’s plugin APIs for FS loaders. This will be useful, as I will be writing additional FS loaders for this project.
Plans for this week
I’ll continue to make minor changes to the existing FS loaders (various cleanups/etc). I’ll also start to write a FS loader for parsing ROMs with an Intel firmware descriptor (IFD), which shouldn’t be too complicated. After this is completed, I plan on writing a FS loader for UEFI firmware volumes (ideally similar to UEFITool or uefi-firmware-parser). I anticipate that this loader will be more complex, so I’ve reserved additional time to ensure its completion.
Hello everyone. I am Asami, a student in this year’s GSoC. My project is adding a new mainboard, QEMU/AArch64, to make it easier for coreboot developers to support new boards for ARMv8. I’ve already written a small patch to enable building a sample program with libpayload for the ARM architecture. I’ve also read the implementation of coreboot (the main code path) for ARMv7 and of QEMU (qemu/hw/arm/vexpress.c). Now I’ve created a new CL for my main project and started to read the implementation of the AArch64 target machine (qemu/hw/arm/virt.c).
In this article, I’m going to talk about mistakes I made while developing coreboot. I hope it helps beginners of coreboot development. The target board is QEMU/ARM and the CPU is ARMv7.
“ERROR: Ramstage region _postram_cbfs_cache overlapped by: fallback/payload”
I faced this error when I built coreboot.rom for QEMU/ARM with coreinfo, which is a small informational payload for coreboot. The cause is that coreinfo doesn’t support the ARM architecture, so the payload is compiled as 32-bit x86.
Make sure that your payload is built for your target architecture. You need to use other executables instead of coreinfo when you want to use architectures other than x86. coreboot provides libpayload, which is a small BSD-licensed static library.
The details of the error are:
$ make
...(omitted)....
W: Written area will abut bottom of target region: any unused space will keep its current contents
CBFS fallback/romstage
CBFS fallback/ramstage
CBFS config
CBFS revision
CBFS fallback/payload
INFO: Performing operation on 'COREBOOT' region...
ERROR: Ramstage region _postram_cbfs_cache overlapped by: fallback/payload
Makefile.inc:1171: recipe for target 'check-ramstage-overlaps' failed
make: *** [check-ramstage-overlaps] Error 1
“ERROR: undefined reference to ‘_ttb'” and “ERROR: undefined reference to ‘_ettb'”
These errors might happen when you build coreboot.rom with `make` at the root directory. In this case, you need to add TTB() to your memlayout.ld.
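The entry looks roughly like the line below (the address and size here are made up; pick a region that is actually free on your board):
TTB(0x48000000, 16K)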
TTB is the translation table base, the memory that holds the MMU's translation tables. TTBR0 and TTBR1 (the TTB registers) hold the start address of the TTB. We can put the TTB anywhere in memory as long as we store its address in a TTBR.
According to the “ARM Architecture Reference Manual ARMv7-A and ARMv7-R edition”, the difference between TTBR0 and TTBR1 is:
(B3-1345) When a PL1&0 stage 1 MMU is enabled, TTBR0 is always used. If TTBR1 is also used then:
– TTBR1 is used for the top part of the input address range
– TTBR0 is used for the bottom part of the input address range
(B4-1724) TTBCR determines which of the Translation Table Base Registers, TTBR0 or TTBR1, defines the base address for a translation table walk required for the stage 1 translation of a memory access from any mode other than Hyp mode.
TTBR0 is basically used for user processes and TTBR1 for the kernel. However, the Linux kernel uses only TTBR0 to reduce context-switch time. (I’ve heard that the Linux kernel has started to use TTBR1 for security reasons, such as Meltdown and Spectre.)
In coreboot, mmu_init() sets TTBR registers in arch/arm/armv7/mmu.c.
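For a feel of what that boils down to, programming a translation table base on ARMv7 is a single coprocessor write. A simplified sketch, not coreboot's exact code:
#include <stdint.h>
/* Illustrative only: load TTBR0 with the physical base address of the
 * first-level translation table (coprocessor 15, register c2). */
static inline void write_ttbr0(uint32_t table_base)
{
        __asm__ __volatile__("mcr p15, 0, %0, c2, c0, 0" : : "r"(table_base) : "memory");
}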
Fails to build a sample program with libpayload
coreboot provides libpayload, a small BSD-licensed static library, along with a sample program that shows how to use it. However, you might fail to build the sample program when you select the ARM architecture as a target, with the following error:
/usr/bin/ld: cannot represent machine `arm'
This problem happens because the Makefile in the sample directory is outdated, so I created a CL to update it for the architectures that coreboot currently supports.
The “Payload not loaded” error happens when the load address of a payload is wrong. The load address should be in a RAM region that is free to use. You can define the load address via CONFIG_LP_BASE_ADDRESS if you use libpayload.
The complete steps for building coreboot.rom with a sample payload for QEMU/ARM are:
1. Build a libc and cross compiler environment.
// In coreboot/payloads/libpayload/
$ make distclean // Always needed when switching mainboards.
$ cp configs/config.emulation-qemu-arm configs/defconfig // Or you can set it up via 'make menuconfig'
$ make defconfig
$ make
$ make install
2. Build a sample payload hello.elf.
// In coreboot/payloads/libpayload/sample
$ make // Make sure that Makefile is updated by https://review.coreboot.org/c/coreboot/+/33287
3. Build coreboot.rom with a sample payload.
// In coreboot/
$ make distclean // Always needed when switching mainboards.
$ make menuconfig // or make defconfig
Select payload “payloads/libpayload/sample/hello.elf”
$ make
Make sure to do ‘make distclean’ before switching your board target
‘make distclean’ removes build artifacts and config files. The default architecture in coreboot is x86, so you need to run ‘make distclean’ when you want to use other architectures.
Fails to update an existing CL on Gerrit
Gerrit is a code review tool used in the coreboot project. I’m familiar with GitHub and I thought the operations of Gerrit would be almost the same as those of GitHub, but they aren’t.
On GitHub, developers can create a new commit for each update. On the other hand, developers using Gerrit need to amend their commit until it is merged.
The commands to create a new CL are almost the same as on GitHub:
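A rough sketch of that flow (after installing Gerrit's commit-msg hook so a Change-Id gets added; the file names and target branch are examples):
// In your coreboot checkout
$ git add <changed files>
$ git commit -s // Signed-off-by is required for coreboot
$ git push origin HEAD:refs/for/master // creates the CL on Gerrit
// To update the same CL later, amend and push again:
$ git commit --amend
$ git push origin HEAD:refs/for/master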
Hello again! This is a continuation of my posts about fixing the Coverity issues in coreboot. This week’s plan was to tackle the 28 issues in northbridge/intel, which turned out to be much easier than I expected, since I’m already done! With that out of the way, I’m going to begin working on northbridge/via and southbridge. For the curious, here is the project timeline for the entire summer. (I had wanted to include this in last week’s post, but hadn’t figured out how to do tables in WordPress yet.)
Week             | Components                     | Issues
May 6 to 10      | util                           | 22
May 13 to 17     | util, payloads                 | 22
May 20 to 24     | arch, drivers                  | 20
May 27 to 31     | commonlib, cpu, lib, mainboard | 22
June 3 to 7      | northbridge/amd                | 21
June 10 to 14    | northbridge/intel              | 28
June 17 to 21    | northbridge/via, southbridge   | 22
June 24 to 28    | soc/intel                      | 21
July 1 to 5      | soc/rockchip, soc/nvidia       | 20
July 15 to 19    | soc/misc, vendorcode/cavium    | 26
July 22 to 26    | vendorcode/amd                 | 21
July 29 to Aug 2 | vendorcode/amd                 | 21
Aug 5 to 9       | vendorcode/amd                 | 20
Aug 12 to 16     | vendorcode/amd                 | 20
Aug 19 to 23     | vendorcode/amd                 | 20
As you can see, there are a lot of issues in the AMD vendorcode. This consists primarily of AGESA, AMD’s framework for initialization of their 64-bit platforms (somewhat similar to Intel’s FSP). This code is somewhat … dense (someone on IRC described it as a “sea of abstraction”), so I made sure to leave plenty of time for it. As always, you can keep up to date on my current progress on Gerrit.
PS: As an extra bonus, here is a picture of my new BeagleBone Black!
I recently got a ThinkPad T500 to practice installing coreboot on, and I needed some sort of external programmer to flash the SOIC. There are many options available (flashrom has a whole list here), but a single-board computer like this is one of the closest you can get to “plug-and-play.” There are many other popular boards (notably the Raspberry Pi), but the BBB doesn’t require any binary blobs to boot, and is open source hardware too. The only thing I’m waiting for now is an Atheros ath9k wireless card, which runs without any binary firmware. (Hey, if you’re gonna go freedom, you gotta go all the way.)
Hello everyone! My name is Jacob Garber, and I am a student in this year’s GSoC 2019! My project is on making coreboot Coverity clean. Coverity is a free static-analysis tool for open source projects that searches for common coding mistakes and errors, such as buffer overruns, null pointer dereferences, and integer overflow. Coverity automatically analyzes the coreboot codebase and flags issues it finds, and my job is to classify them into bugs and false-positives and patch them if I can. You can check the Coverity overview for coreboot here, though seeing the issue tracker itself requires registration. At the beginning of the summer, coreboot had over 380 flagged issues, but it’s now down to 303, so we’re making progress! I plan to address 20-30 issues per week depending on the source component, which so far has gone surprisingly well (surprising, in the sense that coming into the summer I knew very little about coreboot or firmware development in general). For the curious, you can see the history and progress of all my changes on Gerrit. My mentors for this project are Patrick Georgi, Martin Roth, and David Hendricks, who have all been extremely helpful in guiding me through the development process, reviewing my patches, and answering my many questions. Thank you all.
Now, fixing Coverity bugs isn’t the only thing I’d like to do this summer. As I said before, I’d like to learn more about coreboot, and what better way to do that than installing it on a laptop! My current laptop is an old 2011 Macbook Air, which is surprisingly close to getting coreboot support (many thanks to Evgeny Zinoviev). However, I am (slightly) hesitant about installing yet-experimental firmware on my one and only development machine, so until then I picked up an old Thinkpad T500 to practice on. This laptop has the advantage of being able to run blob-free, and if in the very unlikely event I end up bricking it, who cares! (I mean, I’ll care, but it was a worthy sacrifice.) I also bought a BeagleBone Black to try out external flashing and was hoping to include a picture today, but the shipping was delayed. You’ll have to wait until next week!
Hi everyone. I’m Alex James (theracermaster on IRC) and I’m working on developing modules for Ghidra to assist with firmware reverse engineering as a part of GSoC 2019. Martin Roth and Raul Rangel are my mentors for this project; I would like to thank them for their support thus far.
Ghidra is an open-source software reverse engineering suite developed by the NSA, offering similar functionality to existing tools such as IDA Pro. My GSoC project aims to augment its functionality for firmware RE. This project will consist of three parts: a loader for PCI option ROMs, a loader for firmware images, and various scripts to assist with UEFI binary reverse engineering (importing common types, GUIDs, etc).
The source code for this project is available here.
Week 1
During my first week, I started implementing the filesystem loader for PCI option ROMs. This allows option ROMs (and their enclosed images) to be loaded into Ghidra for analysis. So far, option ROMs containing uncompressed UEFI binaries can be successfully loaded as PE32+ executables in Ghidra. The loader also calculates the entry point address for legacy x86 option ROMs.
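For context, the legacy entry point comes straight out of the standard expansion ROM header. Here is a simplified C-style view of that header, for illustration only (the loader itself is built on Ghidra's FS loader APIs):
#include <stdint.h>
/* Simplified legacy PCI option ROM header. The legacy entry point is the
 * code starting at offset 0x3 (usually a short jump), which is the address
 * the FS loader computes for legacy x86 images. */
struct legacy_oprom_header {
        uint16_t signature;      /* 0xAA55 */
        uint8_t  size_512;       /* image size in 512-byte units */
        uint8_t  entry[3];       /* entry point for the legacy INIT function */
        uint8_t  reserved[0x12];
        uint16_t pcir_offset;    /* offset 0x18: pointer to the PCI data structure */
} __attribute__((packed));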
Plans for this week
So far this week, I’ve worked on writing a simple JNI wrapper for the reference C implementation of the EFI decompressor from EDK2, and have used this to add support for compressed EFI images to the option ROM FS loader. Additionally, I plan on making further improvements to the option ROM loader for legacy option ROMs; while the entry point address is properly calculated, they still have to be manually imported as a raw binary.
The 4.9 release covers commit 532b8d5f25 to commit 7f520c8fe6
There is a PGP-signed 4.9 tag in the git repository, and a branch will be created as needed.
In the little more than 7 months since 4.8.1 we had 175 authors commit 2610 changes to master. The changes were, for the most part, all over the place, touching every part of the repository: chipsets, mainboards, tools, build system, documentation.
In that time we also had 70 authors make their first commit to coreboot: welcome, and here’s to many more!
Finally, a big Thank You to all contributors who helped shape the coreboot project, community and code with their effort, no matter if through development, review, testing, documentation or by helping people asking questions on our venues like IRC or our mailing list.
Clean up
If there’s any topic to give to this release, “clean up” might be the most appropriate: There was lots of effort to bring the codebase into compliance with our coding style, to remove old idioms that we’d like to retire like the overloaded device_t data type, and to let features percolate through the entire tree to bring more uniformity to its parts.
For example, during the coreboot 4.4 cycle, coreboot gained the notion of mainboard variants to avoid duplication of code in rather similar mainboards.
Back then, this feature was developed and used mostly for the benefit of Chrome OS devices, but more recently the code for various Lenovo Thinkpads was deduplicated in the same way.
Another part of cleaning up our tree is improving the tools that help developers follow the coding style and avoid mistakes, as well as the infrastructure we have for automated build tests; we’ve seen quite some activity in that space as well.
Documentation
Since the last release we also moved the documentation into the repository. No need for a special wiki account to edit the documentation, and by colocating sources and documentation, it’s easier to keep the latter in sync with the code, too.
This effort is still under way, which is why we still host the old wiki (now read-only) in parallel to the new documentation site that is rendered from coreboot.git’s Documentation/ directory.
Blobs handling
Another big change is in our blobs handling: Given that Intel now provides a reasonably licensed repository with FSP binaries, we were able to mirror it to coreboot.org and integrate it in the build system. This makes it easier to have working images out of the box for devices that depend on Intel’s proprietary init code.
As usual, the blobs aren’t part of the coreboot tree and are only downloaded with the USE_BLOBS option.
Deprecations
One of the first changes to coreboot after the 4.8 release was to remove boards that didn’t support certain new features and were apparently unmaintained, as discussed in the release notes of coreboot 4.6.
We didn’t follow up on all plans made back then to deprecate boards more aggressively: The board status reporting mechanism is still rather raw and therefore places quite a burden on otherwise sympathetic contributors of build results.
Also, there will be no deprecations after 4.10: Due to its slipping schedule, coreboot 4.9 is released rather late, and as a result 4.10 will only see about 4 months of development. We considered that a rather short timeframe in which to bring old boards up to new standards, and so the next deprecation cycle may be announced with 4.10 to occur after 4.11 is released, in late 2019.
General changes
Various code cleanups
Removed device_t in favor of struct device* in ramstage code
Removed unnecessary include directives
Improved adherence to coding style
Deduplicated boards by using the variants mechanism
Expanded use of the postcar stage
Added bootblock compression capability: on systems that copy the bootblock from very slow flash to SRAM, a stub can be added that decompresses the bootblock into SRAM to minimize the amount of flash reads
Renamed the POWER8 architecture port to PPC64 to reflect that it isn’t limited to POWER8
Added support for booting FIT (uImage) payloads on arm64
Added SPI flash write protection API
Implemented on Winbond
Implemented TCPA log for measured boot
Implemented GDB support for arm64 architecture in libpayload
Dropped support for unmaintained code paths
Measured boot support
Added 56 mainboards
ASROCK G41C-GS
ASROCK G41M-GS
ASROCK G41M-S3
ASROCK G41M-VS3 R2.0
ASROCK H81M-HDS
ASUS P5QC
ASUS P5QL-PRO
ASUS P5Q-PRO
ASUS P8H61-M-LX
ASUS P8H61-M-PRO
CAVIUM CN8100-SFF-EVB
FACEBOOK WATSON
FOXCONN D41S
GIGABYTE GA-H61M-S2PV
GOOGLE ALEENA
GOOGLE AMPTON
GOOGLE ARCADA
GOOGLE ASUKA
GOOGLE BOBBA
GOOGLE BUDDY
GOOGLE CAREENA
GOOGLE CAROLINE
GOOGLE CASTA
GOOGLE CAVE
GOOGLE DELAN
GOOGLE DRAGONEGG
GOOGLE FLEEX
GOOGLE HATCH
GOOGLE KARMA
GOOGLE KUKUI
GOOGLE LIARA
GOOGLE MEEP
GOOGLE RAMMUS
GOOGLE SARIEN
GOOGLE SENTRY
HEWLETT PACKARD HP COMPAQ 8200 ELITE SFF PC
INTEL COFFEELAKE RVP11
INTEL COFFEELAKE RVP8
INTEL COFFEELAKE RVPU
INTEL DG41WV
INTEL ICELAKE RVPU
INTEL ICELAKE RVPY
INTEL WHISKEYLAKE RVP
LENOVO T431S
LENOVO THINKCENTRE A58
LENOVO W500
LENOVO W530
OPENCELLULAR ELGON
OPENCELLULAR ROTUNDU
OPENCELLULAR SUPABRCKV1
SIEMENS MC-APL2
SIEMENS MC-APL3
SIEMENS MC-APL4
SIEMENS MC-APL5
Dropped 71 mainboards
AAEON PFM-540I REVB
AMD DB800
AMD DBM690T
AMD F2950
AMD MAHOGANY
AMD NORWICH
AMD PISTACHIO
AMD SERENGETI-CHEETAH
ARTECGROUP DBE61
ASROCK 939A785GMH
ASUS A8N-E
ASUS A8N-SLI
ASUS A8V-E DELUXE
ASUS A8V-E SE
ASUS K8V-X
ASUS KFSN4-DRE K8
ASUS M2N-E
ASUS M2V
ASUS M2V MX-SE
BACHMANN OT200
BCOM WINNETP680
BROADCOM BLAST
DIGITALLOGIC MSM800SEV
GIGABYTE GA-2761GXDK
GIGABYTE M57SLI
GOOGLE KAHLEE
GOOGLE MEOWTH
GOOGLE PURIN
GOOGLE ROTOR
GOOGLE ZOOMBINI
HP DL145-G1
HP DL145-G3
IEI PCISA LX-800 R10
IEI PM LX2-800 R10
IEI PM LX-800 R11
INTEL COUGAR-CANYON2
INTEL STARGO2
IWILL DK8 HTX
JETWAY J7F2
JETWAY J7F4K1G2E
JETWAY J7F4K1G5D
KONTRON KT690
LINUTOP LINUTOP1
LIPPERT HURRICANE LX
LIPPERT LITERUNNER LX
LIPPERT ROADRUNNER LX
LIPPERT SPACERUNNER LX
LOWRISC NEXYS4DDR
MSI MS7135
MSI MS7260
MSI MS9185
MSI MS9282
NVIDIA L1-2PVV
SIEMENS SITEMP-G1P1
SUNW ULTRA40
SUNW ULTRA40M2
SUPERMICRO H8DME
SUPERMICRO H8DMR
TECHNEXION TIM5690
TECHNEXION TIM8690
TRAVERSE GEOS
TYAN S2912
VIA EPIA-CN
VIA EPIA-M700
VIA PC2500E
VIA VT8454C
WINENT MB6047
WINENT PL6064
WINNET G170
CPU changes
cpu/intel/model_2065x,206ax,haswell: Switch to POSTCAR_STAGE
cpu/intel/slot_1: Switch to different CAR setup
Dropped support for the FSP1.0 sandy-/ivy-bridge bootpath
SoC changes
Added Cavium CN81xx, Intel Ice Lake and Mediatek MT8183
Dropped Broadcom Cygnus, Lowrisc and Marvell mvmap2315
Northbridge changes
Dropped AMD K8, VIA CN700, VIA CX700, VIA VX800 because they lack EARLY_CBMEM support
intel/e7505: Moved to EARLY_CBMEM
nb/intel/i945,e7505,pineview,x4x,gm45,i440bx: Moved to POSTCAR_STAGE
nb/intel/i440bx, e7505: Moved to RELOCATABLE_RAMSTAGE
intel/x4x: Add DDR3 support
nb/intel/pineview: Speed up fetching SPD
nb/intel/i945,gm45,x4x,pineview: Use TSEG in SMI
Southbridge changes
sb/intel/i82801{g,i,j}x, lynxpoint: Use the common ACPI pirq generator
sb/intel/i82801{g,i,j}x: Use common code to set up SMM and for the smihandler
Use common functions for PMBASE configuration
Payload changes
Support initrd in uImage/FIT to be placed above 4GiB
Added documentation for uImage/FIT payloads
Toolchain
Update to gcc 8.1.0, binutils 2.30, IASL 20180810, clang 6