Hello everyone. It’s been a few weeks since I wrote my last blog post, and during that time I’ve been working on the FS loader for UEFI firmware images. This FS loader aims to implement functionality similar to UEFITool in Ghidra.
As described in the previous blog post, Intel platforms divide the flash chip into several regions, including the BIOS region. On UEFI systems, the BIOS region is used to store UEFI firmware components, which are organized in a hierarchy. This hierarchy begins with UEFI firmware volumes, which consist of FFS (firmware file system) files. In turn, these FFS files can contain multiple sections. Firmware volumes can also be nested within FFS files. This helpful reference by Trammell Hudson as well as this presentation from OpenSecurityTraining have some additional information regarding UEFI firmware volumes.
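For reference, the firmware volume header that such a loader has to parse looks roughly like this C sketch (field layout paraphrased from the UEFI Platform Initialization specification; the type names here are mine):

#include <stdint.h>

typedef struct {
	uint8_t  zero_vector[16];   /* reserved boot vector, usually all zeroes */
	uint8_t  fs_guid[16];       /* GUID identifying the file system (e.g. FFSv2) */
	uint64_t fv_length;         /* total volume size, including this header */
	uint32_t signature;         /* ASCII "_FVH" */
	uint32_t attributes;
	uint16_t header_length;
	uint16_t checksum;          /* 16-bit checksum over the header */
	uint16_t ext_header_offset; /* 0 if there is no extended header */
	uint8_t  reserved;
	uint8_t  revision;
	/* followed by a block map of (num_blocks, block_size) pairs terminated
	   by a (0, 0) entry, and then the FFS files themselves */
} fv_header_t;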
For example, a UEFI firmware implementation could have a firmware volume specifically for the Driver eXecution Environment (DXE phase). Stored as FFS files, DXE drivers within the firmware volume could consist of a PE32 section to store the actual driver binary, as well as a UI section to store the name of the driver.
So far, I’ve implemented basic firmware volume parsing in the FS loader; I’ve pushed this to the GitHub repository. Currently, this doesn’t handle FFS file or section parsing.
FFS file and section parsing is still a work-in-progress, but here’s a preview:
This is mostly complete, but there are still some nasty bugs related to FFS alignment that I’m working on fixing. My focus for this week is to finish up this FS loader.
Update (2019-07-19)
I have committed support for UEFI FFS file/section parsing in the GitHub repo. Please open an issue report if you encounter any issues with it (such as missing files/sections that UEFITool or other tools parse without issues).
Hi, I’m Asami. My project “adding QEMU/AArch64 support to coreboot” is making good progress. I’ve almost finished writing the porting code, and I’m now writing a new tool to create a FIT payload for QEMU/AArch64. Here is my CL. In this article, I’m going to talk about how to run C code in the bootblock stage.
There are various ways to run C code before DRAM has been initialized in the coreboot project. The most famous, used on x86 systems, is known as Cache-As-Ram (CAR). CAR is set up by (1) enabling the CPU cache, (2) enabling ‘no eviction’ mode, and (3) changing the cache mode from write-through to write-back. ‘No eviction’ means that the CPU doesn’t write any data back to DRAM, as long as the data fits within the CPU cache; all data can then be served from the cache. The implementation of CAR is src/cpu/intel/car/non-evict/cache_as_ram.S.
Another way to run C code, mostly used on ARM systems, is relocating the bootblock code to SRAM. A System on a Chip (SoC) has an ARM CPU and often includes on-chip SRAM. For example, the QEMU VExpress machine has SRAM located at 0x48000000, as can be seen in qemu/hw/arm/vexpress.c#L120.
However, what should a system without SRAM do? My target machine, the QEMU virt machine, doesn’t have SRAM, according to the implementation in qemu/hw/arm/virt.c. In this case, should we initialize DRAM earlier?
The answer is no, but only for my project: because QEMU is not actual hardware, DRAM already works at reset, so I just need to relocate the bootblock code directly to DRAM. I believe other ARM systems that have no SRAM do need to initialize DRAM earlier.
Hello again! It’s been three weeks since my last post (well, one of those weeks I was on vacation, so more like two weeks), so there are many, many updates to write about. If you recall from my last post, Coverity had been down for maintenance, so I started looking around for other things to do. Here is a list of some of the highlights:
Coverity isn’t the only static analyzer we use – coreboot also runs nightly scans using the Clang Static Analyzer, an open source tool from the LLVM project. The results from these scans aren’t as detailed as Coverity and unfortunately contain many duplicates and false positives, but I was able to submit patches for some of the issues. Usability of the analyzer is also still somewhat limited, since we have no way of recording fixes for past issues or ignoring false positives. (A framework like CodeChecker might be able to help with this.)
coreboot will (hopefully) soon be (almost) VLA free! Variable length arrays (VLAs) were a feature added in C99 that allows the length of an array to be determined at run time, rather than compile time. While convenient, this is also very dangerous, since it allows use of an unlimited amount of stack memory, potentially leading to stack overflow. This is especially dangerous in coreboot, which often has very little stack space to begin with. Fortunately, almost all of the VLAs in coreboot can be replaced with small fixed-sized buffers, aside from one tricky exception that will require a larger rewrite. This patch was inspired by a similar one for Linux, and is currently under review.
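To illustrate the pattern with a made-up example (not actual coreboot code):

#include <stdint.h>
#include <string.h>

/* A VLA: the caller controls how much stack gets reserved. */
void process_vla(const uint8_t *data, size_t len)
{
	uint8_t buf[len];	/* unbounded stack usage */
	memcpy(buf, data, len);
	/* ... use buf ... */
}

/* The replacement: a small fixed-size buffer plus an explicit bounds check. */
#define BUF_SIZE 64
int process_fixed(const uint8_t *data, size_t len)
{
	uint8_t buf[BUF_SIZE];
	if (len > BUF_SIZE)
		return -1;	/* reject, instead of overflowing the stack */
	memcpy(buf, data, len);
	/* ... use buf ... */
	return 0;
}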
Similar to the above, coreboot will also soon be free of implicit fall throughs! Switches in C have the unfortunate property that each case statement implicitly falls through to the next one, which has led to an untold number of bugs over the decades, and at least 38 in coreboot itself (by Coverity’s count). However, GCC recently added the -Wimplicit-fallthrough warning, which we can use to banish them for good. As of this patch, all accidental fall throughs have been eliminated, with the intentional ones marked with the /* fall through */ comment. Hopefully this is the last we see of this type of bug.
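Here is a toy example of what the warning catches and how intentional fall throughs are annotated:

#include <stdio.h>

static void print_countdown(int n)
{
	switch (n) {
	case 2:
		puts("two");
		/* fall through */
	case 1:
		puts("one");
		break;	/* omitting this break would be the classic silent bug */
	default:
		puts("liftoff");
		break;
	}
}

/* With -Wimplicit-fallthrough, GCC accepts the annotated fall through
   after case 2, but would warn if the comment (or a break) were missing. */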
coreboot now has a single implementation of <stdint.h>. Previously, each of the six supported architectures (arm, arm64, mips, ppc64, riscv, and x86) had their own custom headers with arch-specific quirks, but now they’ve all been tidied up and merged together. This should make implementing further standard library headers easier to do (such as <inttypes.h>).
Finally, many coreboot utilities and tools such as flashrom, coreinfo, nvramtool, inteltool, ifdtool, and cbmem have stricter compiler warnings, mostly from -Wmissing-prototypes and -Wextra. Fixing and adding compiler warnings like these is very easy to do, and makes a great first-time contribution (-Wconversion and -Wsign-compare in particular can always use more work).
That’s it for this week! Thankfully Coverity is back online now, so expect more regular blog posts in the future.
The laptop industry was still in its infancy back in 1990, but it still faced a core problem that we do today - power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). This is in the days where DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.
The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.
Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.
Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.
Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.
What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.
The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.
[1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
[2] It's actually more complicated than that - see here for more.
[3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
[4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
[5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough
Hello everyone, I am Abdullah and I am one of the GSoC students for 2019. My project is to implement kernel address sanitizer (KASAN) in coreboot. My mentor for this project is Werner Zeh. He has been extremely helpful throughout by guiding me whenever I got stuck, answering my questions and reviewing my patches. I want to thank him for his support.
KASan:
KASan is a dynamic memory error detector designed by developers at Google. KASan can find use-after-free and out-of-bounds errors, so implementing it in coreboot will help detect these errors there as well. KASan works by keeping a shadow memory that records whether each byte is safe to access: every 8 bytes of accessible memory map to one shadow byte, which means the shadow memory occupies 1/8 of the total accessible memory. Compile-time instrumentation is used to insert checks before each memory access, and if an illegitimate memory access is found, KASan reports it. This can help detect hard-to-find errors like stack overflows and overwritten pointers that are used for memory access at runtime.
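As a sketch of the mechanism (modeled on the Linux implementation; the offset below is a hypothetical placeholder, since deciding where coreboot’s shadow memory lives is exactly what this project has to work out):

#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT	3		/* 8 bytes per shadow byte */
#define KASAN_SHADOW_OFFSET		0x10000000UL	/* hypothetical value */

static inline uint8_t *mem_to_shadow(uintptr_t addr)
{
	return (uint8_t *)((addr >> KASAN_SHADOW_SCALE_SHIFT)
			   + KASAN_SHADOW_OFFSET);
}

/* Roughly what the compiler-inserted check before an aligned 8-byte access
   amounts to: a non-zero shadow byte means the access is not fully valid. */
static inline int access_is_poisoned(uintptr_t addr)
{
	return *mem_to_shadow(addr) != 0;
}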
Challenges:
KASan in coreboot is not as straightforward as in the Linux kernel, and this context raises many issues. The coreboot implementation varies between architectures. coreboot also consists of stages that differ in source language and in code/heap memory location. This means that neither the accessible memory nor the shadow memory location is fixed. In some stages, memory is very limited, leaving no space for shadow memory (like the bootblock, or at times even the romstage). These are the main problems we have to deal with when adding KASan to coreboot.
Progress:
We started off by implementing KASan in the relatively easier stage, i.e. ramstage. Ramstage has the benefit that DRAM is available, which means we have enough memory for our shadow memory, and we are even free to place the ramstage at the same address across several platforms. The latter is not possible, for example, with stages that execute out of cache. The ramstage code is still in the process of review, and I will share it once it is done.
Hi everyone. As stated in my previous blogpost, I have been working on a FS loader for Intel Flash Descriptor (IFD) images. The IFD is used on Intel x86 platforms to define various regions in the SPI flash. These may include the Intel ME firmware region, BIOS region, Gigabit ethernet firmware region, etc. The IFD also defines read/write permissions for each flash region, and it may also contain various configurable chipset parameters (PCH straps). Additional information about the firmware descriptor can be found in this helpful post by plutomaniac on the Win-Raid forum, as well as these slides from Open Security Training.
For a filesystem loader, the flash regions are exposed as files. FLMAP0 in the descriptor map and the component/region sections are parsed to determine the base and limit addresses for each region; both IFD v1 and v2 (the latter used since Skylake) are supported. Ghidra supports nested filesystem loaders, so the FMAP and CBFS loaders that I’ve previously written can be used for parsing the BIOS region.
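As a sketch of the decoding (mirroring ifdtool’s logic; field widths vary between IFD generations, and the 15-bit masks shown here follow newer chipsets):

#include <stdint.h>

/* Each region is described by a FLREG dword: base in the low half,
   limit in the high half, both in 4 KiB units. */
static uint32_t region_base(uint32_t flreg)
{
	return (flreg & 0x7fff) << 12;
}

static uint32_t region_limit(uint32_t flreg)
{
	return (((flreg >> 16) & 0x7fff) << 12) | 0xfff;
}

/* A region is unused when its base lies above its limit
   (e.g. flreg == 0x00007fff). */
static int region_present(uint32_t flreg)
{
	return region_base(flreg) <= region_limit(flreg);
}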
If you encounter any issues with the IFD FS loader, please feel free to submit an issue report in the GitHub repository.
Plans for this week
I have started working on a filesystem loader for UEFI firmware volumes. In conjunction with the IFD loader, this will allow UEFI firmware images to be imported for analysis in Ghidra (behaving somewhat similarly to the excellent UEFITool).
Hello again, I’m Asami. I’ve just finished 4 weeks as a GSoC student. I’m currently debugging the implementation of my main project, which is adding QEMU/AArch64 support. Right now, I see no output at all when I start QEMU with a coreboot.rom containing my implementation, which means something goes wrong before hardware initialization has finished. In this article, I’m going to talk about what I found while debugging the bootblock for ARMv8.
Code Path of Bootblock Stage
The bootblock is executed just after CPU reset, and it is written almost entirely in assembly language. Its main task is to set up a C environment. The basic code path for ARMv8, from the beginning of the bootblock to the romstage, is:
_start() at src/arch/arm64/armv8/bootblock.S
arm64_init_cpu() at src/arch/arm64/armv8/cpu.S
main() at src/lib/bootblock.c
run_romstage() at src/lib/prog_loaders.c
prog_run() at src/lib/prog_ops.c
arch_prog_run() at src/arch/arm64/boot.c
main() at src/arch/arm64/romstage.c // The entry point of the romstage
You can use a custom _start function instead of the common one by setting CONFIG_BOOTBLOCK_CUSTOM=y and adding bootblock-y += bootblock_custom.S, where bootblock_custom.S is your custom assembly file.
The Reason Why Execution Stopped inside arm64_init_cpu()
I found that execution stopped inside the arm64_init_cpu function for some reason. The problematic line is mrs x22, sctlr_el3. The MRS instruction reads a system control register into a general-purpose register, so this line stores the value of SCTLR_EL3 in the X22 register.
According to the “ARM Architecture Reference Manual ARMv8, for ARMv8-A architecture profile”, the purpose of SCTLR_EL3 is
Provides top level control of the system, including its memory system, at EL3. This register is part of the Other system control registers functional group.
Also, SCTLR_EL3 is accessible only from EL3 mode. EL3 is the highest privilege level; low-level firmware, including the secure monitor, runs at EL3.
Next, I checked the current mode via mrs x0, CurrentEL. CurrentEL is a register that holds the current exception level. The result was 0x04, which means the program runs in EL1 mode. EL1 is the level at which an operating system kernel, typically described as privileged, runs. So I didn’t have the right to access SCTLR_EL3, and that’s why execution stopped.
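For reference, here is a minimal way to check the exception level from C (GNU inline assembly; CurrentEL keeps the level in bits [3:2], so the 0x04 I read decodes to EL1):

#include <stdint.h>

static inline unsigned int current_el(void)
{
	uint64_t el;
	__asm__("mrs %0, CurrentEL" : "=r"(el));
	return (el >> 2) & 3;	/* 0x04 >> 2 == 1, i.e. EL1 */
}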
Ideas to Solve EL3 Issue
I considered two solutions:
Use only EL1 registers
Run QEMU in EL3
Firstly, I tried to use only EL1 registers. I replaced arm64_init_cpu with arm64_init_cpu_el1, a new function I created. Then I replaced SCTLR_EL3 with SCTLR_EL1 and TLBI ALLE3 with TLBI VMALLE1. It seemed to work, but there was still no output.
Secondly, I tried to run QEMU in EL3, which can be enabled with the -machine flag: QEMU provides EL2 with -machine virtualization=on and EL3 with -machine secure=on. A command along the following lines works well for me.
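For example (assuming a cortex-a53 CPU and the default build output path; exact flags may vary with the QEMU version):

$ qemu-system-aarch64 -M virt,secure=on,virtualization=on \
    -cpu cortex-a53 -bios build/coreboot.rom -nographic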
Hello again! If you recall from my last post, the schedule this week is to fix the issues in northbridge/via and southbridge. However, Coverity is going through a major internal upgrade, and so the issue tracker has been offline all week. Luckily, though, I was able to fix most of these issues last week, so assuming the upgrade finishes soon I won’t be behind schedule. In the meantime, I decided to try flashing coreboot onto my T500, since the last component I was waiting for arrived last week. Here is a little mini-guide to my (sometimes harrowing) flashing experience.
Supplies
ThinkPad T500
BeagleBone Black
5V 2A power adapter for the BBB
Jumper Wires
Pomona 5252 Test Clip
Atheros AR9462 Wireless Card
Updating the EC
It is generally recommended to update the embedded controller firmware before flashing coreboot, which can only be done during a Lenovo BIOS update. (Unlike Chromebooks, ThinkPads unfortunately do not have open-source ECs.) I was able to find a copy of the latest BIOS on the Lenovo EOL Portal, and attempted to perform an update … which froze and crashed halfway through. Uh oh. This is OK, as long as I don’t restart the computer I can just try flashing it again, right? Wrong! The next time I tried it, Windows ran into a fatal error and decided to force a restart for me (gah!). Upon booting it up again, I was met with absolutely nothing, because the screen wouldn’t even turn on. More than a little concerned that I had bricked it, I searched through online forums until I stumbled across the Crisis Recovery tool. Apparently, old ThinkPads have a method to force-update the BIOS from an external USB stick or floppy (if you have one of those lying around). The recovery tool had to be run in Windows XP Service Pack 3 emulation mode, and seemed to format the USB correctly. My ThinkPad wasn’t so impressed, and obstinately refused to recognize the stick. As a last hope, I asked around on IRC about what to do, and Nico Huber informed me that the ThinkPad was likely not dead, and that I could just proceed with flashing coreboot anyway. Well, here goes nothing.
Building Coreboot
So we’re going to flash coreboot, but what options do I pick when compiling it? I scoured around the internet to find tutorials for flashing coreboot onto a T500 and other related ThinkPads, but they all recommended different options, sometimes contradictory. Hmmmm. Once again going back to IRC, Angel Pons helped me configure a very minimal build.
General setup ---> [*] Use CMOS for configuration values
---> [*] Allow use of binary-only repository
Mainboard ---> Mainboard vendor ---> Lenovo
---> Mainboard model ---> ThinkPad T500
Devices ---> Display ---> Linear "high-resolution" framebuffer
Now, the T500 is a very special laptop, in that it can run coreboot without any binary blobs at all. However, I decided to enable microcode updates anyway, since they provide important stability improvements (like not crashing). This laptop also comes with an Intel ME which can be completely wiped, but I decided to leave that for later. (Now that I know coreboot works, there will be a follow-up post in several weeks when I do that.)
Disassembly and Flashing
Like most laptops, the flash IC of the T500 is locked from the factory, and requires an initial external flash to install coreboot (afterwards, subsequent flashes can be done internally). This means disassembling the laptop to access the SOIC-16, which is buried in the bowels of the T500 case and requires a complete tear-down to reach. The Libreboot T500 page gives you a feel for the amount of work required to extract the motherboard; I referred to it extensively, along with the hardware maintenance manual.
SOIC-16 highlighted in red
With the motherboard extracted from the case, the next step is to attach the Pomona 5252 to the SOIC-16 and jumper it to the BBB, which was all made very easy by this X200 guide. Somewhat blithely following the previous guide, I set up an old ATX PSU to provide 3.3v to the flash chip. However, whenever I connected it to the test clip, it would always power itself off. Strange. Going back to IRC, Nico informed me that this is in fact A VERY BAD AND DANGEROUS THING TO DO. THE INTERNET IS LYING – DO NOT USE AN ATX PSU, YOU COULD FRY YOUR MOTHERBOARD! Oops. After puzzling over how to provide enough power to the chip without the PSU, Patrick Rudolph chimed in that a) the T500 motherboard is basically indestructible (whew!), and b) the flasher itself should be able to provide enough power. Hooking the 3.3v cable into the BBB instead, I tried reading the flash chip.
$ flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512
(a bunch of output that I forgot to write down)
It works! Even with a bricked Lenovo BIOS, it is still recommended to keep a backup, so next we read the old factory ROM.
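The read command is along these lines (-r writes the chip contents to a file):

$ flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512 -r factory1.rom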
Do this three times with three distinct images, and compare their SHAsums to make sure they are all identical (otherwise the connection might be faulty). If they all match, keep one as a backup.
Note that because I left the ME as-is, it is important to only flash the BIOS region, not the entire chip.
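A write along these lines lets flashrom take the layout from the IFD and touch only the BIOS region (check the flags against your flashrom version):

$ flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=512 --ifd -i bios -w build/coreboot.rom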
Reassembly and Testing
Sadly, no instant gratification here – I had to reassemble half the laptop before I could test booting it up. However, after doing so and gingerly pressing the power button, I was greeted by the lovely SeaBIOS boot menu. It actually worked! Huzzah! Finishing reassembly, I replaced the factory Intel wireless card with an Atheros AR9462, which can run without any binary firmware. After installing Debian, I now have a laptop running completely free and open source software, all the way from the BIOS up (well, except for the ME, but I’ll fix that later).
T500 running coreboot and Debian 9
For the final icing on the cake, here is a fresh board status report for the T500. Many thanks to everyone who helped me in this process.
During the previous week, I worked on additional filesystem loaders to support parsing Flash Map (FMAP) images and the coreboot file system (CBFS). As of this week, these FS loaders are mostly complete, and can be used to import raw binaries within compiled coreboot ROMs. Support for CBFS file compression (with either LZMA or LZ4) is also implemented; compressed files will be automatically extracted. Here are some screenshots of the new FS loaders:
While these might not be the most useful FS loaders (as FMAP and CBFS are mainly used by coreboot itself), I gained additional familiarity with Ghidra’s plugin APIs for FS loaders. This will be useful, as I will be writing additional FS loaders for this project.
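For reference, the on-flash CBFS file header that the loader parses looks like this (paraphrased from coreboot’s cbfs_serialized.h; all fields are big-endian):

#include <stdint.h>

struct cbfs_file {
	char     magic[8];           /* "LARCHIVE" */
	uint32_t len;                /* length of the file data */
	uint32_t type;               /* stage, payload, raw, ... */
	uint32_t attributes_offset;  /* e.g. compression attributes; 0 if none */
	uint32_t offset;             /* start of data, relative to this header */
	/* followed by a NUL-terminated filename, optional attributes, and
	   then the (possibly LZMA/LZ4-compressed) file data */
};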
Plans for this week
I’ll continue to make minor changes to the existing FS loaders (various cleanups/etc). I’ll also start to write a FS loader for parsing ROMs with an Intel firmware descriptor (IFD), which shouldn’t be too complicated. After this is completed, I plan on writing a FS loader for UEFI firmware volumes (ideally similar to UEFITool or uefi-firmware-parser). I anticipate that this loader will be more complex, so I’ve reserved additional time to ensure its completion.
Hello everyone. I am Asami, a student in this year’s GSoC program. My project is adding a new mainboard, QEMU/AArch64, to make it easier for coreboot developers to support new boards for ARMv8. I’ve already written a small patch to enable building a sample program with libpayload for the ARM architecture. Also, I’ve read the implementation of coreboot (the main code path) for ARMv7 and of QEMU (qemu/hw/arm/vexpress.c). Now, I’ve just created a new CL for my main project and started to read the implementation of the AArch64 target machine (qemu/hw/arm/virt.c).
In this article, I’m going to talk about mistakes I made while developing coreboot. I hope it helps beginners of coreboot development. The target board is QEMU/ARM and the CPU is ARMv7.
“ERROR: Ramstage region _postram_cbfs_cache overlapped by: fallback/payload”
I faced this error when I built coreboot.rom for QEMU/ARM with coreinfo, which is a small informational payload for coreboot. The cause is that coreinfo doesn’t support the ARM architecture, so the payload is compiled as 32-bit x86.
Make sure that your payload matches your target architecture. You need to use other executable files instead of coreinfo when you target architectures other than x86. We provide libpayload, a small BSD-licensed static library.
The details of the error are:
$ make
...(omitted)....
W: Written area will abut bottom of target region: any unused space will keep its current contents
CBFS fallback/romstage
CBFS fallback/ramstage
CBFS config
CBFS revision
CBFS fallback/payload
INFO: Performing operation on 'COREBOOT' region...
ERROR: Ramstage region _postram_cbfs_cache overlapped by: fallback/payload
Makefile.inc:1171: recipe for target 'check-ramstage-overlaps' failed
make: *** [check-ramstage-overlaps] Error 1
“ERROR: undefined reference to ‘_ttb'” and “ERROR: undefined reference to ‘_ettb'”
These errors might happen when you build coreboot.rom with `make` at the root directory. In this case, you need to add TTB() to your memlayout.ld.
TTB is the translation table base, the address of the translation tables used by the MMU. TTBR0 and TTBR1 (the TTB registers) hold the start address of the TTB. We can put the TTB anywhere in memory, as long as we store its address in a TTBR.
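For illustration, the relevant part of a memlayout.ld might look like this sketch (macro names from coreboot’s <memlayout.h>; the addresses and sizes here are made up):

DRAM_START(0x40000000)
BOOTBLOCK(0x40010000, 64K)
TTB(0x40020000, 16K)       /* reserves space for the MMU translation tables */
ROMSTAGE(0x40030000, 128K)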
According to the “ARM Architecture Reference Manual ARMv7-A and ARMv7-R edition”, the difference between TTBR0 and TTBR1 is:
(B3-1345) When a PL1&0 stage 1 MMU is enabled, TTBR0 is always used. If TTBR1 is also used then:
– TTBR1 is used for the top part of the input address range
– TTBR0 is used for the bottom part of the input address range
(B4-1724) TTBCR determines which of the Translation Table Base Registers, TTBR0 or TTBR1, defines the base address for a translation table walk required for the stage 1 translation of a memory access from any mode other than Hyp mode.
TTBR0 is basically used for user processes, and TTBR1 is used for the kernel. However, the Linux kernel has used only TTBR0 to reduce context-switch time. (I’ve heard that the Linux kernel is starting to use TTBR1 for security reasons, such as Meltdown and Spectre.)
In coreboot, mmu_init() sets TTBR registers in arch/arm/armv7/mmu.c.
Fails to build a sample program with libpayload
We provide libpayload, a small BSD-licensed static library for coreboot, and we also have a sample program that shows how to use it. However, you might fail to build the sample program when you select the ARM architecture as a target, with the following error:
/usr/bin/ld: cannot represent machine `arm'
The reason this problem happens is that the Makefile in the sample directory is outdated. So I created a CL to update it for the architectures that coreboot currently supports.
“Payload not loaded”
“Payload not loaded” happens when the load address of a payload is wrong. The payload should be loaded at a RAM address that is free for use. You can define the load address via CONFIG_LP_BASE_ADDRESS if you use libpayload.
The whole procedure for building coreboot.rom with a sample payload for QEMU/ARM is:
1. Build a libc and cross compiler environment.
// In coreboot/payloads/libpayload/
$ make distclean // Always needed when switching mainboards.
$ cp configs/config.emulation-qemu-arm configs/defconfig // Or you can set it up via 'make menuconfig'
$ make defconfig
$ make
$ make install
2. Build a sample payload hello.elf.
// In coreboot/payloads/libpayload/sample
$ make // Make sure that Makefile is updated by https://review.coreboot.org/c/coreboot/+/33287
3. Build coreboot.rom with a sample payload.
// In coreboot/
$ make distclean // Always needed when switching mainboards.
$ make menuconfig // or make defconfig
Select payload “payloads/libpayload/sample/hello.elf”
$ make
Make sure to do ‘make distclean’ before switching your board target
‘make distclean’ removes build artifacts and config files. The default architecture in coreboot is x86, so you need to run ‘make distclean’ when you want to use other architectures.
Fails to update an existing CL on Gerrit
Gerrit is the code review tool used in the coreboot project. I’m familiar with GitHub, and I thought the operations in Gerrit would be almost the same as on GitHub, but they aren’t.
On GitHub, developers can create a new commit for each update. On the other hand, developers using Gerrit need to amend their commit until it is merged.
Commands to create a new CL are almost the same as on GitHub:
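A sketch of the typical workflow (coreboot pushes changes to refs/for/master; Gerrit’s commit-msg hook adds the Change-Id that identifies the CL):

$ git checkout -b my-change origin/master
// edit files
$ git add -u
$ git commit // the commit-msg hook adds a Change-Id line
$ git push origin HEAD:refs/for/master // creates the CL

To update the CL, amend the same commit and push again:

$ git add -u
$ git commit --amend // keep the Change-Id so Gerrit adds a new patch set
$ git push origin HEAD:refs/for/master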