[GSoC] coreboot for ARM64 Qemu – Week #4 and #5

This week I started dealing with the core aspects of the aarch64 design. I continued the process of building up armv8 support, along with the required patching-up, interfacing and hook-ups. In my last post, I talked about the toolchain build error (in binutils) for arm64 which I was facing on OSX. I had to remove the --enable-gold flag from the binutils configure invocation. After making this small update, build_BINUTILS looked like this, and I was able to get the toolchain working:

build_BINUTILS() {
 if [ $TARGETARCH == "x86_64-elf" ]; then
 ADDITIONALTARGET=",i386-elf"
 fi
 CC="$CC" ../binutils-${BINUTILS_VERSION}/configure --prefix=$TARGETDIR \
 --target=${TARGETARCH} --enable-targets=${TARGETARCH}${ADDITIONALTARGET} \
 --disable-werror --disable-nls --enable-lto \
 --enable-plugins --enable-multilibs CFLAGS="$HOSTCFLAGS" || touch .failed
 $MAKE $JOBS || touch .failed
 $MAKE install DESTDIR=$DESTDIR || touch .failed
}

The work had only just begun after fixing the toolchain. On attempting to build, the first error I faced was:

toolchain.inc:137: *** building bootblock without the required toolchain. Stop.

This was due to certain wrongly configured CONFIG options in the Kconfig. After this stage the initial bring-up of arm64 looked stable. Moving forward, I was met with an error in src/arch/arm64/c_entry.c

src/arch/arm64/c_entry.c: In function 'arm64_init':
src/arch/arm64/c_entry.c:52:2: error: implicit declaration of function 'cpu_set_bsp' [-Werror=implicit-function-declaration]
  cpu_set_bsp();

The inclusion of the necessary files and structures was correct, yet I kept getting this error. Furquan ultimately pointed me to change 257370, following which I could get past it. After this, I had to solve another BSD/OSX issue with date in genbuild_h.sh to get my build progressing.

Subsequently, some architectural decisions had to be made for armv8. In the initial version, I had been banking on the cbfs_media based structure in media.c for creating the read, write and map functions. But the older formulation (in cbfs_core.c and cbfs_core.h) has changed since. To keep up, and to maintain uniformity, we decided to handle this as it is handled in armv7, i.e. by creating a mapping to the qemu memory-mapped space. Another point of discussion was stage loading. It is being brought up similar to armv7 for now; this might change in the future. The organisation of the UART was also finalised: pl011.c is included, as in src/drivers/uart/Makefile.inc, by setting DRIVERS_UART_PL011 in the armv8 Kconfig.
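For context, the cbfs_media interface in question looked roughly like this. The struct is abridged from memory from the cbfs_core.h of that era, and the qemu map() below is only a hypothetical sketch of the armv7-style memory-mapped approach, not the actual implementation:

```c
#include <stddef.h>

/* Abridged shape of the cbfs_media backing-store interface. */
struct cbfs_media {
	void *context;
	int (*open)(struct cbfs_media *media);
	int (*close)(struct cbfs_media *media);
	size_t (*read)(struct cbfs_media *media, void *dest,
		       size_t offset, size_t count);
	void *(*map)(struct cbfs_media *media, size_t offset, size_t count);
	void *(*unmap)(struct cbfs_media *media, const void *address);
};

/* Stand-in for the flash window qemu exposes as plain memory. */
static char flash[] = "CBFSDATA";

/* Hypothetical map(): since the flash is already memory-mapped,
   mapping degenerates to pointer arithmetic into that window. */
static void *qemu_map(struct cbfs_media *media, size_t offset, size_t count)
{
	(void)media;
	(void)count;
	return flash + offset;
}
```

That the flash appears as ordinary memory on qemu is exactly why the armv7 approach carries over so directly.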

The next hitch was dealing with SMP. In my proposal, I had suggested that incorporating SMP into the emulation could be a long-term goal. But since SMP is part of the core arm64 logic, it cannot be completely ignored at this stage. I am met with this:

build/arch/arm64/c_entry.romstage.o: In function `seed_stack':
coreboot/src/arch/arm64/stage_entry.S:94: undefined reference to `smp_processor_id'


A CPU file which adds smp_processor_id() is needed at this point, and I am currently working on it. Next week's plan is to get past these issues (and meet new unforeseen ones 😛) and boot off on qemu.
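As a sketch of what such a file needs to provide: on arm64 the processor id is conventionally derived from the affinity fields of MPIDR_EL1 (read with `mrs x0, mpidr_el1`). The register read itself needs aarch64 inline assembly, so this host-side sketch only shows the bitfield extraction; the mask name and helper are mine, not coreboot's:

```c
#include <stdint.h>

/* Aff0 occupies bits [7:0] of MPIDR_EL1; for the simple topology qemu
   creates with -smp N, affinity level 0 is the core number. */
#define MPIDR_AFF0_MASK 0xffULL

static inline unsigned int mpidr_to_cpu_id(uint64_t mpidr)
{
	/* Mask off everything but Aff0 to get the CPU index. */
	return (unsigned int)(mpidr & MPIDR_AFF0_MASK);
}
```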




[GSoC] EC/H8S firmware week #3|4

In the last 2 weeks I managed to flash the H8S on the T40 using the OEM Renesas flash tool, including their flash application. Flashing works in 2 steps: first, upload a flash application into the H8S; second, this flash application receives the firmware (via serial) and writes it into the flash. Thanks to Renesas, this application is available in source code. I would like to write my own flasher later.

But I wasn’t able to create a proper application yet. I could write the LED program in assembly, but a working C compiler is needed anyway.

I built a toolchain with gcc 4.9.2. The toolchain buildscript is very simple and can be found on github. I have stopped my efforts to build one based on gcc 4.4.6 for now. There’s also a debian package for h8300 (based on gcc 3.4.6) which may be a good alternative. Before continuing with building toolchains and my LED application, I’m reading up on linker scripts and taking a look at how the compiler works (e.g. what must a crt0 do?).

At the moment I know how the application should be compiled, where the reset vectors are and where the entry point is. But putting these things together into a binary image is now my task.

The dev board I mentioned in my last post was stuck with the German postal service for the last 2 weeks, because they were on strike. The board is now in the customs office and I’ll collect it in the next few days, which will take several hours in Berlin.

[GSoC] Thinking about Logging

Hey coreboot community!

It’s been just under a month of working on coreboot. I’ve not been keeping up with my blog posts. So here is the first of three particularly long posts detailing what I’ve been up to, what decisions I’ve been making, and what problems I’ve faced. This post is a background on logging, the problems it has as of now, and how not to fix them.

Tomorrow I will be posting a blog post giving some background on a change I made to solve these problems in a fair way. I will also be posting a blog post about what’s up next for me.

So how does logging work?

For most programs, logging works simply through slinging text at some interface. A developer wanting to leak some information uses a function to print some information. You can then look at stdout/stderr, or an output file, or syslog, or whatever, and get that leaked run-time information.

Where coreboot differs from most programs is that coreboot is pre-OS. The abstractions that our operating systems provide aren’t available for coreboot, and not just the higher abstractions like syslog. Coreboot is self contained. Where a Hello World using stdio might be something like:

fprintf(stdout, "Hello World\n")

coreboot doesn’t have that privilege. In theory, we could provide all the same familiar functions, but why? We really don’t need them. This information ultimately ends up going out on some UART, so why bother with needless abstractions? We’re constrained to fitting coreboot and a payload and a video option rom and a TXE and so on into 8MB or less… It’s important that we are prudent in what we make available.

So what does the coreboot project do? We have printk(), which you might associate with the linux kernel (‘print kernel’). In fact, this is exactly the case. If you go into src/console/printk.c, you’ll see a humorous message in the copyright notice:

* blatantly copied from linux/kernel/printk.c
* Copyright (C) 1991, 1992 Linus Torvalds

Why? Recall that coreboot used to be linuxbios, and linuxbios was largely an effort to bootstrap linux. For a really awesome background on coreboot and its origins, and particularly for an interesting look at just how much coreboot has changed over time, you might be interested in watching the first few minutes of this video

But just because we’re inspired by linux doesn’t mean we’re inside linux. In spite of the copyright, coreboot’s printk() no longer resembles linux’s at all except in name. We’re fully responsible for this function, so how does it work?

Let’s follow a debugging print statement, for example, one at src/lib/cbfs.c:102

DEBUG("Checking offset %zx\n", offset);

Already we have something interesting. What’s 'DEBUG()'? I thought it was printk()?

Go to the top of the file and you’ll see some preprocessor

#if CONFIG_DEBUG_CBFS
#define DEBUG(x...) printk(BIOS_SPEW, "CBFS: " x)
#else
#define DEBUG(x...)
#endif

Oh, okay. So depending on DEBUG_CBFS, the above DEBUG() call either means an empty statement, or

printk(BIOS_SPEW, "CBFS: " "Checking offset %zx\n", offset);
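Worth noting: the "CBFS: " prefix lands inside the format string through plain C string-literal concatenation, which the compiler performs at compile time, so there is no runtime cost to the tagging:

```c
#include <string.h>

/* Adjacent string literals fuse into one, which is how the macro's
   "CBFS: " prefix ends up glued onto the caller's format string. */
static const char fused[] = "CBFS: " "Checking offset %zx\n";
```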

Firstly, what’s a 'BIOS_SPEW'? Just a constant, as defined in src/include/console/loglevels.h. You’ll notice again that these loglevels mirror the linux loglevels exactly, replacing 'KERN_' with 'BIOS_'.

What’s the point of it? If you want to control the verbosity of your logs, you set CONFIG_DEFAULT_CONSOLE_LOGLEVEL to the loglevel you want and you’ll get more or less verbosity as a result. If a printk statement is tagged with a loglevel at or below your configured log level, it prints.

Why is this file doing this alias of printk? Is it done elsewhere? Why is there a separate CONFIG_DEBUG_CBFS variable? I’ll get to that in a second. This is actually some of the problem I’m setting out to fix. For now, let’s continue by assuming that our configuration is set such that it expands out to printk. Where, then, is printk defined?

Go into src/include/console/console.h and you’ll see what printk() is about:

static inline void printk(int LEVEL, const char *fmt, ...) {}

or, when console output is compiled in:

#define printk(LEVEL, fmt, args...) \
do { do_printk(LEVEL, fmt, ##args); } while(0)

Hmm, why the wrapper with preprocessor? If we are producing a final version of coreboot for a production setting, there’s an advantage to muting the logging at compile time: in aggregate, debugging statements carry a meaningful performance penalty, and we can speed up the boot by turning them into empty statements.
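A host-side sketch of that pattern; CONFIG_CONSOLE and the stubbed do_printk() here are stand-ins of mine so the snippet is self-contained, not coreboot's real names for this knob:

```c
#define CONFIG_CONSOLE 1	/* flip to 0 to compile every printk() away */

static int calls;		/* stub counter in place of real console output */

static int do_printk(int level, const char *fmt, ...)
{
	(void)level;
	(void)fmt;
	return ++calls;
}

#if CONFIG_CONSOLE
#define printk(LEVEL, fmt, args...) \
	do { do_printk(LEVEL, fmt, ##args); } while (0)
#else
/* The empty inline still type-checks the arguments but emits no code. */
static inline void printk(int level, const char *fmt, ...) {}
#endif
```

With the option off, the call sites remain valid C and still get their arguments checked, but the optimizer reduces them to nothing.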

So it’s really do_printk. It comes from src/console/printk.c. I won’t post the full contents, but I’ll explain briefly what it does:

  1. checks LEVEL against the configured log level, muting statements if they’re more verbose than what is requested
  2. when configured as such, in pre-RAM settings, mutes statements not originating from the bootstrap processor
  3. temporarily stops function tracing (a configuration option to watch functions and their call sites)
  4. temporarily locks the console resources to a single thread, ensuring writes do not overlap
  5. deals with loading the varargs into their type via va_start()
    (notice the 'args...' in the signature? That allows printk to take
    a flexible number of arguments. It places n arguments onto the stack
    so that they can be popped off later. Functions interact with
    variable arguments via va_arg().)
  6. calls vtxprintf with wrap_putchar, the format string, the variable argument list, and NULL. This interpolates the varargs into the format string and pushes it to the outputs
  7. cleans up the variable arguments with va_end()
  8. flushes the tx (not all output methods require it, but it’s harmless in those cases)
  9. unlocks the console resources
  10. re-enables tracing
  11. returns the number of bytes of the output
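Stripped of the locking, tracing and flushing, the steps above can be sketched on the host like this; the buffer, the constant and vsnprintf() standing in for vtxprintf() are all stand-ins of mine, not coreboot's actual plumbing:

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#define CONFIG_DEFAULT_CONSOLE_LOGLEVEL 7	/* BIOS_SPEW and below print */

static char console_buf[256];	/* fake console output sink */
static size_t console_len;

static void wrap_putchar(unsigned char c)
{
	if (console_len < sizeof(console_buf) - 1)
		console_buf[console_len++] = c;
}

static int do_printk(int msg_level, const char *fmt, ...)
{
	char tmp[128];
	va_list args;
	int i, j;

	/* step 1: mute anything more verbose than the configured level */
	if (msg_level > CONFIG_DEFAULT_CONSOLE_LOGLEVEL)
		return 0;

	/* steps 5 and 7: va_start()/va_end() bracket the vararg access;
	   step 6: interpolate the varargs into the format string */
	va_start(args, fmt);
	i = vsnprintf(tmp, sizeof(tmp), fmt, args);
	va_end(args);

	/* step 6 (continued): push the result out byte by byte */
	for (j = 0; tmp[j] != '\0'; j++)
		wrap_putchar((unsigned char)tmp[j]);

	/* step 11: return the number of bytes of the output */
	return i;
}
```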

One cool thing to take away from this is how the question of where console output goes is abstracted away from do_printk. wrap_putchar is another (aptly named) wrapper for do_putchar. do_putchar calls src/console/console.c’s console_tx_byte(), which contains a bunch of calls like '__uart_tx_byte()' or '__usb_tx_byte()' or '__cbmemc_tx_byte()' which might be defined or left empty via preprocessor.

This allows us to not have to consider their differences or implementations in do_printk(), just in configuration and in their own implementations of init(), tx_byte(), and tx_flush().
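A minimal illustration of that dispatch, with the UART faked by a buffer and the cbmem console standing in for a back end that was configured out (the leading underscores are dropped here, since double-underscore names are reserved to the implementation in hosted C):

```c
#include <string.h>

static char uart_log[64];	/* fake UART: just record the bytes */
static size_t uart_len;

/* A "real" back end: append the byte to our fake UART. */
static void uart_tx_byte(unsigned char c)
{
	if (uart_len < sizeof(uart_log) - 1)
		uart_log[uart_len++] = c;
}

/* A back end left empty via preprocessor in this configuration. */
static void cbmemc_tx_byte(unsigned char c)
{
	(void)c;
}

/* console_tx_byte() can call every back end unconditionally,
   because the disabled ones cost nothing. */
static void console_tx_byte(unsigned char c)
{
	uart_tx_byte(c);
	cbmemc_tx_byte(c);
}
```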

The entirety of that is interesting, which is why I wanted to talk about it, but it’s not what I’m touching. Now we know how logging works, we know where we can influence it in do_printk(), and we’ve seen that it’s influenced elsewhere in preprocessor, too.

Let’s go back to the preprocessor 'DEBUG()' in src/lib/cbfs.c:

#if CONFIG_DEBUG_CBFS
#define DEBUG(x...) printk(BIOS_SPEW, "CBFS: " x)
#else
#define DEBUG(x...)
#endif

If loglevels control verbosity, why are there additional CONFIG_DEBUG_CBFS variables? The gist of it is that mapping all statements to a level of 1-8 really isn’t granular enough for coreboot anymore. Sometimes you want to see most everything, but you’re not working on changes for CBFS and don’t need to see it dump its tables. Other times you’re working on a bug in CBFS and need all the information you can get, so you set the variable and get more output.

CONFIG_DEBUG_CBFS is not the only variable like this, nor should it be. There’s many different parts of coreboot, some are relevant all the time in your console output, some are rarely relevant, some are relevant when you need them to be, and if we can get more granularity without it being at the expense of performance or of making a complex mess, we should do it.

But there’s more than just CBFS and SMM and the things already defined with CONFIG_DEBUG_* variables. What that means is this will require quite a bit of grokking the source of coreboot, and classifying things. (Can we do this in a programmatic way?)

The first way I tackled this problem was to think about loglevels. What did they mean? The descriptions we had in the header were the same as those in the linux kernel:

#define KERN_EMERG "<0>" /* system is unusable*/
#define KERN_ALERT "<1>" /* action must be taken immediately*/
#define KERN_CRIT "<2>" /* critical conditions*/
#define KERN_ERR "<3>" /* error conditions*/
#define KERN_WARNING "<4>" /* warning conditions*/
#define KERN_NOTICE "<5>" /* normal but significant condition*/
#define KERN_INFO "<6>" /* informational*/
#define KERN_DEBUG "<7>" /* debug-level messages*/

I think this is poorly defined, particularly for coreboot. What’s the difference between a “normal but significant condition” and an “informational” message? What’s critical, what’s an error? In the context of the linux kernel these are clearly distinct but coreboot is do-or-die in a lot of cases. It stands to reason that a kernel can take many more steps than coreboot to recover.

I made a contribution to the header to try to define these in a more clear way. I didn’t get a whole lot of feedback, but it made it in. If you’re reading this and you didn’t see it, please look it over. If you think anything is wrong here, let’s fix it. I felt pretty confident that this mapped well to their current usage within coreboot.


But that’s really none of the battle. Shockingly, the granularity problem still exists even though I put some comments in a file!

My first thought was, “Well let’s just make a bunch of subsystems, each with their own loglevel 1-8. We can have a CBFS subsystem and an SMM subsystem and a POST code subsystem and so on.”

Let’s explore this idea now. I didn’t at the time: I jumped in and implemented it before thinking, and realized some problems with the idea, quickly putting me back at square one.

When would a POST code qualify as BIOS_SPEW and when would it be BIOS_CRIT? It’s a laughable question. Some of these things are more fitting for a toggle than for a varying level of verbosity, and the current system for doing so is genuinely the best way to make decisions about printing such information… (Though, that doesn’t mean it’s implemented in the best way.)

Meanwhile CBFS probably does have enough of a variation that we can give it its own loglevels. But does it need to be mapped over 8 values? Maybe 4 would be more fitting.

So we want POST mapped to two values. We want CBFS mapped to 4. We want X mapped to N. How can we make a framework for this without creating a mess of Kconfig? How can we make sure that developers know what values they can use?

How are we going to do this in such a way that we don’t mess with preexisting systems? People can set their debug level in CMOS, how is this going to play into that system, where space is limited? We can’t afford to put unlimited configuration values into there. How are we going to do this in a way that extends printk, rather than re-implements it, without breaking everything that exists now? Remember above when I wondered about if other places do preprocessor definitions of printk like 'DEBUG()'? The answer is yes. Nearly everywhere.

It’s a problem of comparing compile-time configuration against runtime configuration, of weighing preprocessor against implementing things in code, and of weighing simplicity against ease.

So in summation:

  • We need more granularity in logging. Right now we’re putting the preprocessor to work to accomplish that, and while I first thought this was a bad practice, I came to feel that it actually isn’t so bad.
  • Outside of the additional variables to accomplish extra verbosity, there is little organization of printk statements besides by their log
    level. If log messages were properly tagged, logs would be much cleaner and easier to parse. This needs to be accomplished as well.
  • Logging works through an interface called do_printk(), which is separated out from the lower levels of printing, giving us a lot of freedom to work.
  • We must stay conscious of problems that we can introduce through creating too much new configuration.

Check in tomorrow for how I chose to solve this.

[GSoC] EC/H8S firmware week #2

The last week was a little bit depressing. I did some resoldering. Pin P90 wasn’t connected to 3.3V, which is needed to enter the flash boot mode; it was soldered to the VCC of the serial level shifter (MAX3243). After searching some minutes with the multimeter for a better power source, I decided to use 3.3V near the H8S. It’s now a very long cable across the board.

Now let’s see how well this works? Nothing :(. I rechecked with a voltmeter and found another problem with P91 (/SUS_STAT). When connecting SUS_STAT via a 1k resistor to 3.3V, the voltmeter shows 0.04V. This means something else is driving it to 0V. My hope was that the chipset wouldn’t drive this pin until it’s powered, but sadly it drives it to 0V. What’s SUS_STAT? SUS_STAT can be used as LPCPD (LPC power down) and is used to notify devices to enter a low power state soon. Suspend Status is active low, which means all devices should be in low power mode.
What should I do now? I need 3.3V on that line.

There are multiple solutions:

  • Remove the 1k resistor and burn the line to death. But likely this could kill the chipset, or
    at least this certain pin or multiple pins
  • Cut the pin
  • Bend the pin upwards while desoldering
  • Desolder the whole chip and bend afterwards, resolder
  • Replace the chip with a socket (expensive and rare)

This decision was not easy to take, especially because I had never done most
of these things. This kept me stuck for a while until Peter helped me out:
he bent a single pin upwards. Thanks!

The next week’s milestone is still flashing the EC, the same goal since the first week. So the time schedule will be a little bit chaotic. Maybe I can hurry up and reach another weekly goal in less than a week.

Because I was stuck on that a little bit, I took another look on ebay and bought a development board with a H8S/2633. The 2633 is a little bit newer than the 2100 series which is used in Lenovo laptops. The board should arrive in one week, but at the moment it’s in German customs. Such development boards are hard to get for a “good” price. Brand new boards start at several hundred euros or dollars. E.g. the E10 debugger costs around 1000 euros, and it’s only a stupid USB device. I already bought an E8 on ebay, the previous generation debugger, but it cannot debug the chip, only flash it with the Renesas software/IDE.

Besides my project I’ve done some other work on coreboot. I helped Holger Levsen create a reproducible build job for coreboot on reproducible.debian.net. More info about reproducible builds is on their wiki page. To improve reproducibility I created 2 patches: #10448 #10449. They cleaned up reproducibility bugs in coreboot, and without building payloads, most targets are now reproducible. Great thanks to Holger Levsen for his work on that!

[GSoC] coreboot for ARM64 Qemu – Week 1

To begin with the aim of introducing coreboot for arm64 qemu, the first task I had to accomplish was to set up a qemu aarch64 environment to work on. In this post, I will talk about building qemu and then booting a kernel that allows us to begin experimentation with this architecture.

To begin building qemu, we need a few packages:

pkg-config, libfdt-dev

Next, we need a qemu version which supports aarch64, so I installed qemu 2.3.0. Here you can also do:

sudo apt-get build-dep qemu

Since I was building on a mac, I had to do a brew install qemu (again, v2.3.0). On mac, it is recommended to use actual gcc rather than the existing ‘gcc’ which is symlinked to llvm-gcc (x86_64-apple-darwin13.4.0/4.9.2/). Going with the innate gcc kept giving me pains, so I downloaded gcc 4.9.2, created a manual link and used it for my build. Moving on, we now need the source code:

git clone git://git.qemu.org/qemu.git qemu.git
cd qemu.git
./configure --target-list=aarch64-softmmu

The last command will usually return an error saying DTC (libfdt) is not present. The problem is that qemu searches for the dtc binaries in qemu/dtc, so even if you install dtc using sudo apt-get install device-tree-compiler, you keep getting this error: the binaries need to be in qemu/dtc. Running this in the repo will fix it:

git submodule update --init dtc

Then, run the ./configure command again. The output can be found here. We then have to run a make command:

make

This gives the following output. After this successful build, we have an executable ./qemu-system-aarch64 in qemu.git/aarch64-softmmu. I then used a prebuilt kernel image that has a combined initial RAM disk (initrd) and a basic root file-system. It can be downloaded from here.

Then finally, we run this kernel in our generated aarch64 system to watch the linux boot sequence and eventually reach a login prompt.

qemu-system-aarch64 -machine virt -cpu cortex-a57 -machine type=virt -nographic -smp 1 -m 2048 -kernel ~/Downloads/aarch64-linux-3.15rc2-buildroot.img  --append "console=ttyAMA0"

The boot sequence results in:

Welcome to Buildroot
buildroot login: root
# ls
# uname -a 
Linux buildroot 3.15.0-rc2ajb-00069-g1aae31c #39 SMP Thu Apr 24 11:48:57 BST 2014 aarch64 GNU/Linux

This gives us an aarch64 qemu environment with linux on which we can begin building coreboot.

With the development platform ready, I now begin my actual work on building coreboot for qemu arm64. For this week, I look at the (now obsolete) foundation-armv8 patchset and begin my development. The first task would be to create the appropriate media structure and functions that I would use.


[GSoC] EC/H8S firmware week #1

The first task of my project is a working development board. A development board means that I have serial communication and can flash new firmware onto the chip even when the whole mainboard isn’t booting. The chip is a H8S 16-bit microcontroller with 64KB to 128KB of EEPROM and is available in different packages: BGA and TQFP. BGA means the pins are under the chip; TQFP has pins on the side. TQFP is nice to hack, but most modern Thinkpads use the BGAs. However, a T40 or T42 uses a TQFP package. A friend donated his old T42 to me, thanks a lot! Now, with a hackable T42, I can start to create a development board out of the T42 mainboard. Like most other microcontrollers, this chip has a programmable bootloader in a ROM (called the rom loader). The bootloader can boot into different states, configurable via 5 pins (MD0, MD1, P90, P91, P92).
P90 to P92 are only read when MD0 and MD1 are in a special boot state.
After reading the documentation I found that the pins must match the following voltage levels to select the flash boot mode:

MD0-MD1 = 0V, P90-P92 = 3.3V.

Besides these configuration pins we need some additional wires to the following pins:
/RES – reset active low
UART RX – serial communication

Now it gets interesting. The MCU (microcontroller unit) can use a pin for different purposes depending on the PCB designer. Such pins are called multifunction pins. Hopefully we don’t get blocked by inaccessible pins. After reading more documentation and using a multimeter on the board, I found out that /RES, RX, TX and MD1 require soldering, but are easily accessible. MD0 is already in a good state.
P90 is connected via a resistor to ground, but we need it to 3.3V.
Let’s find the resistor to solder 3.3V to it… Mhh, tricky! 3 hours later I found it on the
board, hidden under the PMH4 (2nd EC/GPIO expander). Very uncommon.

P91 is named /SUS: suspend, active low, and it can be driven by multiple controllers (chipset + H8S).
Because we want to boot linux on the main cpu later in the project we should not kill the chipset. I added a pin connector to this pin.

And the last pin, P92, was connected to the SuperIO UART’s level shifter (MAX3242). I had to desolder that chip because it was driving P92.

Near the EC are 2 testpoints which are connected to an I2C bus. I soldered these too, because I2C could be useful. The wiring so far:

  • 1 P91
  • 3 MD1
  • 5 RX
  • 6 TX
  • 1 patch cable with a 3.3V + 1k resistor (for P91)

So far so good. But somehow it doesn’t work. Some pins don’t have the right level. P92 doesn’t have 3.3V. Why not?
P92 is pulled up via a resistor to the VCC of the TTL shifter, and that VCC isn’t powered. I need to resolder it to another 3.3V pin somewhere and take another look
at the other levels too.

PS. Some work was already done before GSoC started. I posted the first part of soldering on my blog

coreboot GSOC 2015 projects

Welcome the coreboot GSOC 2015 students!

Łukasz Dmitrowski – End user flash tool

lynxislazus – H8S EC firmware

Naman – Qemu Support for ARM64 Architecture

n1cky –  Improve/automate reporting and testing


A special thanks to this years mentors: adurbin, furquan, martinr, carldani, stepan, pgeorgi, rminnich, ruik, stefanct, and carebear!

coreboot GSoC 2015

coreboot has been accepted as a mentoring organization for GSoC 2015! We are accepting student applications for coreboot, flashrom, and SerialICE projects.

Students:  Welcome! Please review the coreboot GSoC page and Project Ideas page. Student applications formally open on 16 March at 19:00 UTC.

Mentors:  Please signup in Melange and request to join the coreboot project. You must signup for 2015, even if you already have a Melange account. You may also find the GSoC Mentor Guide helpful.

Community:  Please recruit and welcome prospective students to the organization. You may provide ideas on the Projects Page.

GSoC [infrastructure] : Along the way, something went terribly wrong

I started working with AMD platforms’ infrastructure with high hopes of being able to better manage the CBMEM setup. While I have a selection of family14 boards to work with, things did not continue so well:

First off, the tree had literally thousands of lines of copy-pasted or misplaced AGESA interface code remaining in it. A lot of that should have been caught in review, but it appears a few years back the attitude was that if the coreboot project was lucky enough to get some patches from an industry partner, the code must be good (as the development was paid for!) and it just got rubber-stamped and committed.

Second, the agreement I have for chipset documentation is not open-source friendly. It contains a clause saying all documentation behind the site login is to be used for internal evaluation only. I was well aware of this at the beginning of GSoC, and at that time I expected my mentor organisation would be able to get me in contact with the right people at AMD to get this fixed. But that never happened. I am also concerned by the small amount of feedback received, as essentially nobody in the community has fam16kb hardware to test.

So currently I am balancing which parts of my work on AGESA I should and can publish and which I cannot. Furthermore, the first evidence that the vendor has decided to withdraw from releasing AGESA sources has appeared for review. In practice there has been zero communication with the coreboot community on this, so I anticipate that the mistakes that were made with FSP binary blobs will be repeated.

Needless to say, this has had an impact on my motivation to work further on AGESA, as my efforts are likely to go to waste with any new boards using blobs. I guess this leaves plenty of opportunities for future positive surprises once we have things like complete timestamps, CBMEM and USBDEBUG consoles and generally any working debug output from AGESA implemented. I attempted to initiate communication around these topics already in fall 2013 without success, and it is sad to see that the communication between the different parties interested in overall tree maintenance has not improved at all since then.

GSoC 2014 [flashrom] Support for Intel Bay Trail, Rangeley/Avoton and Wildcat Point

While we were busy updating our AMD driver code to accommodate the new SPI controller found in Kabini and Temash, Intel has also changed their SPI interface(s) in a way that required quite some effort to support in flashrom. A pending patch set is the result of the work of a number of parties, and I will briefly explain some details below.

All started with a patch for ChromiumOS’s flashrom fork in fall 2013 that introduced support for Intel’s Bay Trail SoCs which are used in a number of currently shipping or announced Chromebooks. Bay Trail is part of the Silvermont architecture also in other SoCs intended for different use cases like mobile phones (Merrifield and Moorefield) or special-purpose servers (Avoton and Rangeley). At least for the latter two the SPI interface is equivalent to Bay Trail’s. This was handy for Sage when they were developing their support package for Intel’s Mohon Peak (Rangeley reference board) which was upstreamed to the coreboot repository shortly before this blog post was written. They ported the patch to vanilla flashrom, added the necessary PCI IDs and submitted the result to our mailing list at the end of May.

Because we, the flashrom maintainers, are very picky, the code could not be incorporated as is. I took the patch and completely reworked and refactored it so that more code could be shared and we are hopefully better prepared for future variations of similar changes. Additionally, I have also backported the Intel Wildcat Point support that ChromiumOS got already in May.

The major part of my contribution was not simply integrating foreign code, but refactoring and refining it where possible as well as verifying it against datasheets. While digging these numerous datasheets and SPI programming guides I have also fixed the problem of hardware sequencing not working on Lynx Point (and Wildcat Point) as it was reported in March when I had no time to correct it.

All of this is not committed to the main repository yet, but will be soon. It is mostly untested so far and I would very much welcome any testers with the respective hardware.