[GSoC] Ghidra firmware utilities, weeks 6-8

Hello everyone. It’s been a few weeks since I wrote my last blog post, and during that time I’ve been working on the FS loader for UEFI firmware images. This FS loader aims to bring functionality similar to UEFITool to Ghidra.

As described in the previous blog post, Intel platforms divide the flash chip into several regions, including the BIOS region. On UEFI systems, the BIOS region stores the UEFI firmware components, which are organized in a hierarchy. This hierarchy begins with UEFI firmware volumes, which contain FFS (firmware file system) files; in turn, these FFS files can contain multiple sections. Firmware volumes can also be nested within FFS files. This helpful reference by Trammell Hudson, as well as this presentation from OpenSecurityTraining, has some additional information regarding UEFI firmware volumes.

For example, a UEFI firmware implementation could have a firmware volume specifically for the Driver eXecution Environment (DXE) phase. DXE drivers, stored as FFS files within that volume, could each consist of a PE32 section holding the actual driver binary and a UI section holding the driver’s name.
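
To give a concrete idea of what the loader looks for, here is a minimal sketch of parsing the fixed portion of a firmware volume header (the EFI_FIRMWARE_VOLUME_HEADER from the PI specification). This is plain Java written for illustration rather than the actual loader code, and the class/method names are made up:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Minimal sketch of recognizing a UEFI firmware volume header in a BIOS region image.
// Offsets follow the EFI_FIRMWARE_VOLUME_HEADER layout; names are illustrative only.
public class FirmwareVolumeSketch {
    private static final int FV_SIGNATURE = 0x4856465F; // "_FVH" read as a little-endian int

    // Returns the volume length if `data` starts with a firmware volume header, or -1 otherwise.
    public static long parseVolumeHeader(byte[] data) {
        if (data.length < 0x38) {                 // fixed header is 0x38 bytes
            return -1;
        }
        ByteBuffer buf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN);
        buf.position(16 + 16);                    // skip ZeroVector and FileSystemGuid
        long fvLength = buf.getLong();            // total volume size, including the header
        int signature = buf.getInt();             // must be "_FVH"
        if (signature != FV_SIGNATURE) {
            return -1;
        }
        int attributes = buf.getInt();
        int headerLength = buf.getShort() & 0xFFFF;
        // FFS files follow at `headerLength`, each aligned to 8 bytes within the volume.
        return fvLength;
    }
}
```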

So far, I’ve implemented basic firmware volume parsing in the FS loader; I’ve pushed this to the GitHub repository. Currently, this doesn’t handle FFS file or section parsing.

FFS file and section parsing is still a work-in-progress, but here’s a preview:

This is mostly complete, but there are still some nasty bugs related to FFS alignment that I’m working on fixing. My focus for this week is to finish up this FS loader.

Update (2019-07-19)

I have committed support for UEFI FFS file/section parsing in the GitHub repo. Please open an issue report if you encounter any problems with it (such as missing files/sections that UEFITool or other tools parse without trouble).

[GSoC] Ghidra firmware utilities, week 5

Hi everyone. As stated in my previous blog post, I have been working on an FS loader for Intel Flash Descriptor (IFD) images. The IFD is used on Intel x86 platforms to define various regions in the SPI flash. These may include the Intel ME firmware region, BIOS region, Gigabit Ethernet firmware region, etc. The IFD also defines read/write permissions for each flash region, and it may contain various configurable chipset parameters (PCH straps). Additional information about the firmware descriptor can be found in this helpful post by plutomaniac on the Win-Raid forum, as well as these slides from Open Security Training.

As a filesystem loader, this exposes the flash regions as files. FLMAP0 in the descriptor map and the component/region sections are parsed to determine the base and limit addresses for each region; both IFD v1 and v2 (used since Skylake) are supported. Ghidra supports nested filesystem loaders, so the FMAP and CBFS loaders that I’ve previously written can be used for parsing the BIOS region.
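
As a rough illustration of the FLMAP0/region parsing described above, here is a sketch of locating the BIOS region in a descriptor image. The bit layouts follow my reading of the descriptor documentation and ifdtool; the signature offset, names, and structure are assumptions for illustration, not the loader’s actual code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Rough sketch of locating the BIOS region via the Intel Flash Descriptor.
// Class/method names are hypothetical; bit layouts follow ifdtool as I understand it.
public class IfdRegionSketch {
    private static final int IFD_SIGNATURE = 0x0FF0A55A; // at offset 0x10 in descriptor-mode images
    private static final int REGION_BIOS = 1;            // region index 1 = BIOS region

    // Returns {base, limit} of the BIOS region, or null if not found/undefined.
    public static long[] findBiosRegion(byte[] flash) {
        ByteBuffer buf = ByteBuffer.wrap(flash).order(ByteOrder.LITTLE_ENDIAN);
        if (buf.getInt(0x10) != IFD_SIGNATURE) {
            return null;
        }
        int flmap0 = buf.getInt(0x14);                    // first DWORD of the descriptor map
        int frba = ((flmap0 >> 16) & 0xFF) << 4;          // flash region base address
        int flreg = buf.getInt(frba + REGION_BIOS * 4);   // region register for the BIOS region
        long base = (long) (flreg & 0x7FFF) << 12;        // base address, in 4 KiB units
        long limit = ((long) ((flreg >> 16) & 0x7FFF) << 12) | 0xFFF;
        if (base > limit) {
            return null;                                   // region is unused/undefined
        }
        return new long[] { base, limit };
    }
}
```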

If you encounter any issues with the IFD FS loader, please feel free to submit an issue report in the GitHub repository.

Plans for this week

I have started working on a filesystem loader for UEFI firmware volumes. In conjunction with the IFD loader, this will allow UEFI firmware images to be imported for analysis in Ghidra (behaving somewhat similarly to the excellent UEFITool).

[GSoC] Ghidra firmware utilities, week 4

During the previous week, I worked on additional filesystem loaders to support parsing Flash Map (FMAP) images and the coreboot file system (CBFS). As of this week, these FS loaders are mostly complete, and can be used to import raw binaries within compiled coreboot ROMs. Support for CBFS file compression (with either LZMA or LZ4) is also implemented; compressed files will be automatically extracted. Here are some screenshots of the new FS loaders:

While these might not be the most useful FS loaders (as FMAP and CBFS are mainly used by coreboot itself), I gained additional familiarity with Ghidra’s plugin APIs for FS loaders. This will be useful, as I will be writing additional FS loaders for this project.
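
For reference, the on-flash format that the CBFS loader walks is fairly simple. Below is a sketch of reading a single CBFS file header, following coreboot’s struct cbfs_file as I understand it (all fields big-endian); the class and method names are illustrative only, not the plugin’s actual code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

// Sketch of reading a single CBFS file header. Field layout follows coreboot's
// struct cbfs_file as I understand it; names are illustrative only.
public class CbfsFileSketch {
    public static void describeFile(byte[] cbfs, int headerOffset) {
        ByteBuffer buf = ByteBuffer.wrap(cbfs, headerOffset, 24).order(ByteOrder.BIG_ENDIAN);
        byte[] magic = new byte[8];
        buf.get(magic);                                  // should be "LARCHIVE"
        if (!new String(magic, StandardCharsets.US_ASCII).equals("LARCHIVE")) {
            return;
        }
        int len = buf.getInt();                          // length of the file data
        int type = buf.getInt();                         // e.g. stage, payload, raw
        int attributesOffset = buf.getInt();             // 0 if no extended attributes
        int dataOffset = buf.getInt();                   // data starts at headerOffset + dataOffset
        // The file name is a NUL-terminated string after the fixed header; compression
        // (LZMA/LZ4) is described by an extended attribute when present.
        System.out.printf("CBFS file: len=%d type=0x%x data at +0x%x%n", len, type, dataOffset);
    }
}
```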

Plans for this week

I’ll continue to make minor changes to the existing FS loaders (various cleanups/etc). I’ll also start to write an FS loader for parsing ROMs with an Intel firmware descriptor (IFD), which shouldn’t be too complicated. After this is completed, I plan on writing an FS loader for UEFI firmware volumes (ideally similar to UEFITool or uefi-firmware-parser). I anticipate that this loader will be more complex, so I’ve reserved additional time to ensure its completion.

[GSoC] Ghidra firmware utilities, week 3

Last week, I finalized my work on the PCI option ROM loader, which was the first part described in my initial proposal for this project. This consists of a filesystem loader for hybrid/UEFI option ROMs and a binary loader for x86 option ROMs.

Background information on PCI option ROMs

Option ROMs may contain more than one executable image; for example, a graphics card may have a legacy x86 option ROM for VGA BIOS support as well as a UEFI option ROM to support the UEFI Graphics Output Protocol. x86 option ROMs are raw 16-bit binaries. The entry point is stored as a short JMP instruction in the option ROM header; the BIOS will execute this instruction to jump to the entry point. In contrast, UEFI images contain a UEFI driver, which is a PE32+ binary. This binary can be (and frequently is) compressed with the EFI compression algorithm, a combination of Huffman coding and the LZ77 algorithm.

Filesystem loader

The filesystem loader allows hybrid/UEFI option ROMs to be imported. It also transparently handles the extraction of compressed UEFI executables.

Initially, I attempted to write a Java implementation of the EFI Compression Algorithm for use in the FS loader, but ran into several issues when handling the decompression of certain blocks. I eventually decided to reuse the existing C decompression implementation in EDK2, and wrote a Java Native Interface (JNI) wrapper to call the functions in the C library.
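
The JNI bridge itself is conceptually simple: a Java class loads a shared library and declares a native method, and the C side of that library calls into the EDK2 decompressor. Here’s a minimal sketch of the Java half; the library and method names are placeholders rather than the project’s actual ones:

```java
// Minimal sketch of the JNI bridge idea. The library name and method names are
// placeholders; the native side would wrap EDK2's reference C decompression routines.
public class EfiDecompressorSketch {
    static {
        // Loads libefidecompress.so / efidecompress.dll from java.library.path.
        System.loadLibrary("efidecompress");
    }

    // Implemented in C; forwards the buffer to the EDK2 decompressor and returns the output.
    public static native byte[] nativeDecompress(byte[] compressed);

    public static byte[] decompress(byte[] compressed) {
        return nativeDecompress(compressed);
    }
}
```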

With the FS loader, UEFI drivers in option ROMs can be imported for analysis with Ghidra’s native PE32+ loader.

x86 option ROM binary loader

This loader allows x86 option ROMs to be imported for analysis. Various PCI structures are automatically defined, and the entry function is resolved by decoding the JMP instruction in the option ROM header.

(Screenshots: PCI option ROM header data type, PCI data structure data type, and the disassembled entry point.)
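
As a rough illustration of how the entry point can be resolved, here is a sketch that decodes the JMP stored at offset 3 of the option ROM header. It only handles the two common encodings (near and short JMP), and the names are made up for this example; the actual loader may differ:

```java
// Illustrative sketch of resolving the entry point of a legacy x86 option ROM by decoding
// the JMP at offset 3 of the ROM header. Only the two common JMP encodings are handled.
public class OptionRomEntrySketch {
    public static int resolveEntryPoint(byte[] rom) {
        if (rom.length < 6 || (rom[0] & 0xFF) != 0x55 || (rom[1] & 0xFF) != 0xAA) {
            return -1;                                    // not a PCI option ROM
        }
        int opcode = rom[3] & 0xFF;
        if (opcode == 0xE9) {                             // JMP rel16 (near jump)
            short rel16 = (short) ((rom[4] & 0xFF) | ((rom[5] & 0xFF) << 8));
            return 6 + rel16;                             // next instruction (offset 6) + displacement
        } else if (opcode == 0xEB) {                      // JMP rel8 (short jump)
            return 5 + rom[4];                            // rom[4] is a signed 8-bit displacement
        }
        return 3;                                         // unknown encoding: treat offset 3 as the entry
    }
}
```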

Plans for this week

I’ve started to work on filesystem loaders for FMAP/CBFS (used by coreboot firmware images). After that, I plan on working on additional FS loaders for Intel flash images (IFD parsing) and UEFI firmware volumes.

As usual, the source code is available in my GitHub repository. Installation and usage instructions are included in the README; feel free to open an issue report if anything goes awry.

[GSoC] Ghidra firmware utilities, weeks 1-2

Hi everyone. I’m Alex James (theracermaster on IRC) and I’m working on developing modules for Ghidra to assist with firmware reverse engineering as a part of GSoC 2019. Martin Roth and Raul Rangel are my mentors for this project; I would like to thank them for their support thus far.

Ghidra is an open-source software reverse engineering suite developed by the NSA, offering similar functionality to existing tools such as IDA Pro. My GSoC project aims to augment its functionality for firmware RE. This project will consist of three parts: a loader for PCI option ROMs, a loader for firmware images, and various scripts to assist with UEFI binary reverse engineering (importing common types, GUIDs, etc).

The source code for this project is available here.

Week 1

During my first week, I started implementing the filesystem loader for PCI option ROMs. This allows option ROMs (and their enclosed images) to be loaded into Ghidra for analysis. So far, option ROMs containing uncompressed UEFI binaries can be successfully loaded as PE32+ executables in Ghidra. The loader also calculates the entry point address for legacy x86 option ROMs.

Plans for this week

So far this week, I’ve worked on writing a simple JNI wrapper for the reference C implementation of the EFI decompressor from EDK2, and have used this to add support for compressed EFI images to the option ROM FS loader. Additionally, I plan on making further improvements to the option ROM loader for legacy option ROMs; while the entry point address is properly calculated, they still have to be manually imported as a raw binary.

GSoC 2010, TianoCore as a payload :(

It’s been an interesting summer.  It didn’t at all turn out how I expected, but it is what it is.  TianoCore as a software project turned out to be massively more complex than I anticipated when I submitted my proposal, and the level of knowledge required was quite a bit deeper than I expected… it’s one of those cases where I didn’t know what I didn’t know.  I’ll have to talk to a couple of my professors about that, to see if there’s some elective class that explains the things I’ve missed.

Sorry, that’s vague, let me give an example.  Coreboot does its thing, hardware initialization, then passes control to the payload.  This seems to be the equivalent of the dreaded “goto”, which is actually pretty cool.  Coreboot doesn’t care what happens next.  So hypothetically, I have some code, anything, I want to use it as a payload.  I compile it, then what?  Well, it depends (as you all know), how was it compiled?  Is it an ELF?  PE32?  Something else?  Where exactly is the entry point to this binary blob?  (That’s a rhetorical question, please don’t answer it in the comments.)  You would have thought at some point in one of my classes executable formats would have come up, just as an example.  Or calling conventions.  Or hundreds of other little things that I’d never seen or heard of before I suddenly realized that I needed to understand them.  So that’s what I ended up spending much of the summer on.  Write code… stop and realize what I’m doing doesn’t make sense/won’t work/is the wrong approach, then start over.

One of the things that drew me to coreboot as a project was that as a computer engineering student, I took a lot of classes focusing on the physical side of computing, starting with physics and circuits classes, moving up through logic gates to chip design.  On the other side, programming started at a pretty high level with C++, then worked down, till I got to the computer architecture and operating system classes, and assembly language (not x86, unfortunately).  I would expect that as a “computer engineer” I should understand the whole stack, that the physical, EE stuff and the CS stuff would meet in the middle.  But they haven’t (and they won’t: I’m about to graduate, and there aren’t any crucial classes left to take).  I knew this going into GSoC, and coreboot seemed like the perfect project to fill the gap (and give something back to the open source world that I’ve gotten so much out of).  Well, like I said, the gap turned out to be a lot bigger than I expected.  (To abuse the metaphor a little more, anyone remember when Evel Knievel tried to jump the Snake River Canyon?  That’s kind of how I feel about my summer of code.)

TianoCore payload, mid-summer status update

Well, it’s hard to believe that the GSoC midterm evaluations are here already.  I guess it’s true what they say, time flies when you’re sitting in a basement in front of a computer all day. If I were to evaluate myself, I’d give myself a barely passing grade based on results – I’m nowhere near where I expected to be this summer.  I think I mentioned before, partly this is due to TianoCore being massively more complicated than I expected when I wrote my project proposal – seriously, I ran cscope on the edk2 branch of TianoCore, and it reported over 160,000 files… the resulting index itself took up half a gig – and partly it’s due to there being so much about sophisticated C usage (and makefiles, and preprocessor directives, and macros, and calling conventions…) that I didn’t understand going into this.  (But, having said that, part of the reason I applied to coreboot was I knew there were a lot of important details that had been glossed over in my classes, things that I needed to know and would be forced to learn if I worked on a close-to-the-hardware project.)

Moving on to the status of my project. In my proposal I assumed a couple weeks to improve the state of TianoCore as a payload, a month or so to write a CBFS driver for TianoCore, and a month or so to write the VGA driver.  It turns out that the state of TianoCore as a payload was not very good, and so that is what I am working on.

Let me try to briefly explain my current approach.  UEFI itself does not initialize the hardware.  Before the UEFI firmware can be run, the system (from a cold boot) has to go through the Platform Initialization stage.  The PI stage is itself made up of the Security stage (the initial booting, and some optional checksums to make sure the image hasn’t been tampered with), the Pre-EFI Initialization (PEI, where the memory and chipsets are woken up and initialized) and the Driver Execution Environment (DXE, which loads additional drivers, then starts the UEFI).  Coreboot already does most of this work in its own way, so it seems the best strategy would be for a coreboot payload to impersonate one of these stages (each stage is its own binary in the firmware volume), provide all the functions and data structures that the following stage expects, then jump to it.  Inserting the payload immediately after the Security stage seems redundant and dangerous (the PEI stage would end up trying to reinitialize hardware that’s already in use), and after the DXE stage seems too late, because then the payload would have to know how to load DXE-stage binary drivers.  So I’m working on implementing a pseudo-PEI stage that translates the coreboot-provided data structures into the form that the DXE stage expects, and writing a couple dozen functions that the DXE stage expects to be available.  This way we can have a minimal-sized payload that can leverage a separate DXE and UEFI stage compiled directly from the TianoCore codebase (or borrowed from a manufacturer-supplied image, like a traditional option ROM).  I think I can have this working by the end of summer. Thoughts?

TianoCore payload status

I’ve been holding off on writing a post until I had some significant chunk of code I could point to, to show that I’m making progress.  Well, I am making progress, but there’s no “significant amount” of code yet.  I’m not worried about the slow start, because what I’m doing makes sense and I am writing code, but there are still so many things I run into every day where I have to stop and look something up to try to figure out why it was done a particular way.  Almost all of the confusion comes from the TianoCore codebase, which is 1) huge, and 2) written in a consistent but unfamiliar style.  Like this line,

 IN OUT   EFI_PEI_PPI_DESCRIPTOR      **PpiDescriptor OPTIONAL,

What’s with the IN, OUT and OPTIONAL?  I haven’t been able to find them defined anywhere; I’m assuming they’re special comments that Visual Studio knows about, but I haven’t found any reference to them. In comparison, reading the coreboot code is fun and easy.  It’s just straightforward C.

First successful Nvidia MCP6x/MCP7x SPI access

As of a few hours ago, my Nvidia MCP61/MCP65/MCP67/MCP73/MCP78S/MCP79 SPI driver is tested and working well. Only probing for a flash chip was tested, but still… this means my SPI bitbanging code is correct, Michael Karcher’s reverse-engineered docs are correct, and my implementation of the Nvidia GPIO interface used for bitbanging SPI is correct as well.

This is big news because, with this patch, flashrom finally has 100% support for all x86 chipsets we’ve seen in the last ten years.

Huge thanks go to Michael Karcher for reverse engineering the interface and writing up cleanroom documentation which I could use for implementing the interface.
Huge thanks to Johannes Sjolund for testing my patch on his hardware although it was completely untested before.

Get the patch here: http://patchwork.coreboot.org/patch/1520/ (click on the “patch” link on that page to get a download).