Update: coreboot conference in Europe, October 2015

UPDATE: Invitations have been published, the venue is decided, and a few bed+breakfast rooms at the venue are still available

TL;DR: coreboot conference Oct 9-11, more info at http://coreboot.org/Coreboot_conference_Bonn_2015

 

Dear coreboot developers, users and interested parties,

we are currently trying to organize a coreboot conference and developer meeting in October 2015 in Germany.

This is not intended to be a pure developer meeting; we also hope to reach out to manufacturers of processors, chipsets, mainboards and servers/laptops/tablets/desktops with an interest in coreboot and the possibilities it offers.

My plan (which is not final yet) is to have the Federal Office for Information Security (BSI) host the conference in Bonn, Germany. As the national cyber security authority, the BSI's goal is to promote IT security in Germany, and for this reason it has funded coreboot development in the past.

The preliminary plans are to coordinate the exact date of the conference to be before or after Embedded Linux Conference Europe, scheduled for October 5-7 in Dublin, Ireland. Planned duration is 3 days. This means we can either use the time window from Thursday Oct 1 to Sunday Oct 4, or from Thursday Oct 8 to Monday Oct 12. The former has the advantage of having cheaper hotel rooms available in Bonn, while the latter has the advantage of avoiding Oct 3, a national holiday in Germany (all shops closed). UPDATE: Preliminary dates are Friday Oct 9 to Sunday Oct 11. The doodle has been updated accordingly. Thursday and Monday could be filled with some cultural attractions if desired.

ATTENTION vendors/manufacturers: If your main interest is forging business relationships and/or strategic coordination and you want to skip the technical workshops and soldering, we'll try to make sure there is one outreach day of talks, presentations and discussions on a regular business day. Please indicate that with "(strategic)" next to your name in the doodle linked below.

If you wonder how to reach Bonn, there are three options by plane:
  • The closest is Cologne Airport (CGN), 30 minutes by bus to Bonn main station.
  • Next is Düsseldorf Airport (DUS), 1 hour by train to Bonn main station.
  • The airport with the most international destinations is Frankfurt Airport (FRA), 2.5 hours by train to Bonn main station.
Travelling by train is also an option: Bonn is reachable directly by high-speed train (ICE), and other high-speed train stations are reasonably close (30 minutes).

What I'm looking for right now is a rough show of hands of who'd like to attend, so I can book a conference venue. I'd also like feedback on which weekend would be preferable for you. If you have any questions, please feel free to ask me directly <c-d.hailfinger.devel.2006@gmx.net> or on our mailing list <coreboot@coreboot.org>.

Please enter your availability in the doodle below:
http://doodle.com/bw52xs4fc7pxte6d

Regards,
Carl-Daniel Hailfinger

Reverse engineering blobs: adding diff to the toolkit

Last time I talked about the benefits of using sed to transform repetitive low-level patterns into meaningful function calls. And still, doing all that regex magic did not get us a fully working replay. A great portion of the hardware initialization flow is based on situational awareness. What hardware is connected? What are our capabilities? What if …?

That means a simple sequence of writes is, in most cases, not sufficient. We may need to modify registers, wait on other hardware, or respond differently to hardware states. While that seems daunting and tedious, it gives us an unexpected advantage: every execution of the blob produces a different trace.

This is where diff comes in. By getting a bunch of traces and diffing them, we can see the points where the firmware takes different decisions, and the states which determine those decisions.  It won’t tell us what condition triggers path A or path B, but it allows us to infer that by comparing the hardware states. Let’s have a look:

@@ -305,28 +305,28 @@ void run_replay(void)
 radeon_read_sync(0x6430); /* 04040101 */
 radeon_write_sync(0x6430, 0x04000101);
 radeon_write_sync(0x3f50, 0x00000000);
- radeon_read(0x3f54); /* 000dda12 */
- inl(0x2004); /* 000dda8a */
- inl(0x2004); /* 000ddb1a */
- inl(0x2004); /* 000ddb9c */
- ...
- inl(0x2004); /* 000de3a0 */
- inl(0x2004); /* 000de41a */
+ radeon_read(0x3f54); /* 000d9efb */
+ inl(0x2004); /* 000d9f73 */
+ inl(0x2004); /* 000da003 */
+ inl(0x2004); /* 000da081 */
+ ...
+ inl(0x2004); /* 000da883 */
+ inl(0x2004); /* 000da8fd */
 radeon_read_sync(0x611c); /* 00000000 */
 radeon_write_sync(0x611c, 0x00000002);
 radeon_write_sync(0x6ccc, 0x00007fff);

I’ve chosen an example that shows similarities rather than differences, as I find this to be a more interesting case. Since we’ve already established that 0x2004 is our data port, as long as we don’t touch the index port, we’ll be reading from the same register, in this case 0x3f54.

Now, the values returned by this register are completely different in every trace, yet the behavior of the blob is strikingly similar every time. The first key observation is that this register increases monotonically in every trace. It also increases by roughly the same amount on successive reads. The differences between the last read and the first read of the register are also very close in the two traces: 0xa08 and 0xa02 respectively.

This register looks to be a monotonic timer, and the loop has all the elements of a delay loop. To determine the actual delay, we could try to extract absolute timing information when collecting the trace; however, in this specific case, I had the AtomBIOS tables handy. By comparing register accesses around this loop, I was able to figure out where in the tables this delay is occurring:

 0200: 0d250c1901 OR reg[190c] [...X] <- 01
 0205: 54300c19 CLEAR reg[190c] [.X..]
 0209: 5132 DELAY_MicroSec 32

The '32' in the delay is a hex number: 0x32 is 50 microseconds. The timer advanced by 0xa08 (2568) ticks over that delay, so we're waiting about 51 ticks per microsecond. Comparing more loops, we get between 50 and 52 ticks per microsecond. Since a delay loop normally waits until at least the minimum time has elapsed, we now have a very convincing case that register 0x3f54 implements a 50MHz monotonic timer. Every time before accessing this register, we also poke register 0x3f50. That looks very much like the timer control register.

We now extend our sed script with:

timerctl=0x3f50
timer=0x3f54
...
sed "s/radeon_read($timer);[^$hex\r]*\([$hex]*\)[^\r]*\(\r\tinl($dport);[^$hex\r]*\([$hex]*\)[^\r]*\)*/radeon_delay(0x\3 - 0x\1);/g" |
sed "s/radeon_write_sync($timerctl, 0x0\{1,8\});[^\r]*\r\tradeon_delay(/radeon_delay(/g"

Now when we rerun our logs through the script, the results decrease in size from 20K lines to 13K lines. The diffs between processed logs also decrease in size significantly. All the more proof we were right!

There's another way in which diff is excellent for our purpose. We can implement our helpers to generate the same output as the processed logs. That allows us to poke the replay from userspace, yet get the same output format. Now we can diff the replay against the original log and observe how the hardware state changes. We can even go as far as implementing our delay with usleep() instead of the timer at 0x3f54. When the diff is independent of the delay method we use, we have another strong proof that our assumptions are true. This is the case here.
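
For illustration, here is a minimal sketch of what such a pair of delay helpers could look like. This is my own reconstruction, assuming the 50MHz timer conclusion above and assuming the replay already provides radeon_read()/radeon_write_sync(); it is not the actual replay code.

#include <stdint.h>
#include <unistd.h>

#define RADEON_TIMER_CTL 0x3f50
#define RADEON_TIMER     0x3f54
#define TICKS_PER_US     50            /* the timer appears to run at 50MHz */

/* Existing replay helpers (prototypes assumed). */
extern uint32_t radeon_read(uint32_t reg);
extern void radeon_write_sync(uint32_t reg, uint32_t val);

/* Variant 1: replay the delay the way the blob does it, by polling the timer. */
static void radeon_delay_timer(uint32_t ticks)
{
	uint32_t start;

	radeon_write_sync(RADEON_TIMER_CTL, 0x00000000); /* poke seen before every delay */
	start = radeon_read(RADEON_TIMER);
	while (radeon_read(RADEON_TIMER) - start < ticks)
		;
}

/* Variant 2: the same delay from userspace, without touching the timer at all.
 * If the diffs stay the same with either variant, the timer theory holds. */
static void radeon_delay_usleep(uint32_t ticks)
{
	usleep((ticks + TICKS_PER_US - 1) / TICKS_PER_US);
}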

‘diff’ is an extremely powerful tool. Despite its name, it can show similarities just as well as differences. While regular expressions exhaust their usefulness with simple patterns, diff can take us a lot further. Now that we’ve cracked the delay implementation of the blob, we can more easily see delay and wait loops — again, using diff. Complex, multi-variable patterns are too awkward to handle with sed; I’ll go over those some other time. However, once such patterns are simplified to a function call, diff can once again tell the story. Different GPU model? diff. New display? diff. HDMI connected? diff. It’s almost as versatile as det cord.

cbfs_media [Week 1]

This post covers the complete setup and build of coreboot for the Cubieboard. Due to the lack of documentation for this, I had to spend some time figuring out the details, hence I decided to write it up myself to help others in the future.

Step 1: Build the payload

As mentioned here, the first thing we have to do is build a payload to use later in coreboot.rom. A suitable ARM payload is sunxi/u-boot. Now, there are two ways to build u-boot: natively, or cross-compiled from another system. To cross-compile from another system, we need to get a suitable toolchain. For that you need to do:

apt-get install gcc-arm-linux-gnueabihf

I ran into too many issues getting this toolchain set up right 😐  A more suitable and convenient method is to build u-boot natively on the Cubieboard itself (thanks #cubieboard for the tip :P). For this, follow the instructions here. tl;dr: Clone the repository, choose your target board, make (without CROSS_COMPILE). This completes building the payload. NOTE: The correct file to use as the payload is “u-boot”, not u-boot.bin. The “u-boot” file is the non-SPL part of u-boot in ELF format. The log of a successful build can be seen here.

file u-boot
u-boot: ELF 32-bit LSB shared object, ARM, version 1 (SYSV), dynamically linked (uses shared libs), not stripped

Step 2: Build coreboot

Before following the instructions in the coreboot Build_HOWTO, you first need the latest development code, which contains the MMC driver needed to load the romstage, etc.

You need to make crossgcc first. This might take a lot of time: be patient 😛 Some missing-toolchain errors can arise. Get past them with:

apt-get install bison flex patch
add-apt-repository ppa:linaro-maintainers/toolchain

Once this is done, set a suitable configuration in make menuconfig. Make sure to disable CONFIG_VGA_ROM_RUN (set by default), since it doesn't work for ARM boards. Just make and wait. Your coreboot.rom is ready. 🙂

This image needs to be placed on the SD card:

dd if=build/BOOT0 of=/path/to/sdcard/blockdev bs=1024 seek=8

Now, pfff! This is enough to get coreboot up and running! 😀

For next week, our plan is to identify the locations of the map() and read() calls, and to determine the size of each map(). The driver is currently configured to pull the entire CBFS into RAM, so we will work on reducing the size of these mappings.
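
For context, the cbfs_media interface we are implementing is, from memory, roughly the following set of callbacks (paraphrased from coreboot's cbfs_core.h of that time; consult the tree for the authoritative definition):

#include <stddef.h>

struct cbfs_media {
	void *context;
	int (*open)(struct cbfs_media *media);
	/* map() returns a pointer to 'count' bytes at 'offset' in the CBFS;
	 * on sunxi this is currently backed by reading everything into RAM. */
	void *(*map)(struct cbfs_media *media, size_t offset, size_t count);
	void *(*unmap)(struct cbfs_media *media, const void *address);
	size_t (*read)(struct cbfs_media *media, void *dest,
		       size_t offset, size_t count);
	int (*close)(struct cbfs_media *media);
};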

GSoC [early debugging] The very short introduction

Oh dear, what did I get into again? My GSoC 2014 project page gives you an idea of things to expect during this summer of code, and it seems I have promised to deliver a lot this time: more or less a complete in-circuit debugging solution for x86 boot firmware over readily available, low-cost USB hardware. I only scratched the surface with my GSoC 2013 project, when I did not get much further than a working usbdebug and some intense clean-up and preparation on the CBMEM side.

There has been serious use of usbdebug combined with SerialICE to troubleshoot and/or reverse-engineer proprietary firmware. Tests have been done connecting the GDB stub built into coreboot over a BeagleBone even before the debug target has initialized RAM. Other pieces of my project plan have also seen proofs-of-concept, but their quality or flexibility has not reached the level required for widespread use in the coreboot community.

We would see more practical uses for SerialICE if we could connect QEMU, GDB, SerialICE and radare together and visualize some of the system bus topologies at runtime. With the amount of support CPU and chipset vendors have shown towards open-source firmware development in recent years, I consider this a key part of any further community-driven mainboard ports for coreboot.

GSoC (coreboot): Test interface board complete

Apologies for the late update. The design that I posted in the last post was more challenging than I had thought. However, I'm happy to announce that my test hardware, which I call the 'coreboot test interface board' (TIB), is now complete. Only some of the software interface work remains in the project. So let me share with you a very quick update on the last month.

A brief progress sheet

Last week, I laid out a list of things to do in order to get more of the protocol finalized. Most of the items are crossed off; however, one little item remains standing. This item, while innocent and seemingly harmless, is more painful than falling on your buttocks from a 10-story building onto solid granite. Let's have a look at why this small item is of such significance.

  • new API call set_chip_size()

When people think of QiProg, they think of one gadget with one flash chip connected. This is the common case and, for the foreseeable future, will be the de-facto way of using QiProg. However, the original QiProg USB protocol specification was intended for a broader use case: a programmer with several, individually addressable chips connected. Anyone who observes the qiprog_read_chip_id() call will notice that it translates to a READ_CHIP_ID request over USB. This request will return identification data for up to nine chips. Aye, there's the rub.

How does this play into set_chip_size()? Simple: set_chip_size() for which chip? Do we send a flat list of nine uint32_t sizes, thus only needing one round-trip (control request) for all chips? Do we use the wIndex field of the round-trip, at the cost of needing one such trip for each chip? Once this question is answered, it will determine the answer for the set_[erase/write]_[size/command] calls and their respective USB round-trips, thus completing the USB protocol and bringing QiProg to a usable state.
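
To make the trade-off more concrete, here is a rough host-side sketch of the two candidate wire formats. Everything below is illustrative only: the request code, the constant names and the use of libusb are my assumptions for this sketch, not part of the current QiProg specification.

#include <stdint.h>
#include <libusb.h>

#define QIPROG_SET_CHIP_SIZE	0x42	/* hypothetical bRequest value */
#define MAX_CHIPS		9

/* Option A: one control round-trip carrying a flat list of sizes for all chips. */
static int set_chip_sizes_flat(libusb_device_handle *dev, uint32_t sizes[MAX_CHIPS])
{
	return libusb_control_transfer(dev,
			LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_ENDPOINT_OUT,
			QIPROG_SET_CHIP_SIZE, 0, 0,
			(unsigned char *)sizes, MAX_CHIPS * sizeof(uint32_t),
			1000);
}

/* Option B: one round-trip per chip, with wIndex selecting the chip. */
static int set_chip_size_one(libusb_device_handle *dev, uint16_t chip, uint32_t size)
{
	return libusb_control_transfer(dev,
			LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_ENDPOINT_OUT,
			QIPROG_SET_CHIP_SIZE, 0, chip,
			(unsigned char *)&size, sizeof(size),
			1000);
}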

It’s easy to see why this one little detail is a blocker for all other remaining issues. I am leaning towards the use of wIndex (not the glass cleaner). Implementing a new control request in software and firmware is a matter of minutes. Testing it, and making sure it works properly is, at most, a two hour endeavor. Getting the design right: priceless.

Experiments of mind

The time for writing code is over. The time to design hardware is over. After seven weeks, the beginning has come to an abrupt end. I am severely behind schedule. In week seven I was supposed to implement erase functionality — tell the programmer how to erase the chip. This is not done. On the other hand, I have had the code for weeks 8 and 9 almost ready, and just merged most of it last week. So, where am I? Am I ahead of or behind schedule?

The fallacy of preemption

One of the requirements for applying as a GSoC coreboot student was to have a fully established schedule from day[-1]. Establishing this schedule was a great experience, and it allowed me to think in depth about the problem and possible solutions — to a certain degree. I picked the steps I considered logical, in the order I saw as logical. Development is never about writing code in the order in which it will be executed. In this particular case, it was much easier to implement bulk writing without a predefined erase/write strategy, opting instead for a default just-do-it approach.

Why is this approach better than following the schedule, from a development point of view? We have had bulk read partially working for a while now. From the host's point of view, reading and writing are symmetrical operations. The bulk of the code (pun definitely intended) is shared between the read and write operations. They both juggle data on the same endpoint. The only difference is the endpoint direction bit. It therefore made sense, once bulk reading was fixed for corner cases, to use the same code to send data to the programmer. Making the programmer write that data was a matter of a couple of hours. There was no sensible reason to wait an additional two weeks before implementing this last bit.
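
As a rough illustration of how much the host side can share (the endpoint number, helper name and use of libusb are made up for this sketch, not the actual QiProg/VultureProg values):

#include <libusb.h>

#define QIPROG_BULK_EP	0x01	/* hypothetical endpoint number */

/* One helper moves data in either direction; only the direction bit differs. */
static int bulk_xfer(libusb_device_handle *dev, int to_host,
		     unsigned char *buf, int len, int *transferred)
{
	unsigned char ep = QIPROG_BULK_EP |
			   (to_host ? LIBUSB_ENDPOINT_IN : LIBUSB_ENDPOINT_OUT);

	return libusb_bulk_transfer(dev, ep, buf, len, transferred, 1000);
}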

Software development is as much about making things work as it is about the application of programming principles with unquestionable moral authority and correctness. In this case, implementing a trivial extension reusing code fresh in my mind was the preferred approach. Not only did it save me time by not having to re-examine the situation a few weeks from now, it also allows me to have a working program/verify scenario when implementing the erase strategies. As one might imagine, this makes the problem a lot easier. Attempting to preempt and enforce a schedule before the problem is thoroughly explored occasionally conflicts with best practices of development. With this in mind, I am neither behind nor ahead of schedule. I am precisely where I need to be.

A matter of experimentation

Most of the infrastructure and code is already in place. Bringing QiProg to completion is no longer an issue of adding functionality through code, but rather of completing functionality by connecting the existing code. One issue I discovered after testing the bulk program code was a terrible race condition between read prefetching and the write loop. The prefetch logic incremented the internal address before data arrived. As a result, the new data would get written to the wrong address. Choosing the best solution to the problem is a matter of experimentation.
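
A simplified model of the race, to make it concrete (this is a caricature, not the actual VultureProg firmware; program_bytes() is a placeholder):

#include <stdint.h>

extern void program_bytes(uint32_t addr, const uint8_t *data, uint32_t len);

static uint32_t curr_addr;	/* single internal address shared by reads and writes */

static void prefetch_next_read(void)
{
	/* kick off the next read at curr_addr ... */
	curr_addr += 64;	/* ... and bump the pointer before any data has arrived */
}

static void handle_incoming_write(const uint8_t *data, uint32_t len)
{
	/* By now prefetching has already moved curr_addr past where this
	 * data was meant to go, so it lands at the wrong address. */
	program_bytes(curr_addr, data, len);
	curr_addr += len;
}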

The “this won’t work because of that” and “what if this” turned into a series of exhausting thought experiments. I have been bugging Peter a lot in the past few days about a series of potential issues. We eventually agreed that the best way to proceed was to abstract a lot more through the API. This is a non-exhaustive list of the decisions we’ve made in the past week (a rough sketch of how they might look as code follows the list):

  • set_address() is hidden from the API
  • the internal address range is not exhausted once read or written
  • read and write operations must not be interdependent, the internal read and write pointers will be distinct (as a side effect, this change also eliminates the race condition depicted above)
  • set_address() + readn() turns into read(dev, where, n)
  • All API addresses begin at 0. The programmer translates that into an absolute address
  • new API call set_chip_size()
  • new API call to explicitly erase blocks or sectors (to be defined)
  • implicit erase on write can be enabled or disabled (to be defined)
  • implicit erase will erase the sector/block right before the first byte of the sector is written
  • exposing any USB specific dependencies in the API is strictly forbidden
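
A rough sketch of how some of these decisions might translate into prototypes (my own paraphrase of the list above, not the final QiProg API; the erase-related calls are placeholders for things still to be defined):

#include <stdint.h>

struct qiprog_device;	/* opaque; stands in for the real device handle */

/* Addresses start at 0; the programmer translates them to absolute chip
 * addresses. Reads and writes keep independent internal pointers, so they
 * can be interleaved freely. set_address() no longer exists in the API. */
int qiprog_read(struct qiprog_device *dev, uint32_t where, void *dest, uint32_t n);
int qiprog_write(struct qiprog_device *dev, uint32_t where, const void *src, uint32_t n);

/* New call discussed above. */
int qiprog_set_chip_size(struct qiprog_device *dev, uint8_t chip, uint32_t size);

/* Still to be defined: explicit erase and control over implicit erase-on-write. */
int qiprog_erase(struct qiprog_device *dev, uint32_t addr, uint32_t len);
int qiprog_set_implicit_erase(struct qiprog_device *dev, int enable);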

My focus for the remainder of this week will be to shorten this list as much as possible. Once the dependency between read and write is unshackled, I will be able to erase/program/verify my faithful SST 49LF080A. From here, it will be a matter of finalizing and implementing the last obscure bits of the specification.

The state of QiProg for flashrom

As QiProg is still being finalized, implementing it as a flashrom programmer is still a long way off. I do estimate that weeks 11 and 12 will provide ample time to integrate everything into flashrom, hopefully in time for the 0.9.8 release.

VultureProg command center

[Image: Upgraded VultureProg command center]

It was time to upgrade. I got a new desk last week. For me, a piece of furniture is as boring as watching stainless steel rust (it does rust eventually). A desk, on the other hand, is anathema to replace. I have 20+ wires connected to my workstation going in all sorts of places. I like to keep these wires carefully routed and out of sight, a task made all the more difficult by the lack of any wire management gizmos on new desks. Consequently, after I get a new desk, you are well within your rights to imagine me grabbing my drilling machine, a set of saw cutter bits and an assortment of hole covers — and you need not wait long to see it. Needless to say, getting back up and running has taken a few days, but we are back with a shiny new VultureProg command center (pictured above for your viewing convenience).

The 80-20 rule

In 2011 I worked on developing Android apps. My boss told me: “you will spend 20% of the time writing 80% of the code, and you will spend the remaining 80% of the time writing the remaining 20% of the code”. Sadly, QiProg has proven to be no exception to this rule. Although I have consistently fixed issues and improved the layout of the code, the number of lines of code has more or less stagnated after the first three weeks of exponential growth. I know how to erase the chip, I know how to program the chip, I know how to read the chip, and I have code to do all that. I just have not yet had the chance to hook it up to the ecosystem.

I am currently working on improving the bulk reading code. There are a few corner cases which are not well handled. As expected, it is taking a lot of time, and I even had to write a special testcase before I started.

Did you want fries with that?

Although we know how to handle every single aspect of the identify/read/erase/write/verify cycle, finding a way of connecting everything together in a simple, elegant, and efficient manner is a different story altogether. We have three API calls for handling erase and write, namely qiprog_set_erase_size, qiprog_set_erase_command, and qiprog_set_write_command. Peter wants everything to work without requiring an explicit erase. In his view, the VultureProg firmware should automatically erase a sector that is being written. Although this would simplify dumb cases where the whole chip or large sections of a chip are written at once, it spells disaster for flashrom’s aggressive optimization. What happens if the first part of a block matches the new contents, and flashrom decides to write the other part without needing an erase? VultureProg would erase the entire block, then write the second part to a block we never meant to erase. Questions such as this one need to be carefully thought over and elegantly answered. If I blindly start putting all the code together right now, I’ll most certainly have to fix it later.
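
To make the failure mode concrete, here is a hedged sketch of the sequence that worries me (the block size, addresses and the write call are invented for the example):

/* Hypothetical 4 KiB block at 0x1000. flashrom has verified that the first
 * 2 KiB already match the new image, so it only writes the second half: */
qiprog_write(dev, 0x1800, new_data, 0x800);
/* With implicit erase-on-write, the firmware would first erase the whole
 * block at 0x1000, wiping the 2 KiB that flashrom deliberately skipped. */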

Postal patronage

I received a very interesting package last week. I had a deal with Idwer to rid him of a few LPC and FWH flash chips. I was able to take them off his hands for just a few euros, a deal made sweeter by the fact that those chips are no longer sold anywhere. I also sent Peter a VultureProg board. Since he already has a Stellaris or two, I have just recruited an eager tester.

Can you send me a board?

Short answer: no. Just grab the gerbers and take them to SeeedStudio. They will be able to get boards into your hands for much less than what I pay to ship them. The two SMD capacitors are not hard to assemble. If you still want me to send you an assembled PCB, I will charge $50 (international) or $30 (US). I don’t have the time to sit and assemble PCBs by hand; plus, it would be very un-geekish of you not to assemble your own.

VultureProg: Equipped for galactic travel

It’s here! And it’s ready for takeoff. The VultureProg PCBs have finally arrived, and it is time to turn VultureProg from a proof-of-concept toy into a serious galactic tool. My major concern was that I could have misrouted one or two connections. The LAD pins are particularly sensitive, as they need to be mapped to sequential GPIO pins, starting from GPIO0; otherwise we need to do bit shifts in every LPC cycle, killing any hope of decent performance. I was also worried that the particularly tight tolerances could be problematic during manufacture. Everything works as expected. Enough words, let’s see the porn.

[Image: The bare PCBs]
[Image: Brings back memories, doesn’t it?]
[Image: The world’s first fully assembled VultureProg board]

From bitstream to reality

I found it interesting to look at the transformation from spaghetti on the screen to something real, something tangible.

[Image: From conception to birth: the complete transformation]

No thorough and thoughtful post today. I have a new toy.

 

 

QiProg: Expanding the flight range


Today is disappointing. I was expecting to have gotten the first batch of VultureProg PCBs by now. Having arrived in the US on Tuesday, I did not expect the package to be hovering around New York for most of this week. The need for eating the spaghetti and eliminating the hanging wires is growing more and more urgent with each speed increase. The long wires and insane inductance are already getting in the way of the signals, the lack of physical portability of the test setup is annoying, and the pain of accidentally knocking out a few wires is unbearable. Where are my PCBs?!!!

High speed tests

The Stellaris is getting faster, much faster than initially predicted. From the pathetic 23 KiB/s read speed during the first tests, it now comfortably does over 450 KiB/s. The emulated LPC bus goes so fast that I need to lower the CPU’s core clock to a measly 16 MHz just to be able to capture all the details of the waveforms.

Too much impedance

24MHz logic analyzer + Nyquist theorem = 12+MHz LPC clock

We’re pushing a 12 MHz signal through 20+ centimeters of hanging wires, routing them through a solderless breadboard, then a socket, with the added load of probe wires. I was getting random errors or bad data, yet as soon as I disconnected the probe wires, everything magically worked. It must have had something to do with signal rise and fall times. On the Stellaris, this was relatively easy to cheat and fix by increasing the drive strength.

Where’s my bandwidth?

450 KiB/s * 1024 * 17 clocks/byte = 8 MHz LPC clock

Something does not add up. We know we’re running at over 12 MHz because we cannot sample the signal well, yet the throughput is much lower. The USB on the Stellaris is easily capable of 1 MiB/s. To quote Seconds from Disaster, “when investigators looked at […], what they found shocked them”:

[Image: vultureprog_idle_bus]

The LPC bus is idle 30% of the time. We’re driving the bus so fast that loop overhead is not only noticeable, but significant. Killing this overhead has the potential to bring the speed to around 600 KiB/s (450 KiB/s ÷ 0.7 ≈ 640 KiB/s).

Insect season

It’s a hot summer in Houston, with air conditioning running around the clock. Stepping into the hot, wet weather outside results in an instant cascade of sweat. Insects are crawling out of every nook and crevice, fire ants are spawning from the underground in huge mounds, and mosquitoes are raising their own deadly army of high-pitched buzzers. The QiProg and VultureProg trees are no different.

I’ve made the executive decision to fix bugs as soon as they get in the way. I intentionally avoid the use of the term “found”. I can find my own damn bugs; it’s fixing them that is the problem. I prefer to have all the pieces in place before I polish and shine them.

A ripe testing ground

Out of curiosity, I wanted to see if VultureProg would work on Install’n’Pray operating systems (InP). InP systems are known for their excellent ability to expose even the smallest, most innocent problems, and to expose problems where there are none. If VultureProg can work in an InP environment, it will most certainly be stable in unix-like environments. Although it did not work at first, testing on InP has led to a number of fixes.


Where next?

We have bulk reading completed, which was initially scheduled for week 6. Weeks 7 and 8 look scary, with a lot of goodies, including program and erase functionality. I’ve decided to peek early into how to program and erase LPC chips. I found flashrom’s jedec.c to be of great help: QiProg knows how to byte-program and erase JEDEC-compliant chips. A lot is still scattered in topic branches here and there, waiting to be merged. Somebody please send me some coffee.
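
For reference, the classic JEDEC byte-program sequence that jedec.c revolves around looks roughly like this (chip_write()/chip_read() are placeholders for whatever bus accessors the programmer provides; command addresses vary between chips):

#include <stdint.h>

extern void chip_write(uint32_t addr, uint8_t val);
extern uint8_t chip_read(uint32_t addr);

/* JEDEC byte-program: unlock sequence, program command, data byte, then
 * poll DQ7, which reads back inverted until the write completes. */
static void jedec_program_byte(uint32_t addr, uint8_t data)
{
	chip_write(0x5555, 0xaa);
	chip_write(0x2aaa, 0x55);
	chip_write(0x5555, 0xa0);
	chip_write(addr, data);

	while ((chip_read(addr) & 0x80) != (data & 0x80))
		;	/* data# polling, simplified: no timeout handling */
}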