[GSoC] Address Sanitizer, Wrap-up

Hello everyone. The coding period for GSoC 2020 is now officially over and it’s time for the final evaluation. I’ll use this blog post to summarize the project, walk through how to use ASan, and discuss some ideas on what can be done further to enhance this feature.

You can find the complete list of commits I made during GSoC with this Gerrit query.


Project details

Memory safety is hard to achieve. We, as humans, are bound to make mistakes in our code. While it may be straightforward to detect memory corruption bugs in a few lines of code, it becomes quite challenging to find those bugs in a massive codebase. In such cases, ‘Address Sanitizer’ may prove to be useful and could help save time.

Address Sanitizer, also known as ASan, is a runtime memory debugger designed to find out-of-bounds accesses and use-after-scope bugs. Over the past few months, I’ve been working to add support for ASan to coreboot. You can read my previous blog posts (Part 1, Part 2, and Part 3) to see my progress throughout the summer.

Here is a description of the components included in the project:

GCC patch

The design of ASan in coreboot is based on its implementation in Linux kernel, also known as Kernel Address Sanitizer (KASAN). However, we can’t directly port the code from Linux.

Unlike the Linux kernel, which has a static shadow region layout, coreboot has multiple stages and thus requires a different shadow offset address for each stage. Unfortunately, GCC currently only supports adding a static shadow offset at compile time using the -fasan-shadow-offset flag. Therefore, the foremost task was to add support for a dynamic shadow offset to GCC.

We enabled GCC to determine the shadow offset address at runtime using a callback function named __asan_shadow_offset. This removes the need to specify the address at compile time. GCC then makes use of this shadow offset in its internal mem_to_shadow translation function to poison the redzones of stack variables.
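To illustrate the idea, here is a minimal sketch of what such a callback boils down to for romstage. The exact prototype and semantics are defined by the GCC patch and coreboot’s ASan library; the linker symbols follow the ones used later in this post. The point is that GCC’s usual translation, shadow = (addr >> 3) + offset, then lands inside the asan_shadow section:

#include <stdint.h>

#define ASAN_SHADOW_SCALE_SHIFT 3	/* 1 shadow byte per 8 bytes of memory */

/* Linker-provided symbols: the shadow section and the start of the
 * memory region it covers (names as used later in this post). */
extern uint8_t _asan_shadow[], _car_region_start[];

/* Sketch only: return the per-stage shadow offset so that
 * (addr >> 3) + offset maps _car_region_start onto _asan_shadow. */
uintptr_t __asan_shadow_offset(void)
{
	return (uintptr_t)_asan_shadow -
	       ((uintptr_t)_car_region_start >> ASAN_SHADOW_SCALE_SHIFT);
}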

The patch further allowed us to place the shadow region in a separate linker section. This ensured that if a platform didn’t have enough memory to hold the shadow buffer, the build would fail.

The patch was introduced to GCC’s code base in such a way that if one compiles code with the new switch enabled, i.e. --param asan-use-shadow-offset-callback=1, but has not applied the patch to GCC, the compiler throws the following error, since the newly introduced switch is unknown to an out-of-the-box GCC: invalid --param name 'asan-use-shadow-offset-callback'.

I believe this patch might also be useful to developers who contribute to other open-source projects. Hence, I’ve submitted it to GCC’s mailing list and asked GCC’s developers to include this feature in their upcoming release.

ASan in ramstage

Since ramstage uses DRAM, it should always have enough room in memory to hold the shadow buffer, regardless of the platform. Therefore, I began by adding support for ASan in ramstage on the x86 architecture.

To reserve space in memory for the shadow region, I created a separate linker section named asan_shadow. Instead of allocating shadow memory for the whole memory region, which includes drivers and hardware-mapped addresses, I only defined the shadow region for the data and heap sections.

Then I started porting KASAN library functions, tweaking them to make them suitable for coreboot.

The next task was to initialize the shadow memory at runtime. I created a function called asan_init which unpoisons the shadow memory, i.e. sets the shadow bytes corresponding to the addresses in the data and heap sections to zero.

In the case of global variables, instead of poisoning the redzones directly, the compiler inserts constructors invoking the library function named __asan_register_globals to populate the relevant shadow memory regions. So, I wrote a function named asan_ctors which calls these constructors at runtime and added a call to it from asan_init().
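As a rough sketch (helper and linker symbol names follow the ones used elsewhere in this post; the real code differs in detail), the ramstage initialization boils down to:

#include <stdint.h>
#include <string.h>

/* Linker-provided ramstage boundaries (names as used in the linker
 * snippets later in this post). */
extern uint8_t _data[], _eheap[];

void *asan_mem_to_shadow(const void *addr);	/* shown later in this post */
void asan_ctors(void);				/* runs compiler-emitted constructors */

/* Sketch: unpoison the shadow bytes covering data and heap, then let the
 * constructors poison the globals' redzones via __asan_register_globals. */
void asan_init(void)
{
	size_t shadow_size = ((uintptr_t)_eheap - (uintptr_t)_data) >> 3;

	memset(asan_mem_to_shadow(_data), 0x00, shadow_size);
	asan_ctors();
}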

After doing some tests, I realized that the compiler’s ASan instrumentation cannot insert __asan_load or __asan_store checks in memory functions like memset, memmove, and memcpy, as they are written in assembly. So, I added manual checks using the library function named check_memory_region for both the source and destination pointers.

ASan in romstage

Once I had ASan in ramstage working as expected, I started adding support for ASan to romstage.

It was challenging for two reasons. First, even within the same architecture, the size of the L1 cache varies across platforms, from 32KB on Braswell to 80KB on Ice Lake, and thus we can’t enable ASan in romstage for all platforms by testing on a handful of devices. Second, the cache is very small compared to RAM, making it difficult to fit the asan_shadow section in the limited memory.

Thankfully, the latter issue was solved to a large extent by our GCC patch, which allowed us to append the asan_shadow section to the region already occupied by the coreboot program and make efficient use of the limited memory.

To resolve the first issue, I introduced a Kconfig option called HAVE_ASAN_IN_ROMSTAGE to denote whether a particular platform supports ASan in romstage. This allowed us to enable ASan in romstage only for the platforms that have been tested.

Based on the hardware available to me and my mentor, I enabled ASan in romstage for the Haswell and Apollo Lake platforms, in addition to QEMU.


Project usage

Instructions on how to use ASan are included in the ASan documentation. I’ll restate them here with an example.

Suppose there is a stack-out-of-bounds error in cbfs.c that we aren’t aware of. Let’s see if ASan can help us detect it.

int cbfs_boot_region_device(struct region_device *rdev)
{
	int stack_array[5], i;
	boot_device_init();

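	/* Bug: stack_array has only 5 elements, yet the loop below writes to indices 10 down to 1 */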
	for (i = 10; i > 0; i--)
		stack_array[i] = i;

	return vboot_locate_cbfs(rdev) &&
	       fmap_locate_area_as_rdev("COREBOOT", rdev);
}

First, we have to enable ASan from the configuration menu: just select ‘Address sanitizer support’ from the ‘General setup’ menu. Now, build coreboot and run the image.

ASan will report the following error in the console log:

ASan: stack-out-of-bounds in 0x7f7432fd
Write of 4 bytes at addr 0x7f7c2ac8

Here 0x7f7432fd is the address of the last good instruction before the bad access. In coreboot, stages are relocated, so we have to normalize this address to find the instruction that causes the error.

To do this, let’s subtract the start address of the stage, i.e. 0x7f72c000. The difference we get is 0x000172fd. As per our console log, this error happened in ramstage, so let’s look at the section headers of ramstage from ramstage.debug.

 $ objdump -h build/cbfs/fallback/ramstage.debug

build/cbfs/fallback/ramstage.debug:     file format elf32-i386

Sections:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         00070b20  00e00000  00e00000  00001000  2**12
                  CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
  1 .ctors        0000036c  00e70b20  00e70b20  00071b20  2**2
                  CONTENTS, ALLOC, LOAD, RELOC, DATA
  2 .data         0001c8f4  00e70e8c  00e70e8c  00071e8c  2**2
                  CONTENTS, ALLOC, LOAD, RELOC, DATA
  3 .bss          00012940  00e8d780  00e8d780  0008e780  2**7
                  ALLOC
  4 .heap         00004000  00ea00c0  00ea00c0  0008e780  2**0
                  ALLOC

Here the link address (VMA) of the text segment is 0x00e00000. Let’s add this offset to the difference we calculated earlier. The resulting address is 0x00e172fd.
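Putting both steps together:

   0x7f7432fd   (address reported by ASan)
 - 0x7f72c000   (start address of the relocated ramstage)
 + 0x00e00000   (VMA of .text in ramstage.debug)
 ------------
   0x00e172fd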

Next, we read the contents of the symbol table and search for a function having an address closest to 0x00e172fd.

 $ nm -n build/cbfs/fallback/ramstage.debug
........
........
00e17116 t _GLOBAL__sub_I_65535_1_gfx_get_init_done
00e17129 t tohex16
00e171db T cbfs_load_and_decompress
00e1729b T cbfs_boot_region_device
00e17387 T cbfs_boot_locate
00e1740d T cbfs_boot_map_with_leak
00e174ef T cbfs_boot_map_optionrom
........

The symbol with the closest preceding address is cbfs_boot_region_device at 0x00e1729b. This is the function in which our memory bug is present.

Now that we know the affected function, we read the assembly contents of cbfs_boot_region_device, which is present in cbfs.o, to find the faulty instruction.

 $ objdump -d build/ramstage/lib/cbfs.o
........
........
  51:   e8 fc ff ff ff          call   52 <cbfs_boot_region_device+0x52>
  56:   83 ec 0c                sub    $0xc,%esp
  59:   57                      push   %edi
  5a:   83 ef 04                sub    $0x4,%edi
  5d:   e8 fc ff ff ff          call   5e <cbfs_boot_region_device+0x5e>
  62:   83 c4 10                add    $0x10,%esp
  65:   89 5f 04                mov    %ebx,0x4(%edi)
  68:   4b                      dec    %ebx
  69:   75 eb                   jne    56 <cbfs_boot_region_device+0x56>
........

Let’s look for the last good instruction before the error happens. It is the one at offset 62 (0x00e172fd - 0x00e1729b = 0x62).

The instruction is add $0x10,%esp and it corresponds to for (i = 10; i > 0; i--) in our code. This means the very next instruction, mov %ebx,0x4(%edi), is the one that causes the error. Now, if you look at the C code of cbfs_boot_region_device() again, you’ll find that this instruction corresponds to stack_array[i] = i.

Voilà! We just caught the memory bug using ASan.


Future work

While my work for GSoC 2020 is complete, I think the following extensions would be useful for this project:

Heap buffer overflow

Presently, ASan doesn’t detect out-of-bounds accesses for objects allocated on the heap. Fortunately, support for these types of memory bugs can be added easily.

We just have to make sure that whenever a block of memory is allocated on the heap, the surrounding areas (redzones) are poisoned. Correspondingly, these redzones should be unpoisoned when the memory block is deallocated.
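A minimal sketch of the allocation side of this idea follows. The wrapper and the asan_poison_shadow()/asan_unpoison_shadow() helpers are hypothetical names used only for illustration, not coreboot’s actual allocator API:

#include <stddef.h>
#include <stdlib.h>

#define REDZONE 32	/* bytes of padding on each side of an allocation */

/* Hypothetical shadow-marking helpers. */
void asan_poison_shadow(const void *addr, size_t size);
void asan_unpoison_shadow(const void *addr, size_t size);

/* Hand out a block surrounded by poisoned redzones so that neighbouring
 * out-of-bounds accesses hit poisoned shadow and get reported. */
void *asan_malloc(size_t size)
{
	size_t aligned = (size + 7) & ~(size_t)7;	/* 8-byte shadow granule */
	char *p = malloc(REDZONE + aligned + REDZONE);

	if (p == NULL)
		return NULL;

	asan_poison_shadow(p, REDZONE);				/* left redzone */
	asan_unpoison_shadow(p + REDZONE, size);		/* the object itself */
	asan_poison_shadow(p + REDZONE + aligned, REDZONE);	/* right redzone */

	return p + REDZONE;
}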

Post-processing script

Unlike Linux, coreboot doesn’t have the %pS printk format specifier to resolve a pointer to its symbolic name. Therefore, we normalize the pointer address manually, as shown above, to determine the name of the affected function and then use it to find the instruction that causes the error.

A custom script can be written to automate this process.

Support for other platforms and architectures

The Jenkins builder built successfully for all x86 boards except the ones based on either the Braswell SoC or the i440BX northbridge, where the cache area got full and thus couldn’t fit the asan_shadow section. This shows that support for ASan in romstage can easily be added to most x86 platforms. We just have to test them by selecting the HAVE_ASAN_IN_ROMSTAGE option and resolving any compilation errors.

Enabling ASan in ramstage on other architectures like ARM or RISC-V should be easy too. We just have to make sure the shadow memory is initialized as early as possible when ramstage is loaded. This can be done by calling asan_init() at the appropriate place, as sketched below.
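For instance, the call could sit right at the top of the stage’s C entry point. This is only a hypothetical sketch; the function and config names mirror the ones used in this project, and the actual entry point differs per architecture:

/* Sketch: initialize the shadow region before any instrumented code runs. */
void main(void)
{
	if (CONFIG(ASAN_IN_RAMSTAGE))
		asan_init();

	/* ... rest of the ramstage flow ... */
}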

Similarly, ASan in romstage can be enabled for other architectures. I have mentioned some key points in the ASan documentation which could be useful to anyone interested in doing so.

For the platforms that don’t have enough space in the cache to hold the asan_shadow section, we have to come up with a new translation function that uses a more compact shadow memory. Since stack buffers are protected by the compiler, we’ll also have to create another GCC patch forcing it to use the new translation function for these platforms.


Acknowledgement

I’d like to thank my mentor Werner Zeh for his continued assistance during the past 13 weeks. This project certainly wouldn’t have been possible without his valuable suggestions and the knowledge he shared. I’d also like to thank Patrick Georgi for helping me with the work authorization initially and later supervising my work during the time when Werner was on vacation.

Further, I am grateful to every member of the community for assisting me whenever I got stuck, reviewing my code, reading my blogs, and sharing their feedback.


It has been an amazing journey and I look forward to contributing to coreboot in the future.

[GSoC] Address Sanitizer, Part 3

Hello again! The third and final phase of GSoC is coming to an end and I’m glad that I made it this far. In this blog post, I’d like to outline the work done in the last two weeks.


Memory functions

The compiler’s ASan instrumentation cannot insert __asan_load or __asan_store checks in memory functions like memset, memmove, and memcpy, because these functions are written in assembly.

While the Linux kernel replaces these functions with variants written in C, I took a different approach.

In coreboot, the assembly instructions for these memory functions are embedded into C code using GNU’s asm extension. This gave me an opportunity to use the ASan library function named check_memory_region to manually check the memory state before performing each of these operations. At the start of each function, I added the following code snippet:

#if (ENV_ROMSTAGE && CONFIG(ASAN_IN_ROMSTAGE)) || (ENV_RAMSTAGE && CONFIG(ASAN_IN_RAMSTAGE))
	check_memory_region((unsigned long)src, n, false, _RET_IP_);
	check_memory_region((unsigned long)dest, n, true, _RET_IP_);
#endif

This way, I neither had to fiddle with the assembly instructions nor replace these functions with their C variants, which might have caused some performance issues. [CB:44307]
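For illustration, this is roughly how the snippet sits at the top of one of these functions; the function body itself, elided here, remains the existing inline-assembly implementation:

void *memcpy(void *dest, const void *src, size_t n)
{
#if (ENV_ROMSTAGE && CONFIG(ASAN_IN_ROMSTAGE)) || (ENV_RAMSTAGE && CONFIG(ASAN_IN_RAMSTAGE))
	/* Validate both buffers before the asm routine touches them. */
	check_memory_region((unsigned long)src, n, false, _RET_IP_);
	check_memory_region((unsigned long)dest, n, true, _RET_IP_);
#endif
	/* ... existing GNU asm implementation ... */
	return dest;
}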


Documentation

Since I finished a little early with what I had proposed to deliver, Werner suggested that I should write documentation on ASan and I am happy that he did. When I read the intro of the documentation guidelines, I realized how a feature as significant as ASan might go unnoticed and unused by many if it lacks proper documentation.

In the ASan documentation, I have tried my best to answer questions like how to use ASan, what kinds of bugs can be detected, which devices are currently supported, and how ASan support can be added to other architectures like ARM or RISC-V.

The documentation is not final yet and I’d really appreciate inputs from the community. So, please give it a read and share your feedback. [CB:44814]


In the end, I’d like to announce that ASan patches have been merged into the coreboot source tree. You can go ahead and make use of this debugging tool to look for memory corruption bugs in your code.

[ASan patches]

[GSoC] Address Sanitizer, Part 2

Hello again! It’s been a month since my last blog post, so there are many updates I’d like to share. I’ll first cover the Address Sanitizer (ASan) algorithm in detail and then summarize the progress made up to the second evaluation period.


If you recall my last post, I briefly talked about the principle behind ASan. Now, let us discuss the ASan algorithm in more depth. But first, we need to understand the memory layout and how the compiler’s instrumentation works.

Memory mapping

ASan divides the memory space into 2 disjoint classes:

  • Main Memory (mem): This represents the original memory used by our coreboot program in a particular stage.
  • Shadow Memory (shadow): This memory region contains the shadow values. The state of each aligned 8 bytes of mem is encoded in one byte of shadow. As a consequence, the size of this region is 1/8th of the size of mem. To reserve space in memory for this class, we added a new linker section named asan_shadow.

This is how we added asan_shadow in romstage:

#if ENV_ROMSTAGE && CONFIG(ASAN_IN_ROMSTAGE)
	_shadow_size = (_ebss - _car_region_start) >>  3;
	REGION(asan_shadow, ., _shadow_size, ARCH_POINTER_ALIGN_SIZE)
#endif

and in ramstage:

#if ENV_RAMSTAGE && CONFIG(ASAN_IN_RAMSTAGE)
	_shadow_size = (_eheap - _data) >> 3;
	REGION(asan_shadow, ., _shadow_size, ARCH_POINTER_ALIGN_SIZE)
#endif

The linker symbol pairs (_car_region_start, _ebss) and (_data, _eheap) are references to the boundary addresses of mem in romstage and ramstage respectively.

Now, there exists a correspondence between mem and shadow classes and we have a function named asan_mem_to_shadow that performs this translation:

void *asan_mem_to_shadow(const void *addr)
{
#if ENV_ROMSTAGE
	return (void *)((uintptr_t)&_asan_shadow + (((uintptr_t)addr -
		(uintptr_t)&_car_region_start) >> ASAN_SHADOW_SCALE_SHIFT));
#elif ENV_RAMSTAGE
	return (void *)((uintptr_t)&_asan_shadow + (((uintptr_t)addr -
		(uintptr_t)&_data) >> ASAN_SHADOW_SCALE_SHIFT));
#endif
}

In other words, asan_mem_to_shadow maps each 8 bytes of mem to 1 byte of shadow.

You may wonder what is stored in shadow. Well, there are only 9 possible shadow values for any aligned 8 bytes of mem:

  • The shadow value is 0 if all 8 bytes in qword are unpoisoned (i.e. addressable).
  • The shadow value is negative if all 8 bytes in qword are poisoned (i.e. not addressable).
  • The shadow value is k if the first k bytes are unpoisoned but the remaining 8-k bytes are poisoned. Here k can be any integer between 1 and 7 (1 <= k <= 7).

When we say a byte in mem is poisoned, we mean one of these special values is written into the corresponding shadow.
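For example, a single aligned 8-byte granule holding a 5-byte object followed by 3 bytes of redzone is encoded like this:

   mem (8 bytes):     b0 b1 b2 b3 b4 | rz rz rz
   shadow (1 byte):   0x05   (first 5 bytes addressable, last 3 poisoned)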

Instrumentation

The compiler’s ASan instrumentation adds a runtime check to every memory instruction in our program, i.e. before each memory access of size 1, 2, 4, 8, or 16 bytes, a call to either __asan_load(addr) or __asan_store(addr) is inserted.

Next, it protects stack variables by inserting gaps around them called ‘redzones’. Let’s look at an example:

int foo()
{
	char a[24] = {0};
	int b[2] = {0};
	int i;

	a[5] = 1;

	for (i = 0; i < 10; i++)
		b[i] = i;

	return a[5] + b[1];
}

For this function, the instrumented code will look as follows:

int foo()
{
	char redzone1[32]; // Slot 1, 32-byte aligned
	char redzone2[8];  // Slot 2
	char a[24] = {0};  // Slot 3
	char redzone3[32]; // Slot 4, 32-byte aligned
	int redzone4[6];   // Slot 5
	int b[2] = {0};    // Slot 6
	int redzone5[8];   // Slot 7, 32-byte aligned
	int redzone6[7];
	int i;
	int redzone7[8];

	__asan_store1(&a[5]);
	a[5] = 1;

	for (i = 0; i < 10; i++) {
		__asan_store4(&b[i]);
		b[i] = i;
	}

	__asan_load1(&a[5]);
	__asan_load4(&b[1]);
	return a[5] + b[1];
}

As you can see, the compiler has inserted redzones to pad each stack variable. Also, it has inserted function calls to __asan_store and __asan_load before writes and reads respectively.

The shadow memory for this stack layout is going to look like this:

Slot 1:        0xF1F1F1F1
Slots 2, 3:    0xF1000000  
Slot 4:        0xF1F1F1F1
Slots 5, 6:    0xF1F1F100  
Slot 7:        0xF1F1F1F1

Here F1, being a negative value, indicates that all 8 bytes in the qword are poisoned, whereas a shadow value of 0 indicates that all 8 bytes are accessible. Notice that in slots 2 and 3, the variable ‘a’ is concatenated with a partial redzone of 8 bytes to make it 32-byte aligned. Similarly, a partial redzone of 24 bytes is added to pad the variable ‘b’.

The process of protecting global variables is a little different from this. We’ll talk about it later in this blog post.

Now, as we have looked at the memory mapping and the instrumented code, let’s dive into the algorithm.

ASan algorithm

The algorithm is pretty simple. For every read and write operation, we do the following:

  • First, we find the address of the corresponding shadow memory for the location we are writing to or reading from. This is done by asan_mem_to_shadow().
  • Then we determine if the access is valid based on the state stored in the shadow memory and the size requested. For this, we pass the address returned from asan_mem_to_shadow() to memory_is_poisoned(). This function dereferences the shadow pointer to check the memory state.
  • If the access is not valid, it reports an error in the console mentioning the instruction pointer address, the address where the bug was found, and whether it was a read or a write. In our ASan library, we have a dedicated function asan_report() for this.
  • Finally, we perform the operation (read or write).
  • Then we continue and move on to the next instruction.
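In pseudo-C, the whole check looks roughly like this. The argument lists and prototypes are simplified for the sketch; the real library functions are the ones named above:

#include <stdbool.h>
#include <stddef.h>

void *asan_mem_to_shadow(const void *addr);
bool memory_is_poisoned(const void *shadow, size_t size);
void asan_report(const void *addr, size_t size, bool write, unsigned long ip);

/* Sketch of the check performed before an access of `size` bytes at `addr`;
 * `write` distinguishes __asan_store from __asan_load. */
static void check_access(const void *addr, size_t size, bool write,
			 unsigned long ip)
{
	const void *shadow = asan_mem_to_shadow(addr);

	if (memory_is_poisoned(shadow, size))
		asan_report(addr, size, write, ip);	/* report, never abort */

	/* The instrumented code then performs the actual read or write and
	 * execution continues with the next instruction. */
}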


Note that whether or not the access is valid, ASan never aborts the current operation. However, the calls to ASan functions do add a performance penalty of about 1.5x.

Now, as we have understood the algorithm, let’s look at the function foo() again.

int foo()
{
	char a[24] = {0};
	int b[2] = {0};
	int i;

	__asan_store1(&a[5]);
	a[5] = 1;

	for (i = 0; i < 10; i++) {
		__asan_store4(&b[i]);
		b[i] = i;
	}

	__asan_load1(&a[5]);
	__asan_load4(&b[1]);
	return a[5] + b[1];
}

Have a look at the for loop. The array ‘b’ has a length of 2, but we are writing to it even beyond index 1.

As the loop executes for index 2, ASan checks the state of the corresponding shadow memory for &b[2]. Now, let’s look at the shadow memory state for slot 7 again (shown in the previous section). It is 0xF1F1F1F1, so the shadow value for the location b[2] is F1. This means the address we are trying to access is poisoned, so ASan is triggered and reports the following error in the console log:

ASan: stack-out-of-bounds in 0x07f7ccb1
Write of 4 bytes at addr 0x07fc8e48

Notice that 0x07f7ccb1 is the address the instruction pointer was pointing to and 0x07fc8e48 is the address of the location b[2].


ASan support for romstage

In romstage, coreboot uses the CPU cache as RAM to provide memory for the stack and heap. This poses a challenge when adding support for ASan to romstage, for two reasons.

First, even within the same architecture, the size of the cache varies across platforms. So, unlike ramstage, we can’t enable ASan in romstage for all platforms by testing on a handful of devices; we have to test ASan on each platform before adding this feature. We therefore decided to introduce a new Kconfig option, HAVE_ASAN_IN_ROMSTAGE, to denote whether a particular platform supports ASan in romstage. Now, for each platform on which ASan in romstage has been tested, we’ll just select this config option. Similarly, we also introduced HAVE_ASAN_IN_RAMSTAGE to denote whether a given platform supports ASan in ramstage.

The second reason is that the cache is very small compared to RAM, so the available memory is quite limited. In order to fit the asan_shadow section on as many platforms as possible, we have to make efficient use of it. Thankfully, this problem was solved to a large extent by our GCC patch, which allowed us to append the shadow memory buffer to the region already occupied by the coreboot program. (I ran the Jenkins builder and it built successfully for all boards except the ones based on either the Braswell SoC or the i440BX northbridge, where the cache area got full and thus couldn’t fit the asan_shadow section.)

As a first step, we have enabled ASan in romstage for the QEMU, Haswell, and Apollo Lake platforms, as they have been tested.

Further, the results of the Jenkins builder indicate that, with the current translation function, support for ASan in romstage can be added on all x86 platforms except the two mentioned above. Therefore, I’ve asked everyone in the community to participate in testing ASan so that this debugging tool can be made available on as many platforms as possible before GSoC ends.


Global variables

When we initially added support for ASan in ramstage, it wasn’t able to detect out-of-bounds bugs in the case of global variables. After debugging the code, I found that the redzones for the global variables were not poisoned. So, I went through GCC’s ASan and Linux’s KASAN implementation again and realized that the way the compiler protects global variables is quite different from its instrumentation for stack variables.

Instead of poisoning the redzones of global variables directly, the compiler inserts constructors invoking the library function named __asan_register_globals to populate the relevant shadow memory regions. To this function, the compiler also passes an instance of the following type:

struct asan_global {
       /* Address of the beginning of the global variable. */
       const void *beg;

       /* Initial size of the global variable. */
       size_t size;

       /* Size of the global variable + size of the red zone. This
          size is 32 bytes aligned. */
       size_t size_with_redzone;

       /* Name of the global variable. */
       const void *name;

       /* Name of the module where the global variable is declared. */
       const void *module_name;

       /* A pointer to struct that contains source location, could be NULL. */
       struct asan_source_location *location;
};
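A simplified sketch of what each constructor call ends up doing with this struct is shown below. The shadow-marking helpers here are illustrative stand-ins; the real implementation follows KASAN more closely:

#include <stddef.h>

/* Illustrative stand-ins for the shadow-marking helpers. */
void asan_poison_shadow(const void *addr, size_t size);
void asan_unpoison_shadow(const void *addr, size_t size);

void __asan_register_globals(struct asan_global *globals, size_t size)
{
	size_t i;

	for (i = 0; i < size; i++) {
		const struct asan_global *g = &globals[i];
		/* Round the object size up to the 8-byte shadow granule. */
		size_t aligned = (g->size + 7) & ~(size_t)7;

		/* Keep the global itself addressable... */
		asan_unpoison_shadow(g->beg, g->size);
		/* ...and poison the redzone the compiler placed behind it. */
		asan_poison_shadow((const char *)g->beg + aligned,
				   g->size_with_redzone - aligned);
	}
}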

So, to enable the poisoning of global variables’ redzones, I created a function named asan_ctors which calls these constructors at runtime and added it to the ASan initialization code for ramstage.

You may wonder why asan_ctors() is only added to ramstage. This is because the use of global variables is prohibited in coreboot’s romstage, so there is no need to detect global out-of-bounds bugs there.


Next steps..

In the third coding period, I’ve started working on adding support for ASan to memory functions like memset, memmove and memcpy. I’ll push the patch on Gerrit pretty soon.

Once this is done, I’ll start writing documentation on ASan, answering questions like how to use ASan, what kinds of bugs can be detected, which devices are currently supported, and how ASan support can be added to other architectures like ARM or RISC-V.

[ASan patches]

[GSoC] Address Sanitizer, Part 1

Hello everyone. My name is Harshit Sharma (hst on IRC). I am working on the project to add the “Address Sanitizer” feature to coreboot as a part of GSoC 2020. Werner Zeh is my mentor for this project and I’d like to thank him for his constant support and valuable suggestions.

It’s been a fun couple of weeks since I started working on this project. Though I found the initial few weeks quite challenging, I am glad that I was able to get past them and learn some amazing things I’ll cherish for a long time.

Also, being a student, I find it incredible to have gotten the chance to work with and learn from such passionate, knowledgeable, and helpful people who are always available over IRC to assist.

A few of you may already know about the progress we’ve made from Werner’s message on the mailing list. This is a more comprehensive report on the work we had done prior to the first evaluation period.


What is Address Sanitizer ?

Address Sanitizer (ASan) is a runtime memory debugging tool designed to find out-of-bounds and use-after-free memory bugs. It is part of a toolset Google has developed, which also includes the Undefined Behaviour Sanitizer (UBSan), Thread Sanitizer (TSan), and Leak Sanitizer (LSan).

This tool would help in improving code quality and thus would make our runtime code more robust.


Compile with ASan

The foremost task was to ensure that coreboot compiles without any errors after ASan is enabled. For this, we created dummy functions and added the relevant GCC flags to build coreboot with ASan. We also introduced a Kconfig option (currently only available on the x86 architecture) to enable this feature. (Patch [1])


Porting code from Linux

The design of ASan in coreboot is based on its implementation in the Linux kernel, also known as Kernel Address Sanitizer (KASAN). However, coreboot differs a lot from the Linux kernel due to its multiple stages, and that is what poses a challenge.


Design of ASan

ASan uses compile-time instrumentation that adds a run-time check before every memory instruction.

The basic idea is to encode the state of each aligned 8 bytes of memory in one byte of the shadow region. As a consequence, the shadow memory takes up 1/8th of the available memory. The compiler’s instrumentation adds __asan_load and __asan_store function calls before memory accesses, which look up the corresponding shadow memory address and then decide whether the access is legitimate based on the stored state.

If the access is not valid, it reports an error stating the instruction pointer address, the address where the bug was found, and whether it was a read or a write.


Need for GCC patch

Unlike the Linux kernel, which has a static shadow memory layout, coreboot has multiple stages with very different memory maps. Thus we require a different shadow offset address for each stage.

Unfortunately, GCC presently only supports a static shadow offset address, specified at compile time using the -fasan-shadow-offset flag.

So, we came up with a GCC patch that introduces a feature to use a dynamic shadow offset address. Through this, we enable GCC to determine the shadow offset address at runtime using a callback function named __asan_shadow_offset().

Though we’ve presented our patch to GCC’s developers to convince them to include this change in their upcoming version, for now, ASan on coreboot only works if this patch is applied. (Patches [2] and [3])


Allocating and initializing shadow buffer

Instead of allocating a shadow buffer for the whole memory region, we decided to only define shadow memory for the data and heap sections. Since we don’t want to run ASan on hardware-mapped addresses, this saves a lot of memory space.

Once the shadow region is allocated by the linker, the next task is to initialize it as early as possible at runtime. This is done by a call to asan_init(), which unpoisons the shadow memory, i.e. sets the shadow bytes corresponding to the addresses in the data and heap sections to zero.


ASan support for ramstage

We decided to begin with a comparatively simpler stage, i.e. ramstage. Since ramstage uses DRAM, the implementation is common across all platforms of a particular architecture. Moreover, ramstage provides enough room in memory to place our shadow region.

For now, we have only enabled this feature for the x86 architecture, and it is able to detect stack-out-of-bounds and use-after-scope bugs. Though the patches are still under review, we tested them on QEMU and the Siemens mc_apl3 (Apollo Lake based) and so far it looks good to us. (Patch [4])


Next steps..

In the second coding period, we’ve started working on adding ASan to romstage. I will push a change on Gerrit once we make some considerable progress.


We further encourage everyone in the community to review the patches and test this feature on their hardware for better test coverage. The more feedback we get, the better the chances of having a high-quality debugging tool.