Firstly, sorry for the delay in posting an update on the work. I had been busy turning the design into code and wanted to post after its successful completion.
As I had talked about in the previous post, we did a detailed analysis of the existing read() and map() calls. The original log, with all the extra gibberish removed, can be seen here. The first design modification was to remove the mappings done to fetch the cbfs_header: these are the 0x20-sized mappings we see in the log. They were unnecessary and could be done away with. And we did! 😛 This log shows the first optimized build; Stage 1 -> Part 1 -> done.
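The idea behind dropping those repeated 0x20 mappings can be sketched roughly as caching the master header after the first lookup, so later lookups reuse it instead of issuing a fresh small mapping each time. This is a minimal illustration only; the struct fields and function names below are hypothetical stand-ins, not coreboot's actual API.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, trimmed-down stand-in for the CBFS master header. */
struct cbfs_header {
	uint32_t magic;
	uint32_t romsize;
	uint32_t align;
	uint32_t offset;
};

/* map() callback type: maps `size` bytes at offset `off` of the ROM. */
typedef const void *(*map_fn)(size_t off, size_t size);

static struct cbfs_header cached_header;
static int header_cached;

/* Map the small header region only once; later calls hit the cache. */
const struct cbfs_header *get_cbfs_header(map_fn map)
{
	if (!header_cached) {
		const void *p = map(0, sizeof(cached_header));
		if (p == NULL)
			return NULL;
		memcpy(&cached_header, p, sizeof(cached_header));
		header_cached = 1;
	}
	return &cached_header;
}
```

With this shape, every code path that previously mapped the header on its own now calls get_cbfs_header() and the mapping happens at most once per boot.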
Now we moved on to the more complex and colossal mappings. A function cbfs_find_file() was created, which returns the absolute data_offset of a file given the name and type we ask for. Once we have the whereabouts of the file, modifications were made in cbfs_load_stage() to appropriately read() and/or map() the various parts.
The files are arranged as -> [ cbfs_file ] [ cbfs_stage ] [ data ] <Thanks Aaron for this visualization>
cbfs_find_file() : works with the cbfs_file header to get details about the whereabouts of the file
cbfs_load_stage() : first reads the fundamental information about the stage, then does the corresponding map() or read()
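The two-step flow above can be sketched as follows. The struct layouts and callback types here are simplified, hypothetical versions of the real coreboot ones, meant only to show the shape of the lookup-then-load split:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical on-flash file header: [ cbfs_file ] precedes everything else. */
struct file_hdr {
	char name[16];        /* file name, NUL-terminated        */
	uint32_t type;        /* e.g. stage, payload, raw         */
	uint32_t data_offset; /* absolute offset of the file data */
	uint32_t data_len;
};

/* Hypothetical stage header: [ cbfs_stage ] sits in front of the data. */
struct stage_hdr {
	uint32_t compression;
	uint64_t load; /* load address                      */
	uint32_t len;  /* length of the data that follows   */
};

/* Reader callback: fills `dst` with `size` bytes at ROM offset `off`,
 * returning the number of bytes actually read. */
typedef size_t (*read_fn)(void *dst, size_t off, size_t size);

/* cbfs_find_file(): scan the headers and return the absolute
 * data_offset for a matching name and type, or -1 if not found. */
long cbfs_find_file(const struct file_hdr *files, size_t nfiles,
		    const char *name, uint32_t type)
{
	for (size_t i = 0; i < nfiles; i++) {
		if (files[i].type == type && strcmp(files[i].name, name) == 0)
			return (long)files[i].data_offset;
	}
	return -1;
}

/* cbfs_load_stage(): first read() the small stage header, then read()
 * the actual stage data into the destination buffer. */
int cbfs_load_stage(read_fn read, long data_offset, void *dst, size_t dst_max)
{
	struct stage_hdr sh;

	if (data_offset < 0)
		return -1;
	if (read(&sh, (size_t)data_offset, sizeof(sh)) != sizeof(sh))
		return -1;
	if (sh.len > dst_max)
		return -1;
	/* Stage data follows the stage header directly. */
	if (read(dst, (size_t)data_offset + sizeof(sh), sh.len) != sh.len)
		return -1;
	return 0;
}
```

The point of the split is that the expensive operation (reading or mapping the bulk data) happens exactly once, only after the cheap metadata lookups have pinned down where the data lives.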
Voila!! Stage 1 Complete! 😀
Now, the major issue still persisting is that the decompression of file data assumes memory-mapped access to its contents, and is hence quite inefficient due to that 'one' large buffer. So this is what we tackle next; to be more precise, a pipelined decompression strategy that would eliminate the need for one large data buffer.
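The shape of that pipeline, roughly, is "read a small chunk, feed it to the decompressor, repeat" instead of "map everything, then decompress". Here is a toy sketch of that loop, with a pass-through step standing in for the real decompressor and all names being hypothetical, not the eventual coreboot implementation:

```c
#include <stddef.h>
#include <string.h>

/* Source reader: fills `dst` with `size` bytes at offset `off`,
 * returning the number of bytes actually read. */
typedef size_t (*read_fn)(void *dst, size_t off, size_t size);

/* Sink: consumes one chunk of output. */
typedef void (*sink_fn)(const void *chunk, size_t size, void *ctx);

#define CHUNK 16 /* small bounce buffer instead of one large mapping */

/* Stream `total` bytes starting at `off`, chunk by chunk, through `sink`.
 * A real implementation would run the decompressor incrementally on each
 * chunk; here the data is passed through unchanged to show the plumbing. */
int stream_decompress(read_fn read, size_t off, size_t total,
		      sink_fn sink, void *ctx)
{
	unsigned char buf[CHUNK];
	size_t done = 0;

	while (done < total) {
		size_t want = total - done;
		if (want > CHUNK)
			want = CHUNK;
		if (read(buf, off + done, want) != want)
			return -1;
		sink(buf, want, ctx);
		done += want;
	}
	return 0;
}
```

The memory footprint is then bounded by CHUNK (plus the decompressor's own state) rather than by the full size of the compressed file.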
It's getting more fascinating to work on the project by the day! Until the next post, signing off.
P.S. Thanks, Aaron, for helping out with any and every issue I face, and always finding the time to reply, even on Sundays! 😀