Ned and Chris explore a newly discovered flaw in UEFI Secure Boot, born of a critical OEM blunder, that allows rootkit attacks—and the only fix is a potentially daunting firmware update.
Secure Boot’s Achilles’ Heel
Ned and Chris dive into a freshly uncovered flaw in the Secure Boot process of PCs using UEFI firmware. They trace the evolution of boot processes from ENIAC’s manual grind to today’s automated systems, highlighting the crucial role of cryptographic keys in blocking unauthorized code. Along the way, they expose a serious blunder where some OEMs carelessly included untrusted platform keys in their UEFI firmware, opening the door to rootkit attacks. The fix? A firmware update—if you’re brave enough to handle it.
Ned: You don’t need to hear me. I’ll just do all the talking, and you can nod your head sympathetically as if you understand what I’m saying. It’s similar to what my dog does, except she’s usually, like, buried in her own crotch. So, please don’t do that.
Chris: And once again, we can all be very blessed and relieved that this is an audio-only podcast.
Ned: [laugh].
Ned: Hello, alleged human, and welcome to the Chaos Lever podcast. My name is Ned, and I’m definitely not a robot. I’m a real human person who likes to eat food, and go sunbathing, and shower on a weekly basis. That’s right, that’s the correct amount of showering for everyone. And then we all get clean and smell good? Right? With me is Chris, who’s also clean and shiny, and here. Hi Chris.
Chris: Well, I am clean. I can’t really comment on shiny. That seems inaccurate.
Ned: We just need a little bit of foundation and powder, and you won’t be nearly so shiny. It’ll be fine.
Chris: I’m out of bronzer.
Ned: [laugh]. Does bronzer make you less shiny, or just turn you more orange?
Chris: I don’t know. It’s just a fun word.
Ned: It is a fun word. I think we’re showing how little we know about makeup, which is not terribly surprising, if anybody’s watched our video clips in the past.
Chris: Right. If you’ve ever seen anything of us in meat space—
Ned: I know that’s the common parlance, but I don’t like it. I’ve never liked it.
Chris: I mean, I feel like the only time that’s appropriate is if you’re at a butcher shop.
Ned: That is meat space.
Chris: Right.
Ned: There you go. Maybe that’s part of my problem is, like… we’re not just meat. I mean, I’m definitely human, and not made out of metal parts, but like, we’re more than just meat. I’m sticking with it.
Chris: That right there is a self-help book that’s just waiting to be written.
Ned: [laugh]. Oh, Jesus. I can see that in the airport right now. You’re walking through and just, We’re More Than Just Meat 50 weeks on The New York Times bestseller list, which is pretty easy to do because that system is just horribly corrupt.
Chris: Right.
Ned: They have so many subcategories of bestsellers. Like, you can be a bestseller in—
Chris: And it doesn’t even take that many books. It’s like 10,000 books, and you’re on there.
Ned: Oh, really? That’s—
Chris: Yeah.
Ned: I bet I could s—well, all right, I know what I’m doing for the rest of my day. I’m going to have AI write a really terrible self-help book, and I’ll just slap my name on it.
Chris: I think that’s called The Secret.
Ned: [laugh]. Burn [laugh]. Oh, should we talk about some tech garbage?
Chris: Sure.
Ned: I thought we could dig into the world of how PCs actually boot up, through the lens of a recently discovered issue with Secure Boot and UEFI. And if you don’t know—
Chris: What did you call me?
Ned: —what those things are, listener, don’t worry. What’d you say? Eufee?
Chris: What did you call me?
Ned: [laugh]. Eufee. I don’t know how I feel about that one.
Chris: Might be a Pokémon.
Ned: It—oh, God, it might be. All right. So anyway, it was recently discovered that thousands of system boards are running code that could leave them open to compromise, and that code runs at such a low level on the system that even the best EDR and antivirus software in the world can’t catch it. What is this low-level code, what’s wrong with it, and why is it such a problem? That’s what we’re going to cover today. But first, I’m going to ramble about the history of computing for the next 20 minutes, as is tradition.
Chris: Yay.
Ned: Listen, you knew what you signed up for when you subscribed to this podcast [laugh]. This is what we do. I actually had to cut out about a page, page-and-a-half of stuff I wrote because it wasn’t actually relevant to the topic, I just found it really interesting. I was reading about ENIAC and, man, I just fell down a rabbit hole.
Chris: Yeah, that’ll happen.
Ned: Yeah. So, I’m not going to talk about that, or the fact that I read about half of the operational manual for the IBM 704.
Chris: Don’t you have, like, things to do?
Ned: God, no [laugh]. I’ve set up my life in such a way that I can sit and read a manual from 1956 for a system that I have no access to, and enjoy it.
Chris: I mean, I would be more harsh about this situation if I hadn’t spent a non-zero amount of time revisiting the world of the PDP-11 and various ways to emulate it, so…
Ned: Yeah, you don’t really have the high ground to stand on.
Chris: Yeah, let’s just move right along.
Ned: [laugh]. Okay. So, the specific exploit we’re talking about deals with the Secure Boot functionality of modern systems that use UEFI firmware. Or just UEFI. BIOS is kind of a separate thing?
Anyway, Secure Boot relies on a chain of trust using checksums and cryptographic signing that verifies the software components being loaded are valid and allowed to run. That includes things like EFI drivers and operating systems. For instance, Windows is signed using a certificate that UEFI recognizes, and so it’s allowed to boot. If you wrote your own operating system—which I don’t recommend, but you can—and tried to load it with Secure Boot enabled, the boot would fail because you didn’t sign your operating system with keys that Secure Boot recognizes.
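The decision Secure Boot makes at each step boils down to one question: is this image signed by a key the firmware already trusts? Here is a minimal sketch of that check, assuming plain RSA keys and Python’s cryptography package; real UEFI images are Authenticode-signed PE binaries, so this shows the idea rather than the actual format.

```python
# Sketch of the core Secure Boot decision: only run an image whose signature
# checks out against a key the firmware already trusts. Hypothetical helper
# using RSA + SHA-256; real firmware verifies Authenticode-signed PE binaries.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def allowed_to_boot(image: bytes, signature: bytes, trusted_key_pem: bytes) -> bool:
    public_key = serialization.load_pem_public_key(trusted_key_pem)
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True   # signed by a trusted key: boot it
    except InvalidSignature:
        return False  # unsigned, or signed with an unrecognized key: refuse to boot
```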
Chris: This can be both a good thing and a bad thing because, with some of the ancient computers we were talking about before, people literally did that. They were like, “I want it to do this, so I just wrote an assembly language operating system over my lunch break.”
Ned: Mm-hm. And don’t worry, we will talk about that [laugh].
Chris: [laugh]. But what this helps protect against is people maliciously and violently modern—modernizing is a strong word; modifying is a better word—
Ned: [laugh]. Yes.
Chris: —Windows.
Ned: Windows or Linux, for that matter, because some flavors of Linux are also signed, so UEFI will trust it. Now, where does Secure Boot get its list of trusted keys and certificates, and what verifies that the UEFI firmware hasn’t been tampered with? How do you build a chain of trust that starts from the very moment you push the power button on your computer? To understand that, we need to understand how computers used to boot up and how they do it now. Like I said, I did a lot of reading for this, and I really considered taking this all the way back to World War II and the bombe machines used at Bletchley Park—in part because I just finished reading a book called The Rose Code, which is fantastic, and I recommend reading it—and these are the types of computing devices that predate something like ENIAC or modern silicon-based transistors.
Still, it is interesting to consider how a computer actually gets up and running from a powered-off state. How does it go from nothing, a useless hunk of plastic, metal, and silicon, to running Crysis at 120 frames per second, or, I don’t know, whatever the magical equivalent is today? I haven’t really been paying attention.
Chris: Oh, Ned. Modern computers can’t do that either.
Ned: [laugh]. Okay, good. I don’t know. I was looking at monitors a couple weeks ago, and it had all these specs that I didn’t recognize. I was like, “What the hell is a nit?” I just want it to be able to go 1920 by 1080. Can I just do that? No. Now, I got to know about refresh rates and crap.
Chris: [laugh].
Ned: So, I did fall down about a 30-minute rabbit hole reading about ENIAC—okay, a 90-minute rabbit hole reading about ENIAC—but here’s the long and short of it: the earliest computers were basically calculating machines. ENIAC, for instance, ran ballistics tables for the US during World War II, and the program that it ran was loaded through literally just moving around cables to connect components and toggling switches. There was no operating system outside of the actual people who operated the hardware. So, I guess they were the operating system.
Chris: Right.
Ned: For a long time, there was no common operating system, bootloader, or even a standardized boot process for mainframe computers. The CPU was not this discrete chip that sat on a system board, but an actual large processing unit filled with vacuum tubes that needed to be initialized and prepared by the operators.
Chris: Yeah, and preparing them was a physical operation.
Ned: Yes.
Chris: Which is what the operating system does for you now.
Ned: For instance, the operators had to go and check all the tubes because tubes failed constantly.
Chris: All the time.
Ned: Yes. Fortunately, silicon transistors are slightly more reliable, unless, you know, you’re one of the new Intel processors that asks for the wrong voltage. See our tech news from last week [laugh].
Chris: Nice.
Ned: So, I’m going to skip ahead several decades and gloss over a lot of history because I did have that in here, but then I ended up deleting it because it was not germane to the conversation. But I did think it was neat, so we really do need to do a whole thing on mainframes at some point. Instead, let’s skip ahead to how a PC in the late-’80s, early-’90s would boot up. Side note, you might wonder where the term boot comes from. It’s really a shortened form of a bootstrap program, or bootstrapping, a term with roots going back to way before computers. It referred to an impossible or paradoxical task where one had to pull themselves up by their own bootstraps, which would be a scenario where you’re already wearing your boots, and if you try to pull yourself up, you can’t because, like, you’re standing on them.
Chris: That’s right, people. It’s paradoxical, not inspirational.
Ned: Yes, but at this point we have—well, the English language has bastardized it to the point that it means inspirational. So—
Chris: Literally.
Ned: Not figuratively. Or both because that doesn’t matter anymore either. [sigh]. God, we’re a mess. Anyway. The paradox of computers is, how does a computer know how to load its first program that can load all subsequent instructions? And the answer to that is to bake it right onto the chip.
On the early models of the Intel 8088 and 8086 chips, there was a predefined location where execution started when power was applied to the chip. The chip builds that memory location from two registers. The first register is CS, which stands for Code Segment, and the second one is IP, for Instruction Pointer, and those combined together form the memory location 0xFFFF0. I don’t know what that is in binary. Don’t ask me. Actually, I do know what it is in binary because—
Chris: Stop.
Ned: Okay [laugh]. That memory location points to ROM code mapped there. The job of the ROM code stored in read-only memory is to load the BIOS for the system. And BIOS stands for Basic Input/Output System, a term that dates back to the CP/M operating system created by Digital Research in the mid-’70s.
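For anyone who actually wanted that address spelled out: in 8086 real mode, the segment value is shifted left four bits and added to the offset to form a 20-bit physical address. A quick sketch, binary included:

```python
# 8086 real-mode addressing: physical address = (segment << 4) + offset.
# At power-on, CS = 0xFFFF and IP = 0x0000, which lands near the top of the 1 MB address space.
cs, ip = 0xFFFF, 0x0000
reset_vector = (cs << 4) + ip

print(hex(reset_vector))  # 0xffff0
print(bin(reset_vector))  # 0b11111111111111110000
```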
Chris: Right. And this is an important distinction from what we talked about just a minute ago. So, old computers in the ’50s—’40s, ’50s—when you turned them on, nothing happened.
Ned: Exactly.
Chris: It just sat there and waited for you to do something, to force an interaction based on electronic flow. The whole point of baking this stuff into the chip is this happens immediately when power is applied, so the program doesn’t have to be loaded, per se. That’s how it solves the paradox. The program is part of the computer.
Ned: Exactly. But you need somewhere to store that initial program, and that’s what the read-only memory is for. It’s the very first thing to run, and it gets the machine prepared to run all the other things. IBM, when they were designing their PC in the early ’80s, decided to use the same term, BIOS. Even though they didn’t license the code behind their implementation of BIOS, other vendors reverse engineered the PC BIOS to create their own firmware and motherboards. So, it was never an official, well-defined standard; it was just like, all right, this is what IBM is doing, and we’re all just going to copy them.
Chris: Yeah. And back then, it wasn’t doing a lot because it couldn’t do a lot.
Ned: Yeah.
Chris: It didn’t have the power and it didn’t have the storage space. It had to be this barest of bare-bones operation to just get the thing off the ground.
Ned: Yeah, the amount of storage space allocated for the BIOS code was in the kilobytes range. Like, eight kilobytes on the original PC, 64 on later ones. Very tiny. And the workaround was to have that code look in another location to load additional code to do more things.
And so, that’s how BIOS kind of ballooned out into a more fully functioning pre-boot environment. One of the things it did was a power-on self-test, during which it would enumerate all of the connected hardware devices, validate its own checksum, and also check the functioning of the CPU and memory. For those of us who have started up a four-socket server with 32 DIMM slots and an array card, that can take a while.
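As a small aside on that “validate its own checksum” step: PC-era ROMs typically used a simple 8-bit checksum where every byte summed together has to come out to zero modulo 256. A sketch of that idea (the exact scheme varied between vendors):

```python
def rom_checksum_ok(rom: bytes) -> bool:
    # Classic 8-bit ROM check: all bytes summed, modulo 256, must equal zero.
    # The final byte is chosen at build time to make the total come out right.
    return sum(rom) % 256 == 0

# Tiny fake "ROM": 0x10 + 0x20 + 0x30 = 0x60, so the pad byte is 0xA0 (0x100 - 0x60).
assert rom_checksum_ok(bytes([0x10, 0x20, 0x30, 0xA0]))
```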
Chris: Press the button, go get a coffee.
Ned: Yes, and if—[laugh] God help you, if you need to get into BIOS and you missed the prompt.
Chris: [laugh].
Ned: That’s another 15 minutes of your life you’re not getting back.
Chris: I thought it was F2. I thought it was F2. This one was Delete.
Ned: The fact that we never standardized on a button to hit to get into BIOS is one of the most frustrating things about servers.
Chris: “Dear Diary, not all keyboards have F11.”
Ned: Couldn’t we just pick, like, one key? Like, just make it the Delete key, please. Goddamn it. It’s not even consistent within the same vendor.
Chris: [laugh].
Ned: Sorry [laugh]. Back on topic. The last thing BIOS needs to do is try to load an operating system. So, it will pull a list of boot devices, along with other configuration information, from CMOS, also known as non-volatile RAM, and it will try each device in series until it finds one that has a valid bootloader program stored on it. That bootloader needs to be located at a specific area of the device so the BIOS can find it successfully. And that could be the master boot record at LBA 0 if it’s a hard drive, or sector 17 if it’s a CD-ROM. Don’t ask me what LBA stands for because I don’t remember. The boot sector on the device loads the bootloader, which in turn loads your operating system.
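For a hard drive, the “valid bootloader” test is famously minimal: the last two bytes of the 512-byte master boot record have to be the 0x55 0xAA signature. A hedged sketch, where disk.img is just a stand-in for whatever image or device you point it at:

```python
from pathlib import Path

def has_mbr_boot_signature(disk_image: Path) -> bool:
    # Read the first sector (the master boot record) and check the magic bytes
    # at offsets 510-511; BIOS treats 0x55 0xAA as "this sector is bootable."
    with disk_image.open("rb") as f:
        sector = f.read(512)
    return len(sector) == 512 and sector[510:512] == b"\x55\xaa"

print(has_mbr_boot_signature(Path("disk.img")))  # hypothetical disk image
```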
Chris: And incidentally, if you’ve ever tried to format, say, a USB stick to boot off of, and you don’t click the little checkbox that says, make this device bootable—
Ned: Uh-huh.
Chris: This is why you have a fully-fleshed out and complete—from a data and bits perspective—operating system on a USB that will do absolutely nothing.
Ned: Right. Because when the system starts, it needs to find that boot sector and a bootloader in that boot sector to do anything. Now, you might imagine that a boot process devised back in the early ’80s would have some serious shortcomings after 40 years, and you’d be right. As a replacement for traditional BIOS and the standard power-up process, we got UEFI and management engines like the Intel ME. UEFI was meant to remove some of the unnecessary legacy cruft of BIOS and establish a formal cross-architecture standard for boot firmware.
Where the BIOS was really just for x86 CPUs… we have more than that now. A lot more. So, the idea was, UEFI should be able to deal with 64-bit CPUs—hey, those are a thing—as well as ARM, and I guess RISC-V, and other architecture formats. It’s also a formal standard as opposed to BIOS, which was just reverse-engineered, and it added a thing called Secure Boot, as well as support for larger disk partitions, and a more robust pre-OS boot environment.
Chris: Why I need a graphical user interface for a boot environment, I still don’t understand, but here we are.
Ned: Indeed. Management engines like the Intel ME or a BMC—Baseboard Management Controller—on a server are a totally separate system that lives alongside the main CPU, outside of the primary boot process. They can watch over the boot process, and also supply attestation or auditing. So, a modern system boots up something like this: you press the power button—that’s important—
Chris: Great start.
Ned: Yes. And then the Platform Controller Hub, which is where something like the Intel Management Engine lives, starts up before the CPU. The CPU still has no power applied to it. The PCH does some stuff, and then it issues a reset to the CPU to start its boot process. The CPU loads the UEFI firmware from the Serial Peripheral Interface flash storage, the firmware accesses the boot sector on the configured boot device, and from that sector, the bootloader is loaded into memory. Control is handed over from the UEFI process to the bootloader, and the bootloader actually loads the operating system image from the boot device. That’s about six steps.
There’s a lot of stuff happening in here, which invites the possibility for shenanigans of the malicious type. That’s where Boot Guard and UEFI Secure Boot come in. The goal is to build a chain of trust from the hardware all the way up to the operating system. Each step in the process checks and verifies that the next step in the process is valid, through a system of checksums and cryptographically signed keys, which gets us back to where we started. So, let’s walk through the process again, but this time with an eye to signed software.
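Here is a toy model of that chain-of-trust structure. The stage layout and the verify callback are illustrative, not real firmware interfaces; the root key stands in for the value fused into the hardware.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class Stage:
    name: str
    image: bytes
    signature: bytes                   # signature over `image`
    next_public_key: Optional[bytes]   # key this stage uses to check the stage after it

def boot_chain(stages: Sequence[Stage], root_public_key: bytes,
               verify: Callable[[bytes, bytes, bytes], bool]) -> None:
    # The root key plays the part of the hash burned into the silicon; every
    # verified stage then hands over the key used to check the next stage.
    key = root_public_key
    for stage in stages:
        if not verify(key, stage.image, stage.signature):
            raise RuntimeError(f"{stage.name}: signature check failed, boot halted")
        print(f"{stage.name}: verified, handing off")
        key = stage.next_public_key
```

The important property is that a failure anywhere stops the chain, so nothing later ever gets a chance to run unverified.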
Sticking with Intel, they have a private key housed internally somewhere in Intel, and they use that private key to sign their Intel ME software, and the public half of that key is stored on the chip die. They burn it onto the chip die. It cannot be changed. Intel ME loads one or more Authenticated Code Modules, or ACMs, from the Firmware Interface Table. Each of those ACMs is signed by the Original Equipment Manufacturer, or OEM, who is producing the system based on Intel’s chips.
When Intel ships their chips in manufacturing mode to the OEM, the OEM uses special software to burn their public key into the CPU using field programmable fuses. Those fuses are permanent and unchangeable. Once it gets burned in, you can’t remove it. Which is good, right? Stored in hardware now. Bad if, you know, somehow that OEM loses their private key, but that’s a story for another time.
Chris: I feel like you’re getting ahead of yourself.
Ned: Yes, indeed. That means that only firmware signed by the OEM or Intel is able to run in this pre-UEFI environment. We’re now up to the point where the UEFI firmware is being loaded, and that firmware is also cryptographically signed and verified using the OEM keys. Secure Boot in UEFI also has a set of cryptographic keys it uses to verify aspects of the boot process. The platform key is the root of the chain, and it helps to verify all the other keys involved with UEFI.
Key Exchange Keys are the set of keys trusted by the firmware, and they can come from the OEM or third-party vendors. There are also two databases that are part of Secure Boot, Authorized and Forbidden, and each of those contains a list of hashes or certificates for allowed or forbidden images that can run on the system. What’s weird is that the UEFI firmware itself is trusted because it’s signed, but then inside of UEFI, the root of trust starts with this platform key, and that platform key is not necessarily burned in anywhere; it’s just part of the UEFI firmware package. Now, we trust it because the firmware package was signed by the OEM, but the actual platform key itself, that’s the start of a new chain, and that’s kind of important. In addition to launching the bootloader, the UEFI firmware can also load what are called EFI drivers, and those drivers can include malicious code.
However, they’re also signed, which should prevent such an occurrence. Should. The last thing in this chain is the bootloader, which needs to be signed with a Key Exchange Key that is trusted by the system. Windows, Ubuntu, Red Hat, they all have signed bootloaders. So, in theory, the combination of Boot Guard and Secure Boot should mean that every single step of the boot process is validated and verified by the previous step. So, what could possibly go wrong?
Chris: Famous question with a lot of answers, is my guess.
Ned: [laugh]. And none of them are great. So, if the private key used by any of the OEMs is leaked, that OEM is screwed, because the OEM public keys for the ACMs loaded by the Intel ME engine are burned onto the chip with those field programmable fuses. Short of replacing the physical chip that shipped that way from the OEM, there’s no fix for losing the private key. That would be real bad. We’re talking RMAs for every single system board that they shipped with that key burned into it. So, that’s not great.
Intel’s keys could also be compromised with similar disasters because they burn their public key into the chip die. That is a great way to make sure that it never gets altered maliciously, but it’s also a great way to make sure it cannot be changed if the private key gets lost somehow. Yep, the operating system has no idea about [laugh] any of this. As I pointed out, the UEFI firmware has its own root key that’s called the platform key. If the private half of that platform key was compromised, that would be super bad, too. Malicious actors could add their own Key Exchange Keys into the UEFI firmware and sign their malware to be trusted. And that’s exactly what security researchers from Binarly discovered.
So, now we’re getting into the actual exploits. To understand how this all happened, we first have to talk a little bit about the independent BIOS vendor market. If you’ve ever watched a computer boot up, you’ve probably seen the logo of one of two BIOS vendors. There are some other ones out there, but the two biggest ones that you probably have seen are American Megatrends or Phoenix Technologies. There are a handful of other companies, but these two, they really roll the roost. I’m sure you familiar with at least one of those two, Chris, from your non-Mac interactions.
Chris: Well, I mean, I will say this for the record: one has a better name than the other one.
Ned: Yes, it does. Megatrends is awesome. Right? That’s what you were—[laugh]. Original equipment manufacturers that develop motherboards are probably getting their BIOS firmware from AMI or Phoenix. Along with that firmware comes reference code for the vendor to use when they’re developing their own firmware.
The OEMs then sell their system boards to device manufacturers like Dell, HPE, Samsung, et cetera, et cetera. The reference code here seems to be the real problem. It appears that a non-zero number of OEMs are simply taking that reference code and implementing it as is. It’s not what it’s there for, but that seems to be what they’re doing, or at least close. And that includes the not-for-production platform key included in the code that is clearly labeled as ‘do not trust’ or ‘do not ship.’ That’s the common name in the certificate.
Chris: I just wish that it would tell you what they wanted you to do.
Ned: I know, right? Don’t be coy. But that’s still just the public half of the platform key. What about the accompanying private key? Well, unfortunately, that private key is included in the reference software, and it was leaked on a GitHub repository in 2022.
Chris: [sigh].
Ned: No.
Chris: [sigh].
Ned: I know. While we don’t know the exact details of how it was leaked, it’s likely that the developer in question didn’t realize the key was there, or didn’t mean to set the repository to public, or didn’t think anyone would leave a platform key marked ‘do not trust’ in their UEFI firmware. They were very, very wrong about that last one.
Chris: Oh, you sweet summer child.
Ned: Binarly has conducted a survey of tens of thousands of UEFI firmware images, and they have found that more than 10% of them contain an untrusted platform key. That leaves all of them vulnerable to a rootkit like BlackLotus. You can find the full list on their blog, and a tool to check your system. So, how bad is it? The reason this is so bad is that it has the capability to completely avoid typical virus and malware detection and remediation.
Rootkits—or bootkits, depending on what you want to call them—get loaded before the operating system, and so they can hide themselves from the operating system entirely. And they’re super persistent. Even if you reinstall your operating system, throw out your SSD and get a new one, or even update your firmware, there is the possibility it will persist. Your only real option is to burn it with fire, assuming you even know that the rootkit is there.
Chris: Right. This is the problem, and there are two parts to it. One is that the environment is so simple it doesn’t have a lot of checks or counter-checks, and by not a lot, I mean none, except for that cryptographic key chain.
Ned: Right.
Chris: So therefore, as we just alluded to, if the cryptographic key chain can’t be trusted, literally nothing else can be trusted, either.
Ned: Exactly. And because this firmware is not open-source in any way—it’s all a closed system—you, as the owner of your system, don’t really have access to inspect this kind of thing, at least not easily. Can this be fixed? Yes. Your vendor can update the UEFI firmware to replace the untrusted platform key. That’s probably the best approach if you’re impacted.
Use the tool that Binarly published, see if you’re impacted, then go check the website of wherever you bought your computer from and find out if they have a firmware update that fixes it. If they don’t, you can technically change the platform key yourself—it is programmable, after all—but here there be dragons. I probably wouldn’t unless I had to. So, in case you didn’t have enough to worry about today, congratulations, I’ve given you something else. Man, it feels good to give, doesn’t it?
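If you want a rough, unofficial first pass (not a replacement for the tool Binarly published, which is the thorough way to check), one heuristic is to dump the PK variable and look for the telltale strings from the reference certificate. A sketch, reusing the same efivarfs path as before:

```python
from pathlib import Path

# Same efivarfs location and GUID as before; skip the 4-byte attribute header.
PK_PATH = Path("/sys/firmware/efi/efivars/PK-8be4df61-93ca-11d2-aa0d-00e098032b8c")
pk_data = PK_PATH.read_bytes()[4:]

# The certificate's subject is plain ASCII inside the DER blob, so a substring
# search is enough for a quick-and-dirty red flag.
markers = [m for m in (b"DO NOT TRUST", b"DO NOT SHIP") if m in pk_data]

if markers:
    print("Warning: platform key looks like an untrusted reference key:", markers)
else:
    print("No obvious 'DO NOT TRUST' or 'DO NOT SHIP' markers in the PK variable.")
```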
Chris: You are a helpy helper.
Ned: The helpiest. I included so many links in this, so many rabbit holes to go down. If you’re interested, it’s all included in the [show notes 00:29:50]. But that’s going to do it.
Thanks for listening, or something. I guess you found it worthwhile enough if you made it all the way to the end, so congratulations to you, friend, you accomplished something today. You can go sit on the couch, fire up your computer, and install your own rootkit. You’ve earned it. BlackLotus is great. You can find more about this show by visiting our LinkedIn page, just search ‘Chaos Lever,’ or go to our website, chaoslever.com where you’ll find show notes, blog posts, and general tomfoolery. We’ll be back next week to see what fresh hell is upon us. Ta-ta for now.