Negligence as a Service | Chaos Lever

Welcome back, fellow humans (and bots in disguise)! This week on Chaos Lever, Chris and Ned dive into the dusty archives and slap us with a two-by-four of cybersecurity déjà vu. We’re talking legendary hacks that should have taught us better—and yet, here we are. From Emacs-enabled espionage in 1986 to Equifax’s honor-system security policies, it's a masterclass in how not to protect your data.
🧠 Lessons? Sure. But mostly it's about how we never learn them. We dissect what really caused these breaches—not slick zero-days, but plain old negligence and a fondness for not patching things. Also featured: expired SSL certs, trust as a security model, and how managing your asset inventory is more crucial than ever.
💥 Oh, and Ned tried to do a handstand for a cloud video and bled. Not relevant to cybersecurity, but 100% relevant to the Chaos Lever experience. Stick around for reenactments, rants, and ruminations on how saying “I accept the risk” is not a security policy.
🔗 LINKS
Apache Struts bug:
https://blog.talosintelligence.com/apache-0-day-exploited/
Nova episode about the 1986 hack:
https://archive.org/details/The_KGB_The_Computer_and_Me_1990
Senate investigation into Equifax:
https://www.hsgac.senate.gov/wp-content/uploads/imo/media/doc/FINAL%20Equifax%20Report.pdf
CVE system creation by MITRE:
https://www.cve.org/Resources/General/Towards-a-Common-Enumeration-of-Vulnerabilities.pdf
00:00 - Legal Advice from People Who Aren’t Lawyers
05:00 - A Tale of Two Hacks (Guess That Breach!)
10:30 - Hack #1: 1986, Emacs, and the KGB
18:00 - Hack #2: Equifax and Eternal SSLs
26:00 - CVE, CVSS, and the “Just Patch It” Philosophy
34:00 - Security as a Cost Center vs. Long-Term Strategy
39:00 - Exit Rants & Horse DUIs
[00:00:00.16]
Chris: How many lyrics can we do before we get sued?
[00:00:03.27]
Ned: We've been sued for years. I just refuse to check that inbox.
[00:00:07.25]
Chris: That's true. If you don't open the email, technically, you're not going to prison.
[00:00:12.07]
Ned: It's just like being served with a subpoena. As long as you can avoid the person who's serving it to you, it means you didn't know. And ignorance is always a great defense in the court.
[00:00:23.06]
Chris: It's also bliss.
[00:00:25.24]
Ned: So I can blissfully stay out of court is what you're saying.
[00:00:30.24]
Chris: I don't think anybody has ever been blissful in court.
[00:00:35.02]
Ned: I mean, I've been set adrift on memory bliss, but not in a courtroom.
[00:00:40.09]
Chris: Is that like kratom? Is that something you buy at a gas station?
[00:00:42.26]
Ned: No, it's more like... it makes you feel strong. No. That stuff is scary. I am very suspicious of any giant jug of powder that you're meant to mix with water and just drink.
[00:00:59.26]
Chris: Yeah, I mean, that's why I stick with krokodil. It's natural.
[00:01:04.26]
Ned: You eat crocodiles?
[00:01:06.28]
Chris: You don't know what that is, do you? Just do the intro.
[00:01:11.06]
Ned: Is this like... What was that supplement that's supposed to be the only thing you need? Hume or something?
[00:01:18.19]
Chris: Cocaine, I think, is what you're thinking of.
[00:01:21.00]
Ned: That's the one. I was pronouncing it wrong. Hello, alleged human, and welcome to the Chaos Lever podcast. My name is Ned, and I'm definitely not a robot. I am a real human person who requires sustenance of a non-liquid variety most of the time. Although lead acid seems delicious to me. With me is Chris, who is also delicious to me. I mean, here. Hi, Chris.
[00:01:55.11]
Chris: And this, dear listeners, is why we record from a distance.
[00:02:00.14]
Ned: It's so lifelike. I could just reach out and eat you. I mean, touch you.
[00:02:04.29]
Chris: You're making it weird.
[00:02:07.02]
Ned: I booped you on the nose.
[00:02:10.16]
Chris: Did you feel it? I saw that, and I decided not to interact with it.
[00:02:16.10]
Ned: Yeah. I mean, it's better than patting you on the head. I feel like that's beneath you or above you. Yeah.
[00:02:24.24]
Chris: Unless I'm doing a handstand, in which case a lot of things have gone wrong.
[00:02:29.22]
Ned: When was the last time that you did a handstand, on purpose or by accident?
[00:02:33.26]
Chris: I'm honestly going to go with never. You all are lucky that I can stay upright on the feet things. It's fair. Doing that with the hand things? Insanity.
[00:02:47.09]
Ned: Man, because I'm an idiot, I will just preface it with that. And the story, in conclusion: I was trying to do a sponsored video, and the premise was that this thing turns cloud pricing on its head. So I thought I would shoot the beginning, where I was delivering it, doing a handstand or headstand against a wall.
[00:03:15.16]
Chris: And when actually did they release you from the hospital?
[00:03:17.29]
Ned: I ended up bleeding. But not why you would think. It turns out my toenails are just too sharp.
[00:03:27.13]
Chris: I don't even need more information. We can just move on.
[00:03:30.29]
Ned: I don't want to give more details because I feel like ignorance is in fact a bliss, as we discussed.
[00:03:37.08]
Chris: Beautiful.
[00:03:37.13]
Ned: Thank you. Speaking of ignorance.
[00:03:43.11]
Chris: So I was trying to figure out a lot of things. That's just basically my life. Wait, stay on topic. Okay. I was trying to figure out a topic title because I had this idea, and I wrote the idea, and I got everything except for the beginning, which I feel like is an important part.
[00:04:03.24]
Ned: It's where you start.
[00:04:06.27]
Chris: I think what we're going to call it, though, is back to the basics, 2025/2017. I mean, 1986 edition.
[00:04:17.02]
Ned: When you say basics, this is just crap that you should have been doing this whole time, but people keep forgetting.
[00:04:22.19]
Chris: Well, I mean, you're spoiling the ending, but yes.
[00:04:27.19]
Ned: I think people can read between the lines.
[00:04:31.21]
Chris: They listen to podcasts so they don't have to read.
[00:04:36.07]
Ned: Oh, fair point. You fool. They're mostly here to point and laugh. That's fair, too. We should let them indulge in some schadenfreund. No one can say it right, so it doesn't matter.
[00:04:49.17]
Chris: I take your accent and raise you an umlaut. I don't even know what language we're speaking anymore.
[00:05:00.02]
Ned: I don't know either.
[00:05:01.14]
Chris: Anyway, let's start. I want to give you a tale of two different hacks. Okay. Stop me if these seem familiar. They're both famous, but I want to generalize, and I want to be a little bit vague at first, just to give you the gist of what we're talking about. I promise we're going to dig into both of them and see how much was similar and how much maybe things have not changed.
[00:05:31.21]
Ned: All right.
[00:05:34.19]
Chris: Real quick short intro. Hack number one. A systems administrator working at a university was tasked with identifying weird eccentricities that the accounting systems were showing in their results. This was a complex accounting system, intended to keep track of every single user and all of their system usage, that had been in place for ages and had never shown a problem. There had never been a mismatch. The systems administrator eventually realized that someone had broken into the system using various evasion techniques, bugs, et cetera, that we're going to talk about, that allowed the attacker to get superuser access through things like file editors, scheduling systems, and persistent backdoors, and then went ahead and deleted the evidence that would have shown he was ever on the systems in the first place. Now, in this case, the attackers were looking for compromising data, military secrets, et cetera, which could then be sold to adversarial countries. The bugs were eventually discovered, as was the whole thing, because otherwise I wouldn't be telling you about it. It was particularly highlighted by one of the investigators that it wasn't really about the flaws in the computers; it was more that systems left their front doors wide open.
[00:07:01.22]
Ned: Okay.
[00:07:02.15]
Chris: That's hack number one. Hack number two. Attackers found, after a period of long-term surveillance, a publicly facing website for Company X with a known significant and exploitable bug. In fact, the bug had been known about publicly for two full months at the time. The bug was then, in fact, exploited, allowing the attackers unfettered access to internal systems. This unfettered access lasted for a staggering 76 days, during which time the attackers exfiltrated massive amounts of sensitive, personally identifiable information on, let's just call it the population at large. Later reporting found that the company in question had known about the bug, but had never patched it. Once inside the systems, the attackers were able to move laterally through just a calamity of errors, poorly secured networks, installing reverse shells that gave them persistent access, working to delete any evidence that they were there in the first place. Adding insult to injury to injury to insult to injury, further investigation found that the company in question, quote, learned of significant cybersecurity deficiencies in their whole organization two full years before the hack I'm talking about right now. They had done nothing about, quote, more than 1,000 externally facing vulnerabilities rated as critical, high, or medium, unquote.
[00:08:41.20]
Chris: So first question, holy shit. Second question, do you recognize, just from this information, either one of these hacks?
[00:08:52.29]
Ned: I don't. And what I'm going to say is, in both cases, this is such a common set of circumstances. The way that you've stripped some of the identifying information out, this could apply to so many different hacking cases that have occurred over the last 40 years.
[00:09:12.16]
Chris: You're 100% right on both of those points. I took these two out of a lot, because when I make it clear to you what they are, they're super famous. But you're absolutely right. I could have used a hundred different other examples. Let's go ahead and go into these with a little bit more detail.
[00:09:31.02]
Ned: Okay.
[00:09:31.29]
Chris: Now, hack one. This is the one that happened in 1986, where a university was breached and a single entry point was used to try to steal state secrets. This was the Lawrence Berkeley National Laboratory, which is where it was discovered. But after a long-term investigation, it turned out that this hack included government systems in a number of places, including the CIA and a little company called MITRE that you might have heard of. Now, you have got to remember the timeline. If you remember our history of computers series: in 1986, computers had only been connected to each other for 15 minutes at this point.
[00:10:21.25]
Ned: Yes. This is before the World Wide Web. This is before the internet was really a thing. So effectively, most systems were dial-up, though a few had dedicated leased lines connecting them to each other.
[00:10:34.19]
Chris: Correct. The thing about it was, the things that were running on these computers were installed and running on the theory of, We don't need to do security because nobody can get to it. And even if they could, no one would try to attack anything. This is a university.
[00:10:51.03]
Ned: Yeah, I mean, the core assumption there was that a lot of these systems were put in place, and applications written, with the assumption that you needed physical access to the machine, or to a terminal connected directly to it, to access any of this information. If there was any security, it was physical security rather than any network-based security.
[00:11:14.05]
Chris: If you want more information about that, in our history of computer stuff, we talked about it. We talked about why SMTP was such a nightmare security-wise because it never had any of those considerations. Of course, we talked in-depth about these hacking techniques when we went through the documentary Hackers.
[00:11:30.10]
Ned: Yes.
[00:11:31.17]
Chris: But for this particular case, there is a pretty wild Nova episode that was recorded in 1990, so not that long after the actual events happened. The name of it is The KGB, the Computer, and Me. What's wild about it is that the actual people involved reenact a lot of the scenes from the investigation.
[00:11:55.23]
Ned: That's awesome.
[00:11:57.17]
Chris: And you might think, these are not actors, this is going to be terrible. It is, in fact, awesome.
[00:12:04.15]
Ned: I love it.
[00:12:05.23]
Chris: The whole situation was also documented in a book that was pretty famous at the time, and if you're deep into computer nerdery, you should have read it already, called The Cuckoo's Egg. The book itself is an interesting story. It's more about the investigation of the cybercrime than it is about the cybersecurity parts of it. But it was perhaps the first time that something like this was ever recorded, categorized, and made public. You don't need to read the book unless you're into that kind of thing, but I do recommend the Nova episode. It's a little dated, and it's tough to find a good copy. The one I found online was 480p. There might be a better one; that was the one I found. But I do promise you that, if nothing else, this documentary includes some of the angriest yo-yoing that I have ever seen.
[00:12:59.15]
Ned: No further questions. I want to find out for myself.
[00:13:03.29]
Chris: A couple more details about hack number one. As we hinted at above, the hack involved the problem of inherent trust. You just assumed nobody was going to mess with it, so you didn't try to secure it. The absolute opposite of zero trust. How did they do it? GNU Emacs. Guh-noo or noo, I don't know how people say it. Let's just say Emacs.
[00:13:27.22]
Ned: Schadenfreund.
[00:13:28.06]
Chris: GNU Emacs existed at the time. It was, and is, an editing software for text files in Unix. Some purists will say it's the only editing software for text files in Unix.
[00:13:43.20]
Ned: It's not, but okay.
[00:13:45.15]
Chris: Ned's going to start another fight on LinkedIn. The version of Emacs available in 1986 allowed users to copy a file anywhere in the system without it asking for permission. Now, this is problematic because certain directories have inherent permissions. The hackers eventually realized this and rewrote a system command that was used to schedule maintenance jobs. It was an executable, or a script, or something along those lines, and the rewritten version gave their regular logins, the hackers' logins, superuser permissions. Since this was a scheduled maintenance job, all they had to do was wait five minutes. Their cracked version of that program, which they copied in using Emacs, executed, and boom, root access. It's not often I say it was really that simple, but it was really that simple.
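The pattern generalizes: if an unprivileged user can replace any file that a privileged scheduler will later execute, that user effectively has root. A minimal, hypothetical Python sketch of auditing for that class of problem (the directory paths are modern Linux cron examples, not the 1986 systems):

```python
"""A minimal sketch (not the actual 1986 tooling) of auditing for this class
of bug: any job file that a privileged scheduler will execute, but that a
non-root user could replace, is effectively a root shell waiting to happen.
The directory paths are modern Linux cron examples, purely for illustration."""
import os
import stat

# Directories whose contents get executed by a privileged scheduler (examples).
JOB_DIRS = ["/etc/cron.d", "/etc/cron.daily", "/etc/cron.hourly"]

def risky_entries(path):
    """Yield job files that are group- or world-writable, or not owned by root."""
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if not os.path.isfile(full):
            continue
        st = os.stat(full)
        writable_by_others = st.st_mode & (stat.S_IWGRP | stat.S_IWOTH)
        if writable_by_others or st.st_uid != 0:
            yield full, oct(st.st_mode & 0o777), st.st_uid

if __name__ == "__main__":
    for d in JOB_DIRS:
        if not os.path.isdir(d):
            continue
        for full, mode, uid in risky_entries(d):
            print(f"WARNING: {full} (mode {mode}, uid {uid}) could be replaced "
                  "and would then run with elevated privileges")
```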
[00:14:37.14]
Ned: Yeah, that's a wild thing that Emacs was able to just write an arbitrary file wherever you told it. Right.
[00:14:46.15]
Chris: The problem with it was that it wasn't running in a setuid situation; it was running with root permissions, the actual editor itself.
[00:14:55.22]
Ned: I was going to say, because normally when you launch a program, it only has the same permissions that you do.
[00:15:01.05]
Chris: Correct. This is what we in the business call a learning opportunity.
[00:15:05.28]
Ned: I think things were probably learned.
[00:15:10.01]
Chris: Now, the Emacs issue is the only one I want to highlight. There were more, because, like I said, there were a bunch of systems involved in this that the hackers were able to find. Other applications had similar problems. The thing about a lot of them was that, even back then, in 1986, people recognized the issues and issued patches. The administrators on the other affected systems had not applied those patches. Had they actually done that, pulling a lot of this data out of the systems wouldn't have been possible. Just keep that one in the back of your head.
[00:15:48.04]
Ned: Yeah. All right. Patching is important.
[00:15:50.05]
Chris: There were other issues. The fact that the systems just inherently trusted each other for connections meant that a login was not questioned nearly as much when it came from a system that was inside of the arbitrary zone of trust. Some of the systems were connected via modems, like you alluded to. The telephone lines at the time worked with manual switching in the call centers. Which meant that if you tried to trace a phone call, you would literally have to have an engineer go trace the call by hand.
[00:16:23.22]
Ned: Wow.
[00:16:24.13]
Chris: That was not a euphemism. There's a reason they called it that. Also at the time, and this is actually still possible if you seriously misconfigure your computer, if a user deleted a password in the password file, for whatever reason, let's just say I log in and delete your password, then I could just go log in remotely with the username Ned and no password at all. I don't even need to crack your password. I can just delete it.
[00:16:54.24]
Ned: Okay. That seems bad.
[00:16:58.04]
Chris: The attackers, they were creative. They also took advantage of the fact that back then you didn't have salted passwords, so they could run dictionary attacks, et cetera. I mean, it's not bad for 1986. There was a lot wrong. I think at this point, that's fairly obvious. Again, everything had to do with the computers being designed to do whatever it was they did. They were designed for work. They were not designed for security. Many of these things ended up being fixed: the password thing, unhashed passwords are not a thing anymore, logging in without a password is not allowed by default, et cetera. Like I said, it was a learning opportunity. Some people have argued that this incident, and especially the book popularizing it to the extent that it did, kickstarted the modern method of standardized vulnerability tracking. Because this is the US government; we don't stand for being made fools of.
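For a sense of why unsalted passwords made dictionary attacks so cheap, here is a small, hypothetical Python sketch. The wordlist and the use of SHA-256 are illustrative stand-ins (1986 Unix used crypt(3)), but the principle is the same: without a salt, one precomputed table cracks every user.

```python
"""Minimal sketch of why unsalted password hashes fall to dictionary attacks.
The wordlist and the use of SHA-256 are illustrative stand-ins; 1986 Unix used
crypt(3), but the principle is the same."""
import hashlib
import os

WORDLIST = ["password", "letmein", "dragon", "hunter2"]  # toy dictionary

def unsalted(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

def salted(pw: str, salt: bytes) -> str:
    return hashlib.sha256(salt + pw.encode()).hexdigest()

# Attacker precomputes one lookup table and reuses it against every stolen hash.
precomputed = {unsalted(w): w for w in WORDLIST}
stolen = unsalted("hunter2")
print("unsalted hash cracked:", precomputed.get(stolen))            # -> hunter2

# With a per-user salt, the precomputed table is useless: the attacker has to
# re-hash the entire wordlist for every single user.
salt = os.urandom(16)
stolen_salted = salted("hunter2", salt)
print("salted hash found in table?", stolen_salted in precomputed)  # -> False
```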
[00:17:56.17]
Ned: Don't watch the news. Yeah, we'll go with that. Okay.
[00:18:00.25]
Chris: So that's hack one. Hack number two. And I'm actually, I'm surprised you didn't get this from the 76 days thing. It was Equifax.
[00:18:11.20]
Ned: I remember when the Equifax hack happened. Didn't they lose the Social Security numbers of something like 200 million people?
[00:18:23.21]
Chris: 145 and a half million was the official number.
[00:18:26.14]
Ned: Good Lord. I mean, that is basically half of the US population.
[00:18:30.18]
Chris: Pretty much.
[00:18:31.22]
Ned: And we all got 30 days of free credit monitoring.
[00:18:35.13]
Chris: We also got $14, Ned.
[00:18:40.07]
Ned: I know what I spent my $14 on.
[00:18:43.21]
Chris: The sad thing about this is, for the size of this company, it is actually inexcusable that this happened. For any company, it's inexcusable. But for a company that's that big and is that important and is that too-big-to-fail? Come on. The thing about it is, in 2015, they actually did try to be responsible. They did an internal audit that highlighted the extent of their security issues, set up a roadmap, et cetera, and determined that they had a, quote, lack of complete inventory of the company's IT assets, unquote, and that, quote, current patch and configuration management controls are not adequately designed, unquote. Those were some of the nicer things that their report said. So that's bad. You know what's worse?
[00:19:33.24]
Ned: What's worse?
[00:19:34.16]
Chris: Two years later, they had done nothing to follow up that report.
[00:19:38.01]
Ned: That is worse.
[00:19:39.22]
Chris: They still had no formalized method of validating patch management. They were basically using, and this is literally what the internal auditors called it, the honor system to confirm that patches were being deployed. This basically meant some random web team would tell security that system X was patched, and then the system would be scanned by the security team. But guess what happens if you don't tell the security team about system X?
[00:20:13.05]
Ned: They don't know to scan for it? Correct. That is a problem.
[00:20:19.02]
Chris: So this all led to them being in the center of a storm starting in March of 2017. There was an issue with Apache Struts, which is a web application framework. It's not important what it does. What's important is that it's public-facing. What else is important is that a flaw was noticed and it was publicized. NIST let everyone on Earth, including Equifax's cybersecurity team, know all about it. This particular bug had a CVSS score of 10, which is the worst possible score. It was also marked almost immediately as known exploitable in the wild.
[00:21:05.07]
Ned: That's even worse.
[00:21:07.05]
Chris: Yeah. We've got publicly facing, we've got CVSS 10, and we have known exploitable. This is March. Yes. So according to the patch management policy that existed at Equifax at the time, the one that we said was optional, they should have patched every affected system within 48 hours. What they did instead, and I feel like this is variations on a theme, was nothing. The patch was only fully applied in August, which was already far too late.
[00:21:44.09]
Ned: Yes.
[00:21:46.19]
Chris: The reason for this was, quote, Equifax was unable to detect a vulnerable version of Apache Struts on the system due to lack of a comprehensive IT asset inventory, unquote. The company had been breached on May 13th of 2017, two months after the vulnerability was announced.
[00:22:12.18]
Ned: So had they actually followed their own internal patching guidelines, they would have been protected when May rolled around and someone attempted the hack.
[00:22:21.28]
Chris: Correct.
[00:22:23.29]
Ned: But they didn't do that.
[00:22:25.08]
Chris: Making it worse, worse, worse, worse, worse. I don't know how many worses we're even at at this point. The breach, which, like I said, happened two months after the patch was released and not deployed, was not detected for another 76 days, and the only reason they did find it was due to an SSL certificate being replaced. That certificate had expired in November of the previous year. They finally updated the cert, they paid more attention to the system, they noticed the issues, and they were able to trace C2 traffic going back to China. Wow. If they had not taken the time to update that certificate, they might not have noticed this issue at all.
[00:23:16.18]
Ned: How do you not notice that a certificate has been expired for nine months?
[00:23:23.01]
Chris: You're asking some great questions.
[00:23:24.21]
Ned: That should break the functionality of that site.
[00:23:29.03]
Chris: Well, this is an interesting point. Back then, you would still get the alert, this certificate is expired, but you could just click okay.
[00:23:37.12]
Ned: You could. I mean, yes.
[00:23:38.24]
Chris: Which I suspect is what happened. These days, you can still do it, but the browser makes it a lot harder.
[00:23:44.29]
Ned: As it should.
[00:23:45.28]
Chris: Yeah. This is one of those issues... We could talk about this in a different episode. Sometimes security is in there to help you help yourself. Help me help you help hands across the world. I don't know where we're going with this. Anyway, that's the long and short of it for Equifax. If you're curious, there are actually a multitude of final reports on the issue, considering, again, the size and scale and the fact that half of America was affected. We'll include a link to one that was done by a subcommittee of the US Senate in the show notes, which is where I got some of the more damning quotes. It's 100 something pages long. You don't have to read the whole thing. Definitely read the executive summary because damn.
[00:24:33.29]
Ned: Nice.
[00:24:36.29]
Chris: What is the point of this? Where are we at now? The problem, again, is this could have been any company. Equifax is by far the most famous example. We could have picked Target. We could have picked, I don't know, pick your favorite.
[00:24:55.15]
Ned: Microsoft.
[00:24:56.15]
Chris: That's a good one. Companies have had, for years, access to all of this information about vulnerabilities, and far too often they do exactly what Equifax did, which is nothing. But let's take a minute to pause and talk about exactly what it is we're talking about when we say vulnerabilities in this way. There's a bunch of systems involved in giving us the information that we know. The CVE system is actually a really, really good idea, and you would be surprised how many people have input into it. The system was created in September of 1999 by MITRE, with the express goal of creating a standard to allow companies and government agencies to share information and to define, categorize, and track vulnerabilities over time. One of the biggest reasons for this is so that if security teams at Cisco, SentinelOne, and Microsoft all find the same bug, it only gets referenced in one place, one time. That's just a simplicity thing.
[00:26:05.08]
Ned: If you think you found a new bug, you can go check the CVEs, and if somebody's already discovered it, then you can just say, Hey, I found this, too.
[00:26:13.03]
Chris: The other thing is that it encourages all of these companies that would otherwise be considered competitors to work together on a common goal, which is: don't let hackers break into anything.
[00:26:24.20]
Ned: Right.
[00:26:25.11]
Chris: Seems like a pretty decent common goal.
[00:26:27.25]
Ned: Yeah, I'm okay with it.
[00:26:30.06]
Chris: So CVE system, September 1999. In case anybody was curious, there isn't really such a thing as a first CVE because when they released the system, all 321 that they had were released at the same time. That's not fun.
[00:26:46.27]
Ned: No, there should be a first.
[00:26:48.10]
Chris: Right. Then release the other 320. Put some joy into your life, MITRE. Okay, so that's CVE. CVSS was added to the mix in 2005. The point of that is to tell users how bad the CVE is. What is the potential for damage? That's the score from 1 to 10 that we talked about before. Other scores have been worked on as well, publicly available ones in particular. If you caught the T-Now show this week, you'll know about Known Exploited Vulnerabilities, or the KEV list, which tells you if an exploit has actually been observed in the wild.
[00:27:32.08]
Ned: Because you can have a vulnerability discovered and listed as a CVE, and it can have a very high score, but it could be almost impossible to actually effectively exploit it. That might mean that it's not critical for you to patch it because it's on maybe an internal system that's tightly controlled or just no one's actually written a way to take advantage of it yet. But if someone did, then it could be a major problem.
[00:28:01.11]
Chris: Then there's another one called likely exploited, which, if we don't have evidence that it was exploited but there's a chance based on statistics, will tilt the scale a little bit. Then, of course, companies: if you have a vulnerability management system, you can mark your servers as tier one, tier two, tier three, tier four, and balance the scale toward the things that are the most essential, et cetera. But it brings us back to the main point, which is: what difference does any of it make at all if the information isn't used? As an industry, security folks talk a lot about vulnerabilities and all of the things above. I think one of the reasons is that they're really easy to discuss and be alarmed about, especially the fun ones that have CVSS scores of 9 or 10. But the vast majority of issues that actually get people in trouble and get companies breached are caused not by zero-day attacks. They are caused by human error. Human error in categories like phishing attacks or otherwise giving up your credentials to bad actors, and misconfigurations of security systems or features. Who all remembers the S3 bucket Wall of Shame, where people would set up cloud storage that was just 100% world-readable for anyone?
[00:29:29.28]
Chris: From anywhere?
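To make the scoring and tiering idea concrete, here is a rough, hypothetical Python sketch that combines a CVSS score, a KEV-style known-exploited flag, and an internal asset tier into a patch priority. The weights, tiers, and SLA hours are invented for illustration; they are not any standard formula.

```python
"""Hypothetical patch-prioritization sketch: combine a CVSS score, a KEV-style
known-exploited flag, and an internal asset tier into one priority number.
The weights, tiers, and SLA hours are invented for illustration; they are not
a standard formula."""
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str
    cvss: float            # CVSS base score, 0.0 to 10.0
    known_exploited: bool  # e.g. listed on a KEV-style catalog
    asset_tier: int        # 1 = internet-facing/critical ... 4 = lab/dev

def priority(f: Finding) -> float:
    score = f.cvss
    if f.known_exploited:
        score += 5.0                   # exploitation in the wild dominates
    score += (4 - f.asset_tier) * 1.5  # more exposed asset, higher priority
    return score

def sla_hours(f: Finding) -> int:
    # Illustrative policy: exploited bug on a public-facing system gets 48 hours.
    if f.known_exploited and f.asset_tier == 1:
        return 48
    return 72 if f.cvss >= 9.0 else 30 * 24

findings = [
    Finding("web-01", "CVE-2017-5638", 10.0, True, 1),     # the 2017 Struts bug
    Finding("lab-db-07", "CVE-0000-0000", 9.1, False, 4),  # made-up internal finding
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.host} {f.cve}: priority {priority(f):.1f}, patch within {sla_hours(f)} hours")
```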
[00:29:31.11]
Ned: Yeah, it's staggering how many applications, services, pieces of software have completely open defaults. Right. And people just accept those open defaults and don't try to, I don't know, change them in any way. I know this was the case for some different databases. I feel like Postgres. Early Postgres was an example of that. If you just spun one up, the username and password was like root root, and it uses a standard port. People would stand up Postgres databases, open to the internet, and then would immediately get breached because they didn't change any of the defaults.
[00:30:15.20]
Chris: Oracle's scott/tiger is another one. The username was scott and the password was tiger. That was in place, I think, until Oracle 8.
[00:30:25.14]
Ned: And let's not forget all the wireless routers out there that still have a blank username and a password of admin. Linksys.
[00:30:35.29]
Chris: I do think that was a funny thing, too, because we saw the Verizons and the Comcasts of the world stop doing that. And instead of that, you would have a unique password stapled to the bottom of the router.
[00:30:48.19]
Ned: Yes.
[00:30:48.28]
Chris: Which is 12 characters long, numbers and letters, which is, admittedly, more secure. Unless somebody goes over and just flips over your router and takes a fucking picture of it.
[00:31:02.23]
Ned: That's true. But if you have physical access to the router, it's already game over. I get that. It was better than what they were doing before, which is nothing.
[00:31:13.11]
Chris: The argument that I would make is that a lot of what we do, or what we need to do, to be responsible in security is not fun and not sexy and doesn't have things like a CVSS score of 10. It really is the grunt work, the boring stuff: setting up policy and procedure, and then, most importantly, making sure that it's being followed. Inventory management, I would argue, is a crucial part of security. Things like ITSM systems and configuration management databases. You say that in a sentence and everyone falls the fuck asleep, which is fair. It's very boring, but it's very necessary. If you have a system that is not a record in your inventory system, you need to make sure that it doesn't get on your network, period. If it is in the system, then it needs to be tracked and scanned and updated and patched regularly. If that doesn't happen, then it needs to be knocked off the network, period.
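As a toy illustration of "if it's not in inventory, it doesn't get on the network," here is a hedged Python sketch that diffs a network scan against a CMDB export. The file names, CSV columns, and 30-day staleness threshold are all invented for the example; a real ITSM or CMDB integration would use that product's own API.

```python
"""Toy sketch: reconcile hosts seen on the network against a CMDB export.
File names, CSV columns, and the staleness threshold are invented for
illustration; a real ITSM/CMDB integration would use its own API."""
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)  # arbitrary example threshold

def load_cmdb(path):
    """Return {hostname: last_patched_datetime} from a hypothetical CSV export."""
    records = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            records[row["hostname"]] = datetime.fromisoformat(row["last_patched"])
    return records

def load_scan(path):
    """Return the set of hostnames observed by a hypothetical network scanner."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def reconcile(cmdb, seen, now=None):
    now = now or datetime.now()
    unknown = seen - cmdb.keys()   # on the network, but not in the inventory
    unseen = cmdb.keys() - seen    # in the inventory, never showed up in the scan
    stale = {h for h, ts in cmdb.items() if now - ts > STALE_AFTER}
    return unknown, unseen, stale

if __name__ == "__main__":
    cmdb = load_cmdb("cmdb_export.csv")    # hypothetical file: hostname,last_patched
    seen = load_scan("network_scan.txt")   # hypothetical file: one hostname per line
    unknown, unseen, stale = reconcile(cmdb, seen)
    for h in sorted(unknown):
        print(f"BLOCK/INVESTIGATE: {h} is on the network but not in the inventory")
    for h in sorted(stale):
        print(f"PATCH OR QUARANTINE: {h} has not been patched in over 30 days")
```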
[00:32:15.27]
Ned: Yeah. There is the challenge of how ephemeral services and application components are today. Whether you're using a cloud service where anybody... not anybody, but where you can spin up a bunch of Lambda functions and create a whole bunch of containers that run on demand, or you're running on-prem a Kubernetes cluster that's spawning workloads and tearing them down on a regular basis, it does make inventorying a fraught process. Because do you inventory a container that has an average lifetime of 45 seconds? Probably not.
[00:32:54.11]
Chris: Yeah, I mean, I would still argue that, A, that's an edge case, and B, you can do things like small-scale microsegmentation that gives you a significant amount of security for something that doesn't exist that long. But I also think that that's 10% of the world that has to deal with those things or is working at that level, that much of a bleeding edge, when it comes to their infrastructure.
[00:33:16.17]
Ned: I'd just like to point out that what actually ends up in your inventory is also a decision you need to make: where do I need to draw the security boundaries, and am I aware of what's inside and outside that boundary? Right.
[00:33:32.19]
Chris: Honestly, it all needs to be considered in the context of risk management. The risk of what you're doing, and what you're going to do about that risk. Getting hit by a zero-day on day zero is an extraordinarily low risk. But if you let that system sit unpatched for, I don't know, two months, hypothetically, in 2017, all of a sudden that risk is not zero. It would have been a non-issue if it was patched in 48 hours. We would never have talked about Equifax. Honestly, we probably wouldn't even know who Equifax is.
[00:34:13.02]
Ned: I still don't remember half the time.
[00:34:16.26]
Chris: These types of policies, that has to be part of how you think about security in terms of the human element. Is it the same thing as configuration management? Is it the same thing as responsible policy? I mean, yeah. Because if you have blanks where you don't even know to look, then you're never going to be in a situation where you can patch everything that needs to be patched. Then you'll end up on the news.
[00:34:46.23]
Ned: You need that level of situational awareness and what your environment actually looks like. Ideally, as a security person, you are consulted early on enough in the process of new things being rolled out, new architectures being built, that you can help guide them in a way that adheres to your policies and also makes it easy to maintain over time. Right.
[00:35:14.06]
Chris: I mean, do you wear a seatbelt when you're in your car? And if you're saying no, don't be an idiot. Wear your goddamn seatbelt.
[00:35:21.24]
Ned: But I'm free not to. My freedom.
[00:35:25.28]
Chris: You're free to go through the windshield at 45 miles an hour. Let me know how that works out for you.
[00:35:35.11]
Ned: Indeed. I have encountered... it's rare now, but I used to encounter a lot of people that were anti-seatbelt for some reason. It was usually something stupid and philosophical. I was like, Shut up and buckle up, or get out of my damn car. Anyway, not to get all dad on the situation. I guess the other part of the equation is that security seems like a cost center on top of a cost center. It's already a cost center, and now securing it is yet another cost center on the thing that's already costing me money. For some businesses, it is a conscious decision not to prioritize security, because they've done the risk-reward analysis and said, You know what? If we get breached, fuck it. Who cares? What blowback are we actually going to get from our customers because we lost all their data? And sadly, the answer is very little. True. I mean, it's not a moral choice. It's not ethically sound. But if you are purely driven by the risk and the reward, sometimes security doesn't make sense.
[00:36:54.14]
Chris: There is an ironic and very dark joke in ISO 27001 circles: if you're in trouble and you have to pass an audit today, all you have to do is click risk status: accepted and just move on with your life.
[00:37:07.08]
Ned: I guess that's true.
[00:37:13.20]
Chris: But don't do that. TL;DR, don't do that. Yeah, please don't.
[00:37:20.07]
Ned: There are ethical implications. I think choosing not to engage with security is a short-term strategy. Maybe it will work out for you, but in the long term, it's something that will eventually drag you down. Because not only do you get breached, but not patching your systems makes them less effective. Every hack requires additional manpower to fix. Not having automated systems to take care of things means that you're constantly in a fire drill whenever something goes wrong. So yes, maybe it works out well in the short term, which is all the CFO can think about because they're watching the stock price. But for the people on the ground, not embracing security is a failing long-term strategy. I think we've seen over time that most companies who want to be highly effective and high performing also have good automation and good security practices in place.
[00:38:25.22]
Chris: It's one of those things where it might seem like it's a challenge. It is. I'm not saying this is easy. But the benefits, like you said: first of all, you don't have to run around like your hair's on fire every time you have an incident, because you have a plan for that. You're going to have fewer incidents in general. All of your operations, daily, weekly, monthly, are going to be easier. It's the rare case of a win-win-win-win-win, as long as we do the hard work upfront.
[00:38:56.15]
Ned: But I don't want to.
[00:39:00.26]
Chris: That's a fair argument. I'm going to turn off all this vulnerability management stuff.
[00:39:04.14]
Ned: Okay, excellent. Hey, thanks for listening or something. I guess you found it worthwhile enough if you made it all the way to the end. So congratulations to you, friend. You accomplished something today. Now you can go sit on the couch, turn off all your audit logs, and play something on Xbox. You've earned it. You can find more about the show by visiting our LinkedIn page, just search Chaos Lever, or go to our website, chaoslever.com, where you'll find show notes, blog posts, and general tomfoolery. We'll be back next week to see what fresh hell is upon us. Ta-ta for now.
[00:39:40.14]
Chris: There's a YouTube video that I will send in your direction where it's an old recording from local news when this went into effect, and people were basically saying that stopping them from being allowed to drink Budweiser while driving makes the law un-American.
[00:39:57.08]
Ned: You can't get a DUI for being drunk on a horse.
[00:40:03.16]
Chris: Rift the Horse is drunk.