OpenAI’s Freedom Salad and the Two-Page Apocalypse | Chaos Lever

Biden’s executive order on AI safety was 111 pages of not-terrible ideas like protecting privacy and creating AI guidelines. Naturally, big tech was *not* a fan. Because when you ask Meta and Google to behave responsibly, they act like you just insulted their mom.
Meanwhile in Europe: The EU held its AI Action Summit in Paris, making it clear they’re not messing around with AI governance. Public interest, worker protection, and global cooperation were on the table. Investors dangled €150B like a carrot—if only the EU would be a little less…protective of its citizens. 🙄
🧠 Then came Trump's executive order, aka the “let’s delete all the thoughtful stuff” memo. A whole two pages long, it replaced nuance with “make America #1 in AI because democracy and stuff.” Or, more accurately: “drill, baby, drill” but for GPUs.
📄 Enter OpenAI’s response to that call for action. On the surface, it’s just another document—but wow, the vibes are chaotic. There’s flag-waving, fear-mongering about China, and a healthy dose of “we want your data and your blessings.” Also, violently incoherent sentences that barely represent English.
📉 What *wasn’t* in OpenAI’s proposal? Anything about ethics, safety, upskilling displaced workers, or protecting vulnerable communities. But don’t worry—they did include buzzwords, bad logic, and more patriotic tech posturing than a Fourth of July parade.
LINKS:
🔗 Executive order 14110: https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
🔗 OpenAI’s Response to the RFI: https://cdn.openai.com/global-affairs/ostp-rfi/ec680b75-d539-4653-b297-8bcf6e5f7686/openai-response-ostp-nsf-rfi-notice-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.pdf
🔗 The original RFI: https://www.federalregister.gov/documents/2025/02/06/2025-02305/request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan
🔗 Trump's AI EO: https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence
🔗 Forbes Article: https://www.forbes.com/sites/dianaspehar/2025/02/10/paris-ai-summit-2025-5-critical-themes-shaping-global-ai-policy/
[00:00:00.07]
Chris: I simply consider eating and drinking to be very active pastimes. If there's not spaghetti on the ceiling, you're doing spaghetti wrong.
[00:00:12.00]
Ned: Makes a lot more sense than throwing it at the wall.
[00:00:14.17]
Chris: Well, that's amateur hour. I mean, come on. Have some respect for yourself.
[00:00:18.29]
Ned: What are we even doing here?
[00:00:20.04]
Chris: Continuous improvement.
[00:00:23.21]
Ned: Going to put that pasta through a pipeline and inject it to the ceiling. Up and to the right, as they say. That's what they're talking about, right? Pasta?
[00:00:34.03]
Chris: I mean, the expression is back and to the left, but yes, other than that, you nailed it.
[00:00:38.19]
Ned: Yeah, I think Fat Joe said it best. He said, Lean back. The fact that I leaned back makes this really good for anybody who's listening to the podcast.
[00:00:52.10]
Chris: As usual.
[00:00:59.10]
Ned: Hello, Alleged Human, and welcome to the Chaos Lever podcast. My name is Ned, and I'm definitely not a robot. I am a real human person who enjoys making pasta from scratch and feeding it to my brood of small minions. They don't throw that pasta on the ceiling because that would be disrespectful. With me is Chris, who is also on the ceiling, possibly dancing?
[00:01:29.23]
Chris: You beat me to it, bastard. Now I don't know what to say.
[00:01:33.10]
Ned: Yes. You say, You're not worthy, and you do the little bow thing, and then Schwing, because it's 1995.
[00:01:43.23]
Chris: You say, Tomato. Let's call the whole thing off.
[00:01:47.06]
Ned: I say Wayne's World. Party time! Excellent! I know it's not a serious movie, but Bill and Ted's Excellent Adventure really did have some important lessons to teach.
[00:02:07.15]
Chris: Be excellent to each other?
[00:02:09.22]
Ned: Party on, dude.
[00:02:12.19]
Chris: Don't go swimming 30 minutes after eating.
[00:02:15.10]
Ned: Of course.
[00:02:17.18]
Chris: Do your taxes in a reasonable time. Pay your taxes at all.
[00:02:23.03]
Ned: Yeah. I mean, really important stuff. Anyway... man, I put on my sassy pants for this week. Not going to lie. Trying to channel my inner Ed Zitron, and I think I did it.
[00:02:40.02]
Chris: I'm just wondering if I'm even needed for this.
[00:02:42.13]
Ned: I mean, I need someone to yell at, and that's going to be you.
[00:02:47.07]
Chris: Oh, well.
[00:02:49.23]
Ned: If I ask for your input, I don't want you to actually respond.
[00:02:53.18]
Chris: Fair.
[00:02:54.04]
Ned: Just so we're clear. I entitled this one OpenAI, where the AI stands for Assholes Incorporated. I really think that works on so many levels, don't you?
[00:03:08.14]
Chris: It's the opposite.
[00:03:09.18]
Ned: I didn't want you to respond, Chris. I thought we were clear.
[00:03:12.04]
Chris: I'm sorry, sir. Sorry, sir.
[00:03:14.12]
Ned: No, I'm kidding, of course. You can respond. I'll just mute you in post. Okay. OpenAI published a response to a request for information on AI action plans for the next few years. We'll get into the meat of the response and why it infuriated me so much that... flames. But we did that last week. Never mind. But first, I feel like we need a little context on where this is all coming from. On November 1, 2023, President Biden signed an executive order called Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which laid down eight principles that should guide US policy in the development and use of AI. I'm not going to read all eight principles. You can look them up. There's a nice little infographic that they put together. But the important ones are stuff like ensuring the safety and security of AI technology, protecting consumers, patients, passengers, and students, and protecting privacy. Not a terrible list of things you might want to consider when adopting or evaluating artificial intelligence. The federal government is a massive institution that can be a guidepost on thoughtful AI adoption. Now, I want to note that I did not read the entirety of the executive order because, Chris, it's 111 pages long.
[00:05:01.17]
Chris: It's a lot.
[00:05:02.29]
Ned: It's slightly longer than a normal executive order from what I can tell. I believe this might be the record holder.
[00:05:10.28]
Chris: More like executive bore-der, am I right?
[00:05:13.11]
Ned: Aha, well played. But what I did read from it, the first few sections, and then browsing here and there, it made sense to me. AI is coming, we know that. It could have an unprecedented impact on our society. Maybe we should proceed cautiously, ensure it doesn't negatively impact disadvantaged people. Maybe stop it from invading our privacy and help it lift workers up instead of simply displacing them. I'm okay with this, and so was the public. The general reaction was positive. Across party lines, the overall approval rating was 69%. Yes. Advocacy groups were impressed at the thoroughness of the executive order and the actionable items that did things like create chief AI officer positions, direct NIST to develop new AI guidelines, and have the DHS create AI cybersecurity guidelines through CISA. You know who didn't like this? You want to take a guess who was not so happy about this?
[00:06:27.13]
Chris: Big oil.
[00:06:28.29]
Ned: Oh, probably not. They probably didn't like it either. The big tech firms, they fucking hated this thing. They said, It's going to stifle innovation and prevent new competitors from entering the marketplace. It creates too much red tape and regulation. That's how they sound when they talk.
[00:06:48.03]
Chris: So it is big oil?
[00:06:50.04]
Ned: I mean, like, yeah. It's the same thing that they've said for years and years whenever the US tried to restrict drilling in any way. There's a lot of parallels between the oil boom of, I guess, what would you place that, the late 1800s, early 1900s, and the AI boom that's happening now.
[00:07:16.13]
Chris: No, I think there was a documentary made about that a few years ago. Something about milkshakes, can't remember the details.
[00:07:24.05]
Ned: No. The Chamber of Commerce, which is not a governmental institution, despite its name, was fiercely against the executive order, and it counts members like Meta, Google, and Microsoft among its ranks. Biden signed this, and companies were pissed. If you wondered why these big tech companies all got right behind Mr. Trump, I mean, this isn't the only reason, but it's a reason.
[00:08:00.15]
Chris: Yeah.
[00:08:03.07]
Ned: Fast forward to February of this year. The EU, the European Union, has been far more aggressive in policing AI than America, and just enacting policies and standards in general to protect consumers' privacy, way ahead of the curve. They've sought to restrict what data AI models can vacuum up and created an opt-in rather than an opt-out model for data collection. I am generally in favor of these things. The AI Action Summit was held in Paris in February, and it's the third such global summit meant to address the risks, challenges, and opportunities presented by AI's advancement. The summit centered around five themes: public interest AI, the future of work, innovation and culture, trust in AI, and global AI governance. The future of work theme was meant to address the impact of AI on low- and high-skill workers and the need for reskilling of those impacted or displaced by AI. This is a thing that is definitely going to happen. People's jobs are going to disappear. Maybe we should help them.
[00:09:21.25]
Chris: Not really sure why you're putting that in the future tense.
[00:09:26.02]
Ned: People's jobs are disappearing because of AI. There you go. All right. What's interesting is this is one of the few times where automation and AI are actually going to impact what people typically call white-collar work. Generally, automation has been mostly impacting people who work in things like factories, people we don't care about. Now it's going to start impacting people who are skilled in technology and media. We're going to see a lot more about the fact that they're being displaced. The summit also rightly notes the lack of consensus or a coherent approach to AI governance. It is all over the place. The EU does have their own thing, but the UK is different, the US is different, Latin America, that's a lot of different countries. They don't have anything coherent going on down there, and China is doing their own thing. The summit does rightly point out that an AI arms race is happening between the US and China, and the EU is currently left out in the cold when it comes to this arms race. During the summit, France announced that they're going to spend €109 billion on AI development in the country. A group of global investors said that they would spend €150 billion on investment in AI in the EU in general, but they made it contingent on the EU adopting a, quote, more competitive framework, which I think reads as accommodating to big tech.
[00:11:13.13]
Chris: Right. Especially considering it's global investors.
[00:11:17.00]
Ned: They're like, Hey, we'll give you a whole bunch of money if you repeal all the protections that you have for your citizens.
[00:11:25.02]
Chris: We'll give you a bunch of money if you give us a bunch of money.
[00:11:28.17]
Ned: More or less.
[00:11:30.08]
Chris: This is your job?
[00:11:31.26]
Ned: It's wild. Our esteemed vice president... no, not Elon Musk, the other one, the human Cabbage Patch doll that has a generally punchable face. Vance. That's his name. He was there.
[00:11:47.25]
Chris: I think that's a bit of an insult to Cabbage Patch dolls.
[00:11:51.06]
Ned: Yeah, I apologize to my Cabbage Patch doll from when I was eight. He said in a speech during the conference, The AI future is not going to be won by hand-wringing about safety. That's really saying the quiet part out loud, isn't it?
[00:12:08.17]
Chris: He really is a hateful little shit.
[00:12:11.15]
Ned: That's another way to put it. But at least he's being honest. Fuck safety, go, go, go, was basically what he said. That sets the stage for Trump's executive order and the response by OpenAI. So, one of the many, many executive orders Trump signed in the first days of his second term effectively canceled Biden's executive order and sought to rescind any changes made by that previous executive order across all government agencies and departments. It was basically like, Hey, whatever you did, cut it out. The eight principles were replaced with a single policy: It is the policy of the United States to sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security. Or if I may paraphrase: drill, baby, drill. It's always going to come back to that oil thing.
[00:13:19.20]
Chris: I never had a doubt.
[00:13:22.16]
Ned: The executive order was a massive two pages long. Yes, you heard that correctly. Biden's was 111 pages, and it contained carefully considered and conscientious thoughts and orders. Trump's was two pages with a one-sentence policy. Need I say more?
[00:13:46.09]
Chris: I mean, I'm proud of everyone involved. They must have spent minutes on this.
[00:13:51.07]
Ned: Possibly tens of minutes. Whoa. Whoa. Clearly, I have more to say based on the length of this script. We haven't even gotten to OpenAI's response. Why does that response exist? Because part of the executive order was a call for proposals to develop an artificial intelligence action plan. OpenAI responded, and this is their action plan and how much it sucks. I am wildly optimistic about how absolutely cynical this response is. At first, I started reading it. It's not that long. It's like 15 pages. I started reading it like a normal human person would, and it sounded insane. There are sentences, that we will get to, that just... Okay. Then I started reading it like a member of the Trump White House who's deep into the MAGA cult mind frame, and it all started to make sense. Take this wild-ass sentence, for example: As America's world-leading AI sector approaches artificial general intelligence with a Chinese Communist Party determined to overtake us by 2030, the Trump administration's new AI action plan can ensure American-led AI, built on democratic principles, continues to prevail over a CCP-built autocratic, authoritarian AI. Now, I know that was a long quote.
[00:15:30.29]
Ned: But a few questions. Are we approaching AGI? Will the PRC overtake us by 2030? Are we building a democratic AI? What the hell is an autocratic, authoritarian AI?
[00:15:47.19]
Chris: So what you're saying is this is the first time you've ever seen a Gish gallop.
[00:15:53.05]
Ned: I've seen it before, but this sentence is nonsense.
[00:15:58.01]
Chris: Well, not only is it nonsense, it's so much nonsense piled on top of completely different and unrelated nonsense that your brain simply stops listening.
[00:16:06.12]
Ned: And that's precisely what happened. I'm sure a listener has glazed over halfway through that sentence. They're just like, words, words, words. Okay. Anyway, moving on from that. The response then has a set of statistics. You know what they say: lies, damn lies, et cetera. Out of the four points that it makes that include citations, two of the citations are directly from OpenAI studies, and one is a commissioned study from Samsung. Not exactly organizations that don't have a vested interest. The only statistic I trust is from Pew, and it's the one that claims that 7 in 10 parents think their kids will be worse off financially. Because they're right.
[00:17:03.11]
Chris: Meaning because of AI?
[00:17:05.23]
Ned: Just in general. It actually had nothing to do with AI.
[00:17:08.28]
Chris: I was going to say because they already have it worse than before.
[00:17:12.10]
Ned: Yes.
[00:17:14.01]
Chris: This is the first time 30-year-olds have it worse than their parents had it when they were 30 years old. Congratulations, Baby Boomers. You nailed it.
[00:17:20.28]
Ned: Totally. Just wait until the tariff wars of the late 2020s really kick into gear. The proposal makes great hay of the phrase, freedom of intelligence, which is the freedom to not use your intelligence, I guess.
[00:17:40.25]
Chris: I mean, technically, that is a freedom.
[00:17:41.29]
Ned: I guess.
[00:17:44.03]
Chris: You're allowed not to breathe.
[00:17:46.14]
Ned: The freedom of breath. Now, they do define it: By which we mean the freedom to access and benefit from AGI, protected from both autocratic powers that would take people's freedoms away and layers of laws and bureaucracy that would prevent our realizing them. Chris, my brain hurts, so can you parse any of that?
[00:18:15.12]
Chris: We totally promise no takesy-backsies that AGI is going to help, but only our AGI, because all the other ones are bad AGI's, and they're going to cancel your favorite TV shows like Netflix on a bender.
[00:18:28.03]
Ned: Okay, I mean, that makes as much sense as anything that I could get out of it. It just feels like word salad. Probably written by AI. OpenAI also tries to make the case that AI is getting faster, cheaper, and better-er-er. The scaling laws that predict these gains are incredibly precise over many orders of magnitude, so investing more in AI will continue to make it better and more capable. Fucking what? That's not how things work.
[00:19:00.12]
Chris: No. And that's also why when you watch television and you see, I don't know, Schwab has an article about some retirement plan, it says, Past performance is no indicator of future results.
[00:19:14.21]
Ned: Yeah. Just because you drew part of a curve doesn't mean you know how that curve ends. Because this isn't physics, this is people.
[00:19:25.15]
Chris: Well, and even physics, you can build as nice of a parabola as you'd like. But if there's a building in the way, it's not going to get to the ground.
[00:19:34.27]
Ned: Precisely. They followed that up with, We believe that the socioeconomic value of linearly increasing intelligence is super exponential in nature, which makes it much clearer.
[00:19:50.13]
Chris: Okay, so that's not a sentence.
[00:19:53.17]
Ned: It's a sentence in the response, but I agree, it's a sentence that means nothing, even in context. Then they followed it up with, The amount of calendar time it takes to improve an AI model keeps decreasing, which is not even remotely true, because I don't know if you've noticed, but I don't see GPT-5 anywhere.
[00:20:22.03]
Chris: Any second, though. But they've got 4.49999 and 4.9999991. Yeah.
[00:20:31.23]
Ned: In fact, evidence from many researchers points to the fact that we've hit some fundamental limits on the way we're training LLMs today, and it's actually slowing down.
[00:20:44.21]
Chris: Oh, Yeah. It's also that we've hit fundamental limits on the information that they can train on.
[00:20:50.11]
Ned: Well, we'll get to that. But then the response turns to China and DeepSeek. Not because DeepSeek is eating their lunch, but because commies are bad and scary, Chris. DeepSeek is bad because the PRC can compel them to change the model and use it to spy on people. OpenAI wants to be the ones allowed to spy on people and capriciously change their model to suit their needs.
[00:21:18.26]
Chris: I was going to say, this is always a hilarious argument to me because it's always made in bad faith, because they've apparently never heard of A-L-E-X-A. See, I almost said it out loud, but I'm learning.
[00:21:31.22]
Ned: You did, and I'm glad you saved yourself there. We're learning, slowly.
[00:21:36.04]
Chris: Unlike A-L-E-X-A. Nailed it.
[00:21:39.28]
Ned: Burn. Of course, DeepSeek is available to run locally and is far more open, which is clearly a trick by the dastardly PRC to gain global supremacy by 2030. Side note: what DeepSeek released was not everything. It is open in the sense that you can grab it and run it, but they didn't release everything about the training data and the weights and all that stuff. However, there is a really cool effort on Hugging Face right now called Open R1 that is seeking to reverse engineer R1, and they're making serious progress. Which is a lot more than you can say for OpenAI, ironically.
[00:22:24.22]
Chris: Yeah, and if people want to know more about that, we actually touched on a lot of this stuff in Ned's favorite episode of the season, which was all about, you guessed it, DeepSeek.
[00:22:34.13]
Ned: Yes. It was a good one. You wrote it, so I didn't have to. They also throw some modest shade on DeepSeek as a model: R1's reasoning capabilities, albeit impressive, are at best on par with several US models. Technically true, but also way cheaper. So there's that.
[00:22:58.03]
Chris: Also technically true. I mean, that's very much an it depends conversation.
[00:23:02.28]
Ned: Sure. Because DeepSeek is simultaneously state-subsidized, state-controlled, and freely available, the cost to its users is their privacy and security. That is especially rich coming from a tech industry that has thrived on capturing personal data through ostensibly free software and services. Like, Bro, do you even Google? By the way, we're still on page three.
[00:23:35.29]
Chris: This is a nice follow-up to the discussion we had about Blackhat last week because I feel like the vitriol is equal. Only, I will say that in my case, for Blackhat, at least that was fiction.
[00:23:50.07]
Ned: Well, I mean, if we're being honest, there's a lot of fiction in this response. Here's the crux of their argument. We need to beat China. That's it. Every component of the proposal is composed in terms of us versus them. China and the CCP are the enemies, and we need to develop American AI that is democratic. What is a democratic AI? Don't worry. Just more nonsense. The best way that they say to beat China is to copy what China is doing as much as possible. To wit, If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over. America loses, as does the success of democratic AI. They're doing a bad thing, so we need to do the same bad thing, but faster.
[00:24:56.04]
Chris: I guess we're going to ignore the part where everybody knows that we're already doing the bad thing, and they just don't want to get sued about it anymore.
[00:25:04.29]
Ned: Yeah. The folks at OpenAI are mad that IP holders won't let them read all their stuff.
[00:25:11.10]
Chris: For free.
[00:25:11.22]
Ned: For free.
[00:25:12.26]
Chris: For free.
[00:25:14.10]
Ned: They want the federal government to make private companies give OpenAI access to all of their IP, to save the world. All of this reminds me of the USSR boogeyman of the 1980s. Chris, you and I are old, so we remember this. The Soviet Union was a legitimate global superpower with lots of nuclear weapons that could eradicate all life on the planet. That was true. What we didn't know at the time, but became obvious in hindsight, is that they were also a crumbling kleptocracy that was nowhere near as powerful as the United States. But we used the hysteria to vastly inflate our military budget and implement policies that infringed on people's rights. Rahm Emanuel famously said, You never want a serious crisis to go to waste. If there is no legitimate crisis, it's best to manufacture one. He didn't say that part, but I think it was heavily implied.
[00:26:22.07]
Chris: Subtext, yeah.
[00:26:24.26]
Ned: That's what OpenAI wants to do. They want to create a crisis so that they can justify whatever actions they propose, and that's exactly what they're doing. In addition to pilfering everyone's IP, they also want to push infrastructure construction into overdrive, and that includes more than just data centers. A National Transmission Highway Act, as ambitious as the 1956 National Interstate and Defense Highways Act, to expand transmission, fiber connectivity, and natural gas pipeline construction. Fuck the environment, we need more American AI. Speed up the permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors. This could include creating categorical exclusions to the National Environmental Policy Act, such as a national security waiver, given the global competition for AI leadership. I feel like solar arrays and wind farms and nuclear reactors sound pretty good. But that second half where they're like, and we don't have to abide by any of the EPA stuff? That seems bad.
[00:27:49.03]
Chris: It seems like that might have been one of those burying-the-lede sentences.
[00:27:54.19]
Ned: They front loaded it. It's like a shit sandwich that's missing the other half of the bread.
[00:28:02.28]
Chris: Yeah, well, bread is wasteful.
[00:28:06.25]
Ned: That's OpenAI for you. You know what's not in the proposal? Anything about environmental impact, grappling with AI ethics and safety, upskilling displaced workers, protecting vulnerable populations. None of that made it into the proposal. They barely mentioned anything along those lines, which is probably because the current administration doesn't care about any of that. Why should OpenAI? Because maybe you should do something that's morally good.
[00:28:47.01]
Chris: You're being ridiculous again. What is in that coffee mug?
[00:28:50.04]
Ned: It's not coffee. A little bit of real talk. Feel free to weigh in on this, Chris. The long-term impact of AI on our society is impossible to predict. It's like trying to predict the existence of Twitter in 1998. You might get some vague outlines right, but the actual thing, what's really going to happen? You don't have a damn chance. No one knows. Do I think AI is going to transform the way we work and play? I think that's a stupid question. It's essentially asking, do I think technology is going to change the way we work and play? And that answer is obviously yes. Podcasts didn't even exist 30 years ago. Isn't that the saddest?
[00:29:41.21]
Chris: What was the point of anything, really?
[00:29:43.18]
Ned: I know. What the hell did we do in 1998, Chris? The race for AI supremacy, whatever the hell that means, has the potential for great harm. I think OpenAI and other private companies are going to be cavalier about that risk. Well, I mean, when I say going to be, they already are cavalier about that risk. Let's be clear. Google basically fired most of their AI ethics staff. So did Meta, I think. They also cleaned house. Microsoft, same thing. It's almost like they don't give a shit about the ethics of AI. So who's going to care? Normally, I'd say it would be up to the governments of the world to be the responsible adults. How's that working out?
[00:30:42.17]
Chris: Could be better?
[00:30:43.01]
Ned: Could be. Ideally, I would like them to approach it in a nuanced way and think about the actual implications of what it means when we're going through this AI gold rush. It doesn't mean abandoning AI. But it does mean approaching it with caution and concern. Right now, our government is giving OpenAI and others the green light to move fast and break things, and that gives me great concern for the future.
[00:31:16.09]
Chris: Yeah, it's not going to be good. How do you build a bomb shelter?
[00:31:20.25]
Ned: I admit this was a bit of a downer of an episode. I think that there is great potential for good that could be delivered by AI tools, just like there's great potential for good with any new technology. But there's also great potential for harm. I just feel like we need to balance that and acknowledge it. I really wish that we had governments and companies that were willing to do that balancing. Since we don't, I guess it's up to us.
[00:32:05.16]
Chris: I don't have any real good- What you're saying is bomb shelter in a different country? Ideally, one that doesn't get bombed all that much.
[00:32:16.09]
Ned: I hear Luxembourg's nice.
[00:32:20.17]
Chris: That's way too many syllables, so that'll be a bit of a problem, but I think we can work around it.
[00:32:27.01]
Ned: All right, cool. Well, thanks for listening or something. I guess you found it worthwhile enough if you made it all the way to the end. So congratulations to you, friend. You accomplished something today. Now you can go sit on the couch, fire up ChatGPT, and ask it why it sucks so hard. You've earned it. You can find more about the show by visiting our LinkedIn page. Just search Chaos Lever, or go to the website chaoslever.com, where you'll find show notes, blog posts, and general tomfoolery. We'll be back next week to see what fresh hell is upon us. Ta-ta for now.
[00:32:59.20]
Chris: I'll tell you the mistake I made at work the other day. We have this coffee machine that's fancy. It's got a lot of buttons. I pushed a button. I thought I was getting coffee. What I was getting instead was four shots of espresso. I'm going to go ahead and be awake until Tuesday.