April 10, 2025

Why Your AI Assistant Still Sucks (And How MCP Might Help) | Chaos Lever

This week’s main dish? Agentic AI and the Model Context Protocol (MCP). What the heck do those mean? Why are they being compared to USB-C? And why should you care unless you’re an executive with a robot butler? Ned breaks it all down while Chris offers the occasional therapy check-in. Spoiler alert: MCP is the plumbing behind smarter AI assistants, but whether we trust them with our calendar (or our lives) is still up for debate.

Oh, and yes, there’s a “Silver Spoons” reference, some Carlton love, and a side quest into RESTful APIs because this is Chaos Lever and we can’t stay on the rails. Literally. We try to unpack whether MCP could be the REST of the AI world or just another shiny-but-useless indoor train. Buckle up.

🔗 LINKS
Model Context Protocol: https://modelcontextprotocol.io/introduction
The Train: https://external-preview.redd.it/T4x6zmXqtoaJQxw8uhtcNdquSLFHualiTg1Gnac_ihA.jpg?auto=webp&s=6b728fb53bfab7cbb77d1bc54714f9362d33c4b5

00:00 - Almanacs and other lies

00:45 - Indoor trains and childhood dreams

04:00 - Easter Bunny truth bombs

06:00 - Robot butlers and sad AI assistants

10:00 - Agentic AI: What is it?

16:00 - Real-world examples and use cases

21:00 - RESTful APIs and why they matter

27:00 - MCP explained: tools, prompts, and resources

33:00 - Trust issues and destructive autonomy

34:50 - Final thoughts and indoor train predictions

[00:00:00.00]
Chris: What's interesting, I don't know if you read, but the Farmer's Almanac, right? Comes out every year.


[00:00:06.19]
Ned: I'm aware that it exists. I've never actually read the Farmer's Almanac because it always seemed like total hokum.


[00:00:12.27]
Chris: If by that you mean absolute accuracy, then yes, what you said.


[00:00:20.05]
Ned: That's not what I meant.


[00:00:22.22]
Chris: But yeah, lately, you look at the 2025 edition that came out, obviously right before 2025. It's the shortest edition they ever put out. It's actually a leaflet that says, Fuck if I know.


[00:00:44.16]
Ned: Hello, alleged human, and welcome to the Chaos Lever podcast. My name is Ned, and I'm definitely not a robot. I'm a real human person who goes outside for long walks in very odd weather, and sometimes I even bring my dog with me. With me is Chris, who also comes with me sometimes. No, you don't. You're not allowed.


[00:01:08.13]
Chris: Wow. Also, wow.


[00:01:11.08]
Ned: I thought about it, and you were rejected immediately.


[00:01:14.11]
Chris: Also not a dog.


[00:01:19.07]
Ned: No. The reason you were rejected immediately was because of my dog. We've had conversations about you, and she is just not on board.


[00:01:28.19]
Chris: That's fair. I'm not disagreeing with the assessment, to be clear. Okay.


[00:01:34.17]
Ned: Well, I mean, you shouldn't because she's been spot on about everything else.


[00:01:38.04]
Chris: I mean, I've met me, and it's a dark, never-ending hole of sadness. And sometimes a peanut butter and jelly sandwich. Oo.


[00:01:47.20]
Ned: Crunchy peanut butter, baby. The only bright spot in an otherwise dark and uncaring universe.


[00:01:56.01]
Chris: Anyway.


[00:01:57.19]
Ned: Are we skipping the small talk already?


[00:02:02.15]
Chris: I'm tired of your attitude, frankly.


[00:02:04.28]
Ned: So you want me to talk for the next half hour? I get it. I get things. Yeah. I am not going to talk about Hedy Lamarr this week because I frankly did not have enough time to put together something that was complete enough that I would feel proud about presenting to you and our audience.


[00:02:30.00]
Chris: So you forgot all about it until eight seconds ago is what I'm hearing. How expected.


[00:02:40.10]
Ned: No, I thought I could... I wanted to talk about something else that has been on my mind because it's come up recently multiple times through different channels. So I know it's like... I don't know if it's a real thing, but it's like in the zeitgeist, if you know what I mean.


[00:02:58.25]
Chris: Are we talking about the Easter Bunny?


[00:03:00.26]
Ned: Of course. Hey, so I was just watching this thing about the Easter Bunny, and it turns out that it does not derive from some pagan god. Instead, it was a Germanic tradition that was founded sometime in the 1400s or a little bit later. Nothing to... Yeah.


[00:03:23.19]
Chris: And nothing to do with Oster.


[00:03:25.11]
Ned: Nothing at all to do with Oster. So you can just put that in your pipe and smoke it, all you people out there.


[00:03:35.14]
Chris: Celts, I assume.


[00:03:37.02]
Ned: I don't know. Let's talk about MCP. I'm going to start with agentic AI, and I saw you shiver a little bit. I do not like the term either. I really wish we could have come up with something better. Alas, we have not. We've got agentic AI. Along with the rise of agentic AI is a new protocol that's meant to support AI agents. That's called the Model Context Protocol, or MCP. If you're a person of a certain age, that immediately makes you think of Tron, and I'm sorry, there's nothing I can do about that.


[00:04:18.25]
Chris: Also, go watch Tron. Still good movie.


[00:04:20.27]
Ned: Still good. Maybe not the sequel, but like the first one. Watch the first one, listen to the soundtrack from the second one. Nice. Yeah. Since AI does not appear to be slowing down and in fact, seems to be swallowing the entire world or at least eating all of our energy, I thought we could talk a little bit about what agentic AI is and why MCP matters. But before that, some context. So let's all sit down in our bean bags. We're going to get in our context corner and have a comfy time. You ready?


[00:04:54.29]
Chris: No.


[00:04:56.02]
Ned: Okay, well, you need to get some fro-yo, too, because we're going back to the '80s, baby. There was this show called Silver Spoons. It starred Ricky Schroder, who I am choosing not to look up, as I assume that, like all child actors, his life did not turn out well. But on the show, his life was amazing. His dad was this super-rich titan of industry who basically had the personality of an eight-year-old. He gave Ricky pretty much whatever he wanted. He had a race car bed. He had a kid-sized train that ran through his house that he could ride, and a robot butler. I put an image of the train in the doc, just so you could enjoy that, Chris. Maybe I'll add that to the post or the show notes as well. The reason I did that is because I wanted that train. I wanted that train so fucking bad. I don't even know why. It is completely impractical for any indoor means of transportation. It's just the dumbest possible way to get around your house. But my stupid eight-year-old brain absolutely wanted that train. The closest I ever got was a wooden train play structure they had at my elementary school.


[00:06:23.12]
Chris: I mean, I keep telling you, put it in your house now. Don't let your dreams be dreams.


[00:06:29.02]
Ned: God, you are so right. All right. Well, that's all we have for today, people. I'm going to go buy a mini train. I'm sure the wife would love that. Also on the show was a young person by the name of Alfonso Ribeiro, who you probably know as Carlton from The Fresh Prince of Bel-Air, or if you're Gen Z or Gen Alpha, first of all, thanks for listening. My God. But also, you probably know him as the current host of America's Funniest Home Videos, not Bob Saget because, well, he's dead, but he also stopped doing that.


[00:07:05.21]
Chris: I didn't even know America's Funniest Home Videos was still a thing.


[00:07:08.11]
Ned: It sure is, and my kids love it. And with the prevalence of smartphones in the world, there's a lot more videos. It's a lot easier. Okay, totally off the rails in the first section. It's been a long week, Chris. And yes, the train joke was intentional. What I want to focus on is the idea of the robot butler, an AI assistant that does your menial chores for you and keeps your life in order. That's been a dream going back decades, if not centuries. Think Rosie, the Jetsons' maid, or that sad robot in Rocky IV, who had to deal with... What was that guy's name? Rocky's best friend. Let's go with Vinnie. Paulie. Paulie was his name. The poor robot had to deal with Paulie and his cigars. Of course, in pretty much any sci-fi feature that takes place in the future, there's always robots or computers. We have this vision of AI helping us to manage the mundanity of everyday tasks. So how's that going? Chris, why don't you tell me how much you love Siri?


[00:08:27.07]
Chris: Why'd you have to say the name out loud?


[00:08:29.29]
Ned: I mean, I don't have an Apple device in this house, so it's not my problem. When was the last time that you had your, I'll just say AI assistant, I won't say the name again, do something for you beyond playing a song, telling you the weather, or setting a timer?


[00:08:49.21]
Chris: I don't even use it for that last one.


[00:08:52.19]
Ned: Oh, that's my main use case: setting a timer in the kitchen.


[00:08:56.29]
Chris: Yeah, I have a timer in the kitchen for that.


[00:09:00.01]
Ned: Yeah, but then I have to reach out. Anyway, so how often does it screw up even those basic tasks?


[00:09:08.14]
Chris: I mean, it has been successful before.


[00:09:12.25]
Ned: Would you call it consistent?


[00:09:16.13]
Chris: I would call it something that was successful before.


[00:09:23.02]
Ned: Exactly. When it comes to helping us, most AI assistants are patently awful, if not outright malicious. They're bad at understanding us when we talk, the main thing they're supposed to do. Even if they do get the words right, they get the context wrong. Even if you phrase everything just perfectly, they still might get it wrong three out of 10 times. I'm being generous.


[00:09:52.04]
Chris: I had to deal with AI while scheduling some medical procedure, because apparently having a human answer the phone is too challenging anymore. It was one of those AI, LLM-connected monstrosities that, every single time I said my date of birth, told me, Just to confirm, you were born on... What did they say? Oh, yeah, 2019. Yes, yes, robot asshole. I'm six years old.


[00:10:25.23]
Ned: You talk very well for a six-year-old. You should watch your sassy mouth, though. Even if you can get the AI assistant to understand you, getting it to integrate with other services and tools is basically impossible. Every AI assistant does this integration differently, and it's made life hell for any developer who wants to write such an integration. One of the promises of agentic AI is that the next generation of these AI assistants will be far better, low bar, and they're going to do it in a couple of different ways. Enter the LLM. Think about the current crop of AI assistants that are bundled in with all of our smart devices. Basically, all of them were envisioned and created before the launch of the generative pre-trained transformers that power ChatGPT and Copilot and other newer LLMs. That's part of what makes them so bad at interacting with you: they don't have what LLMs have. Think about a conversation you've had with an AI assistant versus one that you've had with ChatGPT. ChatGPT, while it does hallucinate, is just a much richer interaction. It's more pleasant to talk to.


[00:11:55.14]
Ned: It does seem like ChatGPT could actually be helpful in accomplishing tasks if it were able to take action in the real world. As an aside, this is probably why Apple was pushing Apple intelligence so hard. They know that the S word sucks and has sucked for a while, and that it is woefully behind when compared to current AI LLMs. Apple intelligence was supposed to be this giant leap forward because they needed to keep pace, only they botched it so hard, they had to push it back by at least a year, maybe longer. For one of the richest companies in the world, something is rotten in the center of Apple.


[00:12:44.03]
Chris: I would have said something is rotten in the state of Cupertino, but I'll allow your bad metaphor to continue.


[00:12:48.19]
Ned: Mine was worse. Yeah, okay. We'll go with yours. ChatGPT and its ilk are able to produce text, images, and video. Sorta. The video is pretty terrible, but I think it'll get better. You can also apply the same techniques to making music, which, again, not great, not horrible. It'll get better. But right now, they can't do much outside of their browser window. They can't book an appointment for you or look up the current weather conditions in your area. They can't send an email to your mother, who is very disappointed you haven't responded to her message from yesterday about how disappointed she is in you in general. And why can't you be more like your sister? She went to an Ivy League and got married to Todd, who's the son that your mom always really wanted. Are you just going to be a podcaster for the rest of your life? No, mom.


[00:13:46.02]
Chris: Hey, buddy. You need a minute?


[00:13:48.09]
Ned: I'm okay. If we want LLMs to be able to take action on our behalf and get our collective mothers off our backs, we need to give them agency. That's what agentic AI is meant to do. What does it mean to give an AI agency? It means allowing the AI to go off and do some work on your behalf. You give them a goal, and then they're able to take actions, sometimes in multiple steps, to accomplish that goal. They do need some level of reasoning, which is why these new reasoning models have been coming out. Part of that is so that they can do this agentic thing where they have to reason through several steps to accomplish a goal. In some cases, you can have them take proactive measures without consulting you first. A little scary, but it depends on what you're asking them to do. This is not some far-off idea, and we already have some examples of agentic AI today. The robots from Boston Dynamics, for instance, are able to navigate the real world and accomplish tasks given a particular goal. The person operating the robot doesn't have to specify how many steps to take, how hard to grip something, or how to navigate the terrain.


[00:15:09.11]
Ned: You just give the robot, Here's a goal, and they use machine learning and reasoning to accomplish that goal. Another example might be an AI assistant that is able to help you plan your agenda for the next week. This is the thing I actually want. They could look at your calendar and figure out, Oh, look at that. Okay, so Ned needs to be in London next week for a conference. I'll book him at a hotel that's closest to the venue. I'll pick Hilton if possible because he has a membership through Hilton rewards or whatever. I'll book a flight that leaves from the correct airport and add a reminder on his schedule when to leave for the airport to make it on time. Oh, and I'll book a red eye coming back so he can make it in time for his son's jazz band competition. I'll reserve a ride for when he arrives at the airport and put together a list of local sushi places he might like to eat at. And finally, I'll email the details to his wife so she knows where the hell he's going and where he's staying because he will forget. Side note, I will.


[00:16:11.08]
Ned: These are the kinds of things that if you had a human assistant, if you were like an executive, these are the things that they would do: managing calendars, travel, transportation, hey, maybe even wardrobe. Is it going to be cold and rainy in London? Well, obviously. Well, here's a packing list for the outfits that you should wear. This seems like a thing that would actually be useful to me, selfishly. You could also see this thing happening from a coding perspective. Right now, I can use Copilot in VS Code to help me write an application. In fact, yesterday, I had this idea: what if I could take an RSS feed from my blog or from Chaos Lever, and every time a new item appears in the feed, ingest that, send it over to an LLM, have it create a blog post for me, and then send the blog post to me for editing and publication? That would be pretty cool. What did I do? I fired up Copilot, and I had it help me create a Lambda function that ingests RSS feeds and then has an AI summarize the contents and write a draft of a blog post.


[00:17:29.09]
Ned: I used Python as the programming language, and the actual amount of Python I had to write by hand was almost none. It created the files for me. Most of it worked on the first go. The biggest thing that I got hung up on was the way that Lambda forces you to package up modules for Python. It's annoying, but now I know how to do that. It couldn't do that for me, though, unfortunately. It couldn't do some other things. It couldn't check the files into source control for me. It couldn't actually deploy the code and the infrastructure to AWS. It couldn't test what I had deployed. At least not yet. I can see that these integrations are already on the way, and the thing that will facilitate these integrations, and the tools that are necessary to make them happen, is the Model Context Protocol. Hey, we got there. Sort of. And the whole Copilot thing was actually cool. I was surprised how quickly I was able to tell it what I wanted and have it actually spit out something that did what I wanted. I was like, Oh, dang. All right.

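For the curious, the overall shape of the Lambda function Ned describes might look something like this. This is a minimal sketch, not the code Copilot generated: the helper names and event shape are illustrative, and the LLM call is stubbed out.

```python
# Sketch of an RSS-to-draft Lambda: fetch a feed, pull out the items,
# and hand each one to an LLM to turn into a draft blog post.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_feed(url: str) -> str:
    """Download the raw RSS XML from the feed URL."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def parse_items(rss_xml: str) -> list[dict]:
    """Extract title, link, and description from each <item> in the feed."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "description": item.findtext("description", default=""),
        })
    return items

def draft_post(item: dict) -> str:
    """Placeholder for the LLM call that summarizes and drafts the post."""
    # A real version would call your provider's chat-completion API here.
    return f"DRAFT: {item['title']}\n\n{item['description']}"

def handler(event, context):
    """Lambda entry point; event is assumed to carry the feed URL."""
    rss_xml = fetch_feed(event["feed_url"])
    return {"drafts": [draft_post(item) for item in parse_items(rss_xml)]}
```

The packaging headache Ned mentions is real: anything beyond the standard library has to be zipped up alongside the handler, which is why this sketch sticks to `urllib` and `xml.etree`.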

[00:18:44.27]
Chris: I think the important part of that, though, is the part at the end where you said the actual amount of Python you had to write was almost nil.


[00:18:52.08]
Ned: Yes.


[00:18:53.04]
Chris: Because I'm not going to throw too much shade at you, not more than normal, but you're not a programmer per se. No. How much of this would you be able to write by yourself?


[00:19:04.29]
Ned: I would eventually be able to produce it myself, but it would have taken much longer because I'm not familiar with all the modules that are available, and my grasp of Python syntax is not great. I can read it really well, but actually having to write something from scratch, that's where the challenge is. Actually, you lost me.


[00:19:22.17]
Chris: Did you imply that Python has a syntax?


[00:19:26.10]
Ned: Careful now. Yeah, the fact that it was able to just spit out the code, and then I had to deal with dependencies. God damn it, fucking dependencies. Anyway. We want a way for AIs to snap into things. The way that we have done that in the past is through APIs. When you want your application to interact with another program or service, that program or service typically has a front-facing API, or application programming interface. This is a well-defined, hopefully, set of endpoints, actions, and parameters that you can use to tell the program to go do something. Windows, if you're using Windows, or Mac, if you're using Mac, has an API. Well, actually, several APIs, so that anyone writing an app for Windows can program against that API. If I want to draw a window on the screen, or I want to write a file to the file system, there's an API for those actions. Programming languages have libraries and software development kits that take the API and implement it in the programming language so you don't have to. You just import that module into your Python, let's say, and then use it to write the contents of something to a file using the I/O module.


[00:21:02.27]
Ned: The underlying code for the module will interact with the Windows file system API to make the necessary calls to access the file and write contents to it. One of the big revolutions in web development, really the birth of Web 2.0, was the creation of what's called the RESTful API, which stands for representational state transfer, which is something I did not know until today. I knew it stood for something, but I never looked it up before. Why would you?


[00:21:38.10]
Chris: I just assumed that it was because the APIs were very sleepy.


[00:21:43.00]
Ned: They're chilling, man. They don't want to get up all in your grill. REST APIs use standard HTTP verbs and endpoints to make requests and receive responses. There were ways to do this prior to REST. One of the bigger ones was called SOAP, which used XML, and it was a fucking nightmare, which is why no one uses it anymore. REST made things really easy. It uses JavaScript Object Notation, or JSON, in the body of the request and in the response to package up data and parameters and whatever the returned values are. When you want to interact with a RESTful API, it's very easy. You create an HTTP request, let's say a GET, which is a request for information from a web server endpoint. The endpoint will be the thing that you get information about as defined by the API, and your request might include some query parameters that refine the response that you receive. Everything in REST is using JSON and HTTP, which are well understood, and they're also allowed on most networks, so you didn't have to open up weird ports or anything. It's all happening over 80 and 443. And once services like Twitter, Slack, PayPal, Google Maps, and a bazillion others started using RESTful APIs, it made it that much easier for your application to integrate with theirs.
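To make that request/response shape concrete, here's a tiny sketch in Python. The endpoint and its parameters are invented for illustration; they're not a real API.

```python
# A GET against a hypothetical RESTful endpoint, stdlib only.
import json
import urllib.parse

base = "https://api.example.com/v1/weather"      # made-up endpoint
params = {"city": "London", "units": "metric"}   # query parameters refine the response

# Query parameters ride along in the URL of a GET request.
url = base + "?" + urllib.parse.urlencode(params)

# urllib.request.urlopen(url) would send this over port 443; the server
# answers with JSON in the response body, something like:
body = '{"city": "London", "forecast": "rain", "temp_c": 9}'
data = json.loads(body)  # now it's just a Python dict your code can use
```

That's the whole trick: well-known verbs, URLs, and JSON, which is why almost every service can talk to almost every other one.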


[00:23:18.08]
Ned: You want to accept payments on your website? Maybe you want to give Chaos Lever money? That'd be great. We could integrate with the Stripe API and take their little code block, plop it on our website, and boom, now we can accept your payments. Send us all your money. Do you want to be able to give directions to your swanky new bakery, which I will open when I retire from all the money that you give me? I can add a Google Maps widget that talks to their API and requests directions. This is what made Web 2.0 explode. It was the rise of these RESTful APIs across all kinds of services, and also for communication between services. But that's a conversation for another day, if we ever want to get into microservices architectures. The reason I bring this up is because MCP is meant to be for AI what REST was for Web 2.0: a standardized protocol that lets clients make requests against servers, except in this case, it's LLMs making the requests. Makes sense?


[00:24:27.16]
Chris: Yeah, they're still clients. They're just little baby Skynets.


[00:24:33.10]
Ned: Real chatty Skynets. I always got the impression that Skynet was very monosyllabic. It would hand down an edict, kill all humans, and that was it. It didn't want to have a conversation about it.


[00:24:46.25]
Chris: Maybe more of a HAL 9000 situation.


[00:24:50.03]
Ned: Indeed. The Model Context Protocol. The core idea is that LLMs need a way to access tools outside of their core functionality that allow them to get things done. They also need a way to discover those tools and figure out which tool to use for a given job. This is what MCP can help with. The MCP founders themselves describe MCP as a USB-C port for LLMs, by which they mean USB-C creates a standard way to connect peripherals to your computer, and MCP provides a standard way to connect tools to your LLMs. It's a tortured metaphor, but it works. I'll allow it.


[00:25:35.13]
Chris: Is torture part of the protocol?


[00:25:37.17]
Ned: Can be. That could be one of the tools that's supported. The general architecture is that of an MCP client that is leveraging an LLM, and then one or more MCP servers that it's configured to talk to. The protocol used to have that conversation is the MCP protocol, and the MCP servers can be located on your local machine where you're running the LLM client, or they could be hosted by some internet-facing service. The MCP protocol itself has a few layers. The protocol layer handles the high-level communication patterns like requests and responses. Then the transport layer handles shuttling that protocol-level communication between endpoints. It can use either standard in and standard out if you have a local MCP server, or it will use HTTP for remote MCP servers. It's not introducing any wild new transport technology. All very well-understood stuff. The MCP servers can offer resources, prompts, and tools. Resources are basically data sources that the LLM might want to access. Could be database records, log files, API responses, etc. A weather service, say, might offer weather information about your location using a resource template that's defined by the server, and the LLM can make requests against that.
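On the wire, those protocol-layer messages are JSON-RPC 2.0. A sketch of what a client writes to a local server's standard in to discover its data sources; the method name follows the MCP spec, but the resource in the response is invented for illustration:

```python
# An MCP discovery exchange over the stdio transport, as JSON-RPC 2.0.
import json

# The client asks the server to enumerate its resources (data sources).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/list",
}
line = json.dumps(request)  # written to the server process's stdin

# The server answers on stdout with a matching id and its resource list.
# This particular resource is hypothetical.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "resources": [
            {"uri": "weather://london/current",
             "name": "Current weather for London"},
        ]
    },
}
```

The same request/response pattern runs over HTTP for remote servers; only the transport changes, not the messages.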


[00:27:14.21]
Ned: The prompts are templates that the client can use to interact with the MCP server. If the data source is weather information, the prompt describes how the MCP client could craft a prompt for the MCP server to get weather information back. It's like a menu with some fill-in-the-blanks. Another example: if the MCP server is a code analysis tool, it might have a prompt that the LLM can use to check for security issues in some Go code. The client can ask the MCP server to list out all the prompts so it can find the one that most closely resembles what it's trying to accomplish. Same thing with data sources. There's a list command that says, Give me all your data sources. The last one is tools, and this is probably the biggest thing, as they provide executable functionality, which sounds grim, but it provides a way for the MCP client to execute things, by which I mean the client can effect real change by using tools provided by the MCP server. As an example, there is an MCP server out there for GitHub, and it has tools that let you do things like create a file, create a repository, generate a pull request, or list out existing issues.

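Invoking a tool follows the same JSON-RPC pattern. Sketching the GitHub example: the `tools/call` method name comes from the MCP spec, but the specific tool name and arguments here are assumptions, not the GitHub server's actual interface.

```python
# A client asking an MCP server to run one of its tools. In practice
# the client would first issue a tools/list request and pick from
# whatever the server advertises.
import json

call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_repository",                      # assumed tool name
        "arguments": {"name": "rss-to-blog", "private": True},
    },
}
wire = json.dumps(call)  # this is what crosses stdin or HTTP
```

This is the piece that gives the LLM agency: the text model never touches GitHub directly, it just emits a well-formed call and the server does the work.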

[00:28:35.13]
Ned: There's a ton more. I'm not going to go through the full list, but there is a command to list all of them out. Now, the idea of Copilot being able to check in my code to a GitHub repository without me having to do it is totally possible if Copilot gets wired into the GitHub MCP server. That server could be running locally, so I could spin it up and give it my GitHub credentials, or, I assume at some point, GitHub is going to make it available as a hosted service. That's it. That's MCP. What do you think, Chris?


[00:29:16.16]
Chris: I actually thought there was going to be more to it, but it makes sense because it's not really a thing. It's the plumbing to make a thing, which is different.


[00:29:26.27]
Ned: It basically defines the expected inputs and outputs for these different types. The data sources, the tools, and the prompts all have expected implementations: if you're writing an MCP server, these are the requests you're going to get, and these are the fields that you should respond with.
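Those expected inputs and outputs boil down to schemas. A server advertises each tool with a JSON Schema describing its arguments, so the client knows exactly which fields to send. The shape below follows the MCP tool-definition convention; the tool itself is made up.

```python
# How an MCP server describes a tool to clients: a name, a description
# (which the LLM reads when picking a tool), and a JSON Schema for the
# arguments. The tool here is hypothetical.
tool_definition = {
    "name": "get_forecast",
    "description": "Return the weather forecast for a city",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "days": {"type": "integer", "minimum": 1, "maximum": 7},
        },
        "required": ["city"],
    },
}

def valid_arguments(tool: dict, arguments: dict) -> bool:
    """Client-side sanity check: are all required fields present?"""
    required = tool["inputSchema"].get("required", [])
    return all(field in arguments for field in required)
```

A full client would validate against the whole schema, not just required fields, but this is the contract that makes the plumbing predictable.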


[00:29:50.20]
Chris: Yeah, and if you're going to make an automated system, the agentic AI, something that runs with any measure of autonomy, that stuff's got to be bulletproof.


[00:30:01.22]
Ned: Yeah, and that's where I have some reservations, hesitations. Because this all feels very loosey-goosey to me. The reason I say that is, when you're writing a program in Python or Go, and you need to interface with something else, the whole process is very well defined and it's deterministic. I ask for some file's contents, and that API gives me the contents, and then I do something with the contents. The API of the file system determines how I have to ask and the data structure of what I get back. My program doesn't try to reason out what to do with it. I tell it what to do: you got this data, now go use it over here. MCP and agentic AI feel like I'm losing control. I'm not entirely comfortable with it.


[00:31:02.00]
Chris: Fair. It also feels like, I mean, maybe we need to know more about agentic AI and the actual implementations to answer this question, but it feels like there should be a way to lock that down a little bit more. You have the opportunity, let's go from zero to five. Zero being you don't do anything, five being you have absolute complete authority to do whatever you want. Whenever I run Job X, I'm going to run it at level one, and I'm going to run Job Y at level four.


[00:31:31.02]
Ned: I think part of that is how you create the MCP client side of things. That's not necessarily something you and I would program. These MCP clients are going to be prebuilt for us, for the most part. If you are running, what's the name of that desktop thing?


[00:31:52.22]
Chris: Terry.


[00:31:53.17]
Ned: Yes. If you're running Terry. Good Lord. Anthropic. Anthropic has a desktop client. If you're running the Anthropic desktop client today, you can snap MCP servers into it. That's just a thing that you can add. But the wiring on the back end for the MCP client is something they've already done for you. I feel like the same thing is going to happen with Copilot and ChatGPT and all these other things: you can tell it, I want to interact with these different servers, but it's going to handle the actual wiring. They're going to have to provide some dials to you that control, like you said, the level of autonomy you want to give that particular client. When I ask a Google Home to play the latest Carly Rae Jepsen album, it's okay if it gets it wrong. I mean, it's not okay. But worst case, I have to pull out my phone and do it myself. When a program is undergirding a million-dollar business at a massive conglomerate, a failure of communication or sudden surprises due to the non-deterministic nature of AI? It seems bad. Yesterday, it created all the files I wanted in S3, and today, it picked the wrong MCP server and tried to overwrite the entire production S3 bucket.


[00:33:18.04]
Ned: Oopsy dooz. For any of these AI agents to be successful, they need to build trust with us and succeed more often than they fail. This can't be a Tesla full self-driving situation. I don't need my agentic AI trying to kill me or mow down a small child every 20 minutes. That probably means, like you said, Chris, keeping a human in the loop and having a manual override for when it's going to take potentially destructive actions. To bring it all the way back, I'm pretty okay having my AI agent stuck to well-defined tracks. I don't trust it on the open road.


[00:34:04.12]
Chris: Totally fair. Plus, trying to mow down small children, that's our thing.


[00:34:08.22]
Ned: God damn it. Absolutely. The other thing I kept wondering is, how is this better? The challenge I've had with most AI assistants is that they're so buggy and unpredictable that it's usually faster and safer to just do it myself. It feels a little like that indoor train. It's shiny, I want it, but I also know that it's wildly impractical and inefficient.


[00:34:40.09]
Chris: What I'm hearing is six weeks from now, you're going to have both agentic AI and an internal train in your house. Shut up.


[00:34:50.13]
Ned: Hey, thanks for listening or something. I guess you found it worthwhile enough if you made it all the way to the end. So congratulations to you, friend. You accomplished something today. Now you can go sit on the couch, dial up your AI assistant, and tell it to make you a hot dog. You've earned it. You can find more about the show by visiting our LinkedIn page, just search Chaos Lever, or go to our website, chaoslever.com, where you'll find show notes, blog posts, and general tomfoolery. We'll be back next week to see what fresh hell is upon us. Ta-ta for now. Yep, and they had to spell it D-I-Z-Z-K-N-E-E-L-A-N-D because Disney.


[00:35:35.16]
Chris: They were going to get sued.


[00:35:38.16]
Ned: I had the CD. That was the only good song on it.


[00:35:44.24]
Chris: Yeah, it's an SR-71 situation. I had that one song, it's a banger. We don't need to talk about the rest of it ever.