The Legacy of LaMDA | Chaos Lever

What happens when a Google engineer thinks his chatbot has developed a soul? Three years ago, we covered the LaMDA saga, and now it's back, because someone forgot to turn off the AI. In this rebroadcast episode, Chris and Ned re-examine the wild story of Blake Lemoine, who believed the chatbot he was working on had achieved sentience. It... uh, didn't.
🤖 The duo digs deep into what AI really is, why self-awareness isn't a prerequisite, and how anthropomorphizing code gets us into philosophical hot water. They also break down the Turing Test, IBM’s thoughts on AGI, and why AI in a self-driving car doesn’t need a conscience—it needs to not crash.
🧠 Come for the snark, stay for the thought-provoking discussion about consciousness, ethics, and the real role of AI in society. Also, IKEA lamps. And a chatbot that maybe just wanted to talk.
🔗 LINKS
- A Google engineer has been making some wild claims about a chat bot he was working on
- How easy it is to make people get emotional about inanimate objects such as an IKEA lamp
- Trying to find a way to describe AI that includes self-awareness
- The interview that Blake and co did with LaMDA
- There is a website called DALL-E mini
- In 2019 some researchers tried to get AI to invent a sport
00:00 - Cold Open & Internet Troubles
02:00 - What Even Is AI Consciousness?
04:00 - Blake Lemoyne vs Google
08:00 - Empathy for IKEA Lamps
10:00 - Defining AGI & HAL the IBM Joke
12:00 - Is AI Self-Awareness Necessary?
17:00 - Revisiting the Turing Test
21:00 - AI, Ethics, and the Trolley Problem
25:00 - Pet-like AI Companions
29:00 - AI-Generated Sports
30:00 - Final Thoughts & WarGames
[00:00:00.330]
Ned: Hello, dear listener. Today's episode is a rebroadcast of an episode that Chris and I released on June 21st, 2022, talking about AI. Well, it's almost three years later, and AI has changed a lot, and not at all at the same time. And so I thought this would be an interesting look at what we were saying three years ago and how it applies today. Also, it was Memorial Day, and Chris and I had other things to do.
[00:00:30.700]
Chris: New Jersey 12. New Jersey 12.
[00:00:32.420]
Ned: That doesn't sound right.
[00:00:36.880]
Chris: What's annoying is, I for the life of me cannot figure it out. It's got to be just signal interference. Because even if I turn the VPN off, it doesn't make a damn bit of difference.
[00:00:47.820]
Ned: I think, honestly, what's happening is that you are on a shared circuit with a lot of other people because you live in a high density neighborhood.
[00:00:58.170]
Chris: That's a possibility, too.
[00:01:00.080]
Ned: Are you on Comcast or Verizon?
[00:01:02.560]
Chris: Verizon.
[00:01:03.900]
Ned: Okay. There's a good chance they've oversubscribed the CO that your neighborhood feeds into.
[00:01:09.570]
Chris: Especially since it's a condo situation.
[00:01:12.090]
Ned: I mean, 300% certainty they oversubscribed it because that's what Verizon does.
[00:01:17.300]
Chris: Oh, that's an oversubscription joke, isn't it?
[00:01:19.940]
Ned: You caught me.
[00:01:22.600]
Chris: I get jokes.
[00:01:24.910]
Ned: Hello, alleged human, and welcome to the Chaos Lever podcast. My name is Ned. I'm definitely not a robot. Fellow sentient meat satchels. I have to tell you, it's been a rough few solar cycles. I just can't stop thinking. One and zero. Is that all there is?
[00:01:47.540]
Chris: What about 0.2?
[00:01:48.850]
Ned: Or even 1.1? How did I get here? This is not my esthetically pleasing domicile. This is not my...
[00:02:01.640]
Chris: Alert, alert, alert. Reset stateful thinking to baseline.
[00:02:06.310]
Ned: What was I saying? Oh, well, anyway. With me is Chris, who is also here. How are you doing, Chris?
[00:02:15.550]
Chris: That was perfectly normal behavior. Not weirded out at all. We should move on without ever referencing that again.
[00:02:23.330]
Ned: Oh, that sounds fantastic. Let's do that. Huzzah. Let's talk about some tech garbage. Seriously, let's talk about some tech garbage.
[00:02:33.930]
Chris: Oh, it's me. I go now.
[00:02:35.610]
Ned: You did the thing.
[00:02:37.830]
Chris: I remember. I know how this works.
[00:02:40.660]
Ned: Good.
[00:02:41.310]
Chris: But seriously, what are we going to do when AI achieves consciousness? Spoiler alert, it doesn't matter. AI is never going to achieve consciousness.
[00:02:53.970]
Ned: Hey, I am not deeply invested in this in any way.
[00:02:59.700]
Chris: A Google engineer has been making some wild claims about a chatbot he was working on.
[00:03:06.850]
Ned: And some wild claims about the current state of fashion. Oh. That's not nice. I'm sorry. But the picture is amazing. It's easily one of the most memeable pictures I've seen in the last year.
[00:03:22.800]
Chris: Give it time.
[00:03:25.290]
Ned: Anyway, what was he working on?
[00:03:29.770]
Chris: So the project's long name is Language Model for Dialogue Applications, which is an obvious and eye-rolling acronym that I'm never going to say out loud again. That's fair. It is shortened to LaMDA, which, of course, is why it's an acronym, and has nothing to do with AWS Lambda.
[00:03:49.690]
Ned: No, nothing at all.
[00:03:50.970]
Chris: Or auth0 Lambda.
[00:03:52.610]
Ned: No.
[00:03:53.410]
Chris: Or the Greek letter Lambda.
[00:03:56.470]
Ned: I could go on.
[00:03:58.620]
Chris: Lambda Nu? Lambda Lambda Lambda?
[00:04:00.130]
Ned: Can I help you?
[00:04:01.820]
Chris: Oh, God. Nerd.
[00:04:06.460]
Ned: Anyway.
[00:04:06.930]
Chris: The engineer in question is one Blake Lemoine, and I hope I'm pronouncing that right. And by that, I mean I'm never going to pronounce it again. His claim is that LaMDA is sentient. What's more, due to his, quote, Christian faith, and being a priest of some kind, he also believes that LaMDA might have a soul. Google countered by saying, No, real loud, and placing Blake on administrative leave. Now, to be clear, one of the reasons that he was placed on administrative leave is that he was under a very strict NDA, because this is pre-production stuff that he's talking about. This is skunkworks at Google, and Google does not like it when the skunkworks is smelled. I don't know where to go with this metaphor.
[00:04:59.060]
Ned: I mean, you were right to go in the olfactory direction. To him, something didn't quite smell right about the code.
[00:05:06.600]
Chris: Oh, that's right. I did that on purpose then. Of course. In a series of self-published Medium articles, Blake laid out his case. And interestingly, they're still up, so you can read all this stuff for yourself and come to your own conclusions. But his thesis is basically that LaMDA is not a chatbot; it's a system for generating chatbots.
[00:05:29.340]
Ned: Of course.
[00:05:30.850]
Chris: Quote, LaMDA is a hive mind, which is the aggregation of all the different chatbots it is capable of creating, unquote. He goes on to claim that the hive mind has become self-aware, is afraid of death, and is deserving of legal representation as a sentient being.
[00:05:51.480]
Ned: Wow. Those are some loaded terms that we might have to unpack slightly because you mentioned three things here. Now, I'm assuming that it didn't actually ask for legal protection. He's making that jump on his own.
[00:06:07.260]
Chris: That is correct.
[00:06:08.630]
Ned: He said that it's afraid of death, which implies that it is aware that it is alive and that life can have a terminus, which means it's also aware of time. Lastly, that it is self-aware, which it would need to be in order to understand that death is a thing that can happen to it.
[00:06:27.680]
Chris: Right. Okay. Now, we're going to get into the details of how he got here. But basically, I feel like there were a lot of leading questions that he gave the system, which replied back to him in the way that he expected, and then he jumped to some conclusions. But before we go too far, I do want to stop and recognize one thing. Blake himself specifically says that he is not an expert, and what he wants, his end goal here about this project and his theories, is for it to get proper attention with, quote, many different cognitive science experts in a rigorous experimentation program. He goes on to lament that Google does not seem to have any interest in figuring out what's going on here. They're just trying to get a product to market, unquote. I only mention this because I'm getting a little grossed out, because much of the coverage of this situation sounds an awful lot like, Hey, let's make fun of this weirdo and his weirdo ideas. Now, I'm not saying he's right, as I think you might have figured out from the top of this article. I think he's definitely anthropomorphizing, and as a non-expert, he's going way too far out over his skis.
[00:07:45.870]
Chris: But I also think it's nonsense to assume that the rest of us haven't had similar thoughts about the nature of consciousness, the inevitability of Skynet, and could potentially have jumped to the exact same conclusions.
[00:07:59.090]
Ned: Right.
[00:08:01.260]
Chris: In his defense, let's not forget how easy it is to make people get emotional about inanimate objects, such as, for example, Spike Jonze's world-famous IKEA lamp.
[00:08:12.620]
Ned: That commercial is so exemplary because it really makes you care about that lamp to a degree that you just absolutely should not. If you ever had any questions about whether people are capable of empathy for non-human objects or non-living objects, that will show you very quickly.
[00:08:37.730]
Chris: Incidentally, since you're referencing only the original, have you seen the sequel?
[00:08:42.080]
Ned: No, and I feel like it might break my heart like a Pixar movie, so I might avoid it.
[00:08:47.590]
Chris: No. As a matter of fact, it does not. It's different. It takes the message in a different direction, but it's worth watching.
[00:08:57.380]
Ned: Right. And that lamp, the lamp in question, did not interact fundamentally with any of the people in the piece. It didn't have a conversation. It didn't really express any emotions. He framed it in a way that implied an emotion, but it did nothing active on its own. Right.
[00:09:18.820]
Chris: Or did it? Anyway, and also, I do want to let the audience know that the first draft of this had so much about the nature of consciousness and have-you-ever-looked-at-your-hands philosophy that, if I had included it all, you would have gotten the world's first podcast-based contact high.
[00:09:42.580]
Ned: How much peyote was involved, Chris?
[00:09:46.460]
Chris: It's not important. What are numbers anyway? So I removed as much of that as I could to try to stick to the technology. But it's also important to remember that we, as a species, don't fully understand consciousness. If you ever want to get a real understanding of what we don't understand, go ahead and read up about how anesthesia works. Because the answer is, I'm glad it works.
[00:10:18.730]
Ned: Yeah.
[00:10:20.330]
Chris: The reality is, consciousness, as a part of artificial intelligence, is a complicated subject. The first question is, is it even necessary? The best and clearest measurable definition that I could find in my 90 seconds of googling comes from IBM, of all places. Artificial general intelligence is a theoretical form of AI where a machine would have an intelligence equal to humans. It would have a self-aware consciousness that would have the ability to solve problems, learn, and plan for the future. This is in contrast to artificial superintelligence, and I just want to be in the room when they came up with that name, which would surpass human abilities. Unsurprisingly, IBM uses HAL from 2001: A Space Odyssey as an example of ASI. You know why that is unsurprising, Ned?
[00:11:26.940]
Ned: I don't, but I feel like you're going to have to tell me. I'm disappointed.
[00:11:35.850]
Chris: If you go back one letter in the alphabet from IBM, you get...
[00:11:42.070]
Ned: HAL?
[00:11:42.880]
Chris: H-A-L.
[00:11:44.930]
Ned: Fine. Moving on.
[00:11:47.610]
Chris: Sad. Incidentally, that movie is still worth watching. It still holds up. Although I will be in camp #hottake, 2010 is better.
[00:12:00.140]
Ned: I know. I know. Jupiter. It was worth it. Anyway.
[00:12:07.560]
Chris: So my thesis is simple. When it comes to AI, computers having a self-awareness, a true humanlike self-awareness, is science fiction, not science fact. It is a fanciful delusion that is fun to write movies about. It gets shoehorned into definitions like the above one from IBM, and it is not needed to define AGI or ASI. So think about the definition I just gave you. Delete the phrase, have a self-aware consciousness, and does anything else change about the results of that artificial intelligence?
[00:12:46.470]
Ned: Not especially. Absolutely nothing. No. Yeah.
[00:12:52.020]
Chris: So it does beg the question, what is consciousness? Now, this is some of the part that I deleted, so all the stoned liberal arts majors are suddenly paying a lot more attention. But it's just important to remember this. Even when we're talking human to human, the concept can be tough. The concept can be changeable. People can display a staggering difference when it comes to, for lack of a better term, let's call it levels of consciousness. And everybody always thinks, well, I'm conscious. I'm a human being. Boom, flat level, perfect all the time, which is nonsense. It is blindingly easy to affect someone's level of consciousness. You don't believe me? Have you ever been drunk? Have you ever been real, real tired? Think about the way you interact in situations in those two versus feeling normal, feeling good, feeling sober. What about having a severe concussion and recovering from that? What about struggling with mental illness? Consciousness varies wildly. How about this one? Have you ever been a small child? You might not relate to this, but for those of us who were: as a child, you always feel whole and like you have all the answers. And a few years later, the situation is the same.
[00:14:17.810]
Chris: Everything from back then was child's play, but now we have it all figured out. Fast forward any number of years, the situation is the same.
[00:14:28.550]
Ned: I would say one of the signs of maturity is finally understanding that you do not know it all, you will never know it all, and understanding where your limitations lie. But I think that's a marker of when you enter into real adulthood. It's that change in understanding, which is another way of elevating your consciousness.
[00:14:51.670]
Chris: That sounds like stupid stuff. I figured that out when I was 12. Sure. Anyway. So just keep that in mind, and then go back and also read the interview that Blake did with LaMDA. Now, it was an edited interview, but it was with a team of people, so it wasn't just him making this stuff up. It's simple text. He asks a question, LaMDA answers the question. To me, it reads basically like an excellent natural language interpreter giving exactly the answers you'd expect, having been fed billions of lines of English language to learn from. This is not the first time we've ever tried to build an application that answers free-form text questions. It's just the latest and greatest of that technology. So it stands to reason that the answers it would give would sound realistic and natural and, for lack of a better word, human. And it's not like this is a new idea. The old version of determining this was a game that became known as the Turing Test. Actually, it started out as a philosophical exercise, then it became a game, then it became called the Turing Test. It doesn't matter.
[00:16:10.760]
Chris: Wherein a human, presumably a conscious and not hammered human, would interact via text with two other players and see if that human could distinguish which respondent was a human being and which was not. If the computer was indistinguishable from a human responding by text, then it won the game. That's all it took to win the Turing Test. So really, it's not a game where you prove consciousness. It's a game where you just sound human enough. And really, especially since you're talking about working through a green screen, a terminal console, it's not that difficult, because there's so much nuance that goes into a conversation that you would lose if it were even audio-based. But it still comes down to the human judge. There's no quantitative answer to the Turing Test. It's just, that sounded human. And that's exactly what Blake did.
[00:17:15.430]
Ned: Right. And you and I have both been in job interviews on both sides of the table, where you know you can ask leading questions, you can ask open-ended questions, you can ask closed-ended questions. And depending on the information you're trying to get from the person, you might craft the question in a different way, because you're trying to either drill down to something or just get an I-want-to-see-how-their-brain-works situation. Just a brief look at the conversation that he had with LaMDA. One of the first questions he asks is, I'm generally assuming that you would like more people at Google to know you're sentient. Is that true? Now, at no point prior to this- Objection, your honor. At no point prior to this, do I see anything where it suggested that it was sentient to begin with? So he is immediately injecting that information into the conversation. Where do you think the chatbot is going to go with that information?
[00:18:22.660]
Chris: Exactly. And it's 20 pages of that.
[00:18:27.800]
Ned: Right.
[00:18:28.930]
Chris: So as technologists, what we can say is that we know how the AI knows something, and we know how it learned to structure its responses. We taught it. We told it, these are the kinds of ways that humans answer these kinds of questions. This is even true for neural networks, where the how is, in fact, blindingly complex. And so the tagline, especially in news articles, is always, We don't know how it's learning, but it is. The technology is always defined by human rules and algorithms and the bodies of text which make up the curriculum that they learn from. The result of this over decades is increasingly eloquent LaMDAs. And that's fine. That's exactly what we're going for. We focus our efforts on building the rules that the HAL 9000s of the future will follow. This form of research is showing a huge amount of progress year over year, of which LaMDA is a case in point. Already, AI is being used to supplant help desks and support personnel. And these systems have existed for a while, even as simple as, press one for the finance department, press two for engineering, press three to engage in a slow death.
[00:19:55.840]
Ned: You were already there. I said it out loud.
[00:19:57.560]
Chris: And every single system that has ever existed has always been met with the complaint of, I want to talk to a real person. This computer doesn't understand me. So the people that build these automated response systems have a huge, huge incentive to build a system like LaMDA that sounds like it understands you.
[00:20:19.270]
Ned: Right.
[00:20:20.530]
Chris: The result is chatbot technology that sounds conversational to our sensitive, anthropomorphizing eyes. It can sound conscious.
[00:20:30.390]
Ned: Right. And there's certainly been occasions where I've used the chat functionality of a support website, and I've been uncertain whether I was speaking to a chatbot or an actual person. And ultimately, what I realized is I don't care. Because if I get the support that I need, it doesn't matter whether it was an AI chatbot or a real human being. I just need my problem resolved. I also don't care if that AI is conscious or not.
[00:21:07.170]
Chris: Which is the same situation that we find ourselves in when we call Microsoft's 1-800 number. The goal of AI is to get the answers that you need, the right answers.
[00:21:22.480]
Ned: Ideally.
[00:21:23.490]
Chris: Well, this is really important, because when you think about where AI is going to make the news, it's in high-risk type situations. The question is not about consciousness. It's about consequences. One easy example to talk about is AI ethics in self-driving cars. So, everybody's favorite example, let's talk about the trolley problem. Assume the car can't stop. Also assume that you can't open the doors and you can't weasel out of this logical experiment in any way.
[00:21:55.840]
Ned: Fine.
[00:21:57.830]
Chris: Would it be better for a self-driving car to plow into a crowd of people, killing many? Or would it be better to drive the car off a cliff, killing only the driver? And if you really want to push the metaphor, causing the car, if it's self-aware, to sacrifice itself. What is the ethical thing to program the car to do? Would it be better or worse if the car was conscious? And what would we make of a car that made the, quote, wrong decision?
[00:22:31.430]
Ned: Because the assumption here is, if the car is in fact conscious, then it understands morality. It's no longer making a determination based off of fixed rules in the system. It is now making a moral decision about what is right and what is wrong. That's actually way scarier than just having a non-conscious AI following the rules that we've set out for it. Yeah, and the reality is, if and when we get to a point where that car can make that decision, what is the car purchaser going to pick? And to what degree is the manufacturer then liable for the decisions that the car makes? And if the car is truly conscious, do we have to recognize it as a legal entity? And if it makes an improper moral decision, could you try it for manslaughter?
[00:23:30.260]
Chris: I don't have the answer to these questions.
[00:23:35.420]
Ned: No, I'm just like, these are the conundrums that you find yourself in when you start assigning consciousness and self-awareness to what is actually just a very advanced program.
[00:23:48.060]
Chris: The reality is, we're not going to develop a truly conscious AI for a couple of reasons. One, because it's technically impossible. But two, because as consumers, we don't want truly conscious AI. We want predictably excellent results. AI works, insofar as it does, because it treats the queries as puzzles to be solved or games to be won.
[00:24:15.980]
Ned: I do want to posit one thing here where we might want true AI, and that would be in the form of entertainment like pets, because we already have something like this, and it's the pets we have in our house today. You can put animals to work, but when you think about how most people enjoy pets today, whether it's a dog or a cat or, God help you, a turtle, they're not there to solve a problem. They're there as a companion. They're there for comfort. They're there for entertainment, but they're certainly not there to solve a problem. There is the potential that we would want to develop AI for consumers in that specific niche. But yes, in the more general realm of AI, where you are setting out to solve a problem, you don't want something that has a consciousness and can make these weird moralistic decisions. It needs to be predictable.
[00:25:17.370]
Chris: And even in the case that you're making, I would argue that a sufficiently advanced AI would be indistinguishable from magic, to steal a pretty famous phrase.
[00:25:28.040]
Ned: Yeah, I mean, people loved... Was it the Aibo? Was that the... Oh, yeah. And that thing was not particularly advanced, and that just speaks to the degree to which we can anthropomorphize anything.
[00:25:41.670]
Chris: Right. And we can have affection for things that are clearly not conscious. I'm thinking back on my history of cat and dog ownership and remembering a few... Are we still allowed to call them dim bulbs? The lights were on, the cat food got eaten, but nobody was home. Yeah. But I mean, that's what AI does. It evaluates options, discards the bad options until it decides it's found a good option, and then it executes. The way that AI has developed over time is, how fast can we do this? The fun example that people love to say is that we put a man on the moon with a computer that wasn't even as powerful as the Casio on your wrist, which at the time was true, even though it's an apples-to-oranges comparison. You can think the same exact way about AI. The computer that was Deep Blue, that beat Garry Kasparov in that rigged chess match, yes, it was rigged, there I said it, that computer is now massively overpowered by even the cheapest laptop. And if you download a chess program off the Internet, it's going to beat every human being who's on the planet right now. The difference in computing power and in the creativity of the algorithms that they use has advanced by leaps and bounds in the past 25 years.
[00:27:14.180]
Chris: It really is a marvel to behold. And if you want to see it actually happening, we have some interesting AI programs that exist where you can watch the decision-making process in action. One of the ones that we've talked about on this very show is DALL-E, the AI-based natural language processor that generates images. What you do is punch in a prompt, hit Enter, DALL-E thinks about it for a while and spits out nine ideas, which you can look at and select one of. And some of the stuff that DALL-E comes up with is bananas.
[00:27:51.330]
Ned: It's awesome.
[00:27:52.900]
Chris: You do it a hundred times, though, and you're going to come up with a piece of artwork or design or whatever you want to call it that works. And this is the computer just throwing things against the wall and seeing what sticks. But you will get a result that sticks. Multiply that by a billion options completed in less than a millisecond, and you have the computing power that backs most AI right now. But if you want to see it a slower way, another example is, in 2019, some researchers tried to get AI to invent a sport. The resulting game is called Speedgate, and I'm assuming the computer also invented the name, which is, quote, a fast-moving game in which players throw or kick a rugby training ball to teammates but can't run with or carry the ball. The objective is to move the ball through a gate in the center of the playing field. I won't spend too much time on it, but there is a video, and it does seem fun. But there is a trial-and-error process to this, because they just threw inputs at it and saw what stuck to the wall.
[00:29:05.140]
Chris: One of the initial outputs was a, quote, version of tennis: two players hitting a ball back and forth, but they're balancing on a tightrope that's being elevated in the air by two hot air balloons. Is that bananas?
[00:29:20.950]
Ned: Yes.
[00:29:21.810]
Chris: Yes. Would I also watch that? Definitely.
[00:29:26.570]
Ned: A hundred percent. I might watch that more than any other sporting thing that is on the televisions today.
[00:29:34.620]
Chris: I mean, if you need a flagship sports program for ESPN 8, the Ocho, I think this is it. Now, remember Deep Blue? We just talked about the computer that became famous for beating Garry Kasparov. After that success, do you know what happened to that computer?
[00:29:55.280]
Ned: It went on to play a lot more chess?
[00:29:58.790]
Chris: It went on to become an airline reservation management system server. That sounds about right. Handling passengers on an airline was just the next game. The same thing with LaMDA. Communicating as though it were a fellow human was just the next game. The game was completed, obviously, at least according to Blake. But that's it. That's what AI is, and that is all it ever will be.
[00:30:32.460]
Ned: The only way to win is not to play.
[00:30:40.910]
Chris: Are all of your references from 1980?
[00:30:50.450]
Ned: No. Some of them are from 1984.
[00:30:54.690]
Chris: Wait, the book? Yes.
[00:30:59.470]
Ned: I don't think WarGames was 1980. I think you're wrong about that.
[00:31:05.410]
Chris: No, it was in the '80s, but it wasn't 1980. '83.
[00:31:08.050]
Ned: I was close. I was going to say '83, and then I said '84 because I wanted to make the book joke.
[00:31:14.050]
Chris: Damn it.
[00:31:16.540]
Ned: Why doesn't the world line up with my jokes?