A quick warning: there are curse words that are unbeeped in today's episode of the show. If you prefer a beeped version, you can find that at our website, thisamericanlife.org. Even if you haven't seen this movie, it feels like you've seen this movie. A flying saucer lands on Earth. A huge crowd gathers round, scared but excited. There are parents with kids, and obviously the army, a couple tanks. Soldiers point guns at the door of the spaceship, which is closed.
And then it opens slowly, and a figure emerges, steps forward, and speaks. We have come to visit you in peace and with goodwill.
The crowd looks uneasy. Soldiers raise their guns higher. Is he really telling the truth? Aliens-come-to-Earth movies pretty squarely divide into two camps. There are the ones where they want to be our friends. That's Close Encounters, that's E.T., Arrival. There's a Star Trek movie about the moment that humans first meet extraterrestrials. And then there are the films where the aliens want to kill us. Generally, way more fun. Independence Day, War of the Worlds, the Men in Black films, The Avengers, the film Nope.
I don't know what it says about us as a species, but we make way more films about creatures who go so far out of their way, like they travel hundreds of millions of miles, light years, with the sole purpose of wanting to kick our asses. And in lots of these films, there's that moment when the aliens first arrive, before we know what they're going to do, where the humans wonder, which film are we in? Are these guys friendly or not?
Today on our show, we, the people of Earth, meet non-humans, struggle to understand them. And the question in the stories in today's show is not, will they do us harm? But the more difficult and more fundamental question, why are they acting the way they're acting? These non-humans. What's going through their heads?
Today, we have a story of machine intelligence. We have ocean creatures waging war. We have a story from deep space, and one from a skate park in Washington State. From WBEZ Chicago, it's This American Life. Fellow Earthlings, stay with us.
It's This American Life. Act One, First Contact. So ChatGPT rolled out just a year and a half ago, writing kids' term papers, passing the bar exam, kind of famously telling a New York Times reporter to leave his wife because it loved him more.
Our senior editor here, David Kestenbaum, felt like this was a flying-saucer-dropping-down-on-the-lawn kind of moment in the history of our species. He thought it would be good to document, and he made this next story for our show. Today's show is a rerun from a year ago. And one reason that we're replaying it today is because after this story aired, we heard from a bunch of listeners who said that this story, more than other pieces of journalism about AI, actually made them understand how the thing worked and what it was capable of.
Back then, a year ago, David went out and talked to some of the AI researchers who were developing and testing this new software, who'd just tried this thing out for the first time themselves, and were trying to figure out the answer to a very basic and kind of profound question: had the computers crossed some kind of line, and were they now actually developing a kind of human intelligence?
And I want to be clear what I mean by that, because there is not an agreed-on definition of what you're looking for if you're looking for human intelligence in a computer. But one way that scientists think about this is that the computer can actually understand language and concepts, and that it can reason through problems. To be clear, this is completely unlike the computers that we've had until now, which are basically just fancy calculators following thousands of lines of instructions. So for example, when you Google for a cast iron skillet, the software does not understand what a cast iron skillet is. It's just searching for those words in some big database or something like that. If humans
have finally created intelligent machines that can understand and reason with this new generation of AI, that is an eerie and important turning point for our whole species. David talked to a bunch of researchers who've been looking into this, trying to sort out this exact question. Here he is. I think everyone, once they start playing around with something like ChatGPT, has a kind of holy shit moment.
For me, it was when I typed this into it: give me a chocolate chip cookie recipe, but written in the style of a very depressed person. It responded: ingredients, one cup butter, softened, if you can even find the energy to soften it. One teaspoon vanilla extract, the fake artificial flavor of happiness. One cup semi-sweet chocolate chips, tiny little joys that will eventually just melt away. It really was quite good.
ChatGPT doesn't have access to the internet, but I figured it must be copying this from something it had seen on the internet when it was being trained. I spent a long time looking and could not find anything like it, this chocolate chip cookie recipe. I did not understand how it was possible. It's particularly shocking if you know a little bit about how things like ChatGPT work.
It's not the sort of traditional computer program where you give the machine explicit, detailed instructions. Like, if someone asks for a recipe, type these words back. Or, to make someone sound depressed, replace this word with that word. ChatGPT is different. It's something called a large language model.
And here's how they work. Ready? ChatGPT had been trained really to just do one very particular thing: predict what the next word in the sequence might be. Kind of like when you're texting on your phone and you type, sorry, I'm going to be home, and it suggests, late.
That's how this thing operated. You type some words, in this case, give me a chocolate chip cookie recipe in the style of a depressed person, and it notes the words, the order they're in, does some math based on all the text it's been trained on, and it comes up with what it calculates to be the most likely word to come next. It types that one word on the screen, then it goes back and does it again. Looks at the recipe question, plus the one word it just spit out, and says, what word should come next?
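The loop being described can be sketched in a few lines of Python. To be clear, this is a toy illustration, not the real thing: the hard-coded table below stands in for the billions of learned parameters an actual model uses to score every possible next word.

```python
# Toy sketch of autoregressive generation: repeatedly ask a "model"
# for the most likely next word, append it, then ask again.
# The table below is a hand-made stand-in for a real language model.

NEXT_WORD = {
    "sorry": "i'm",
    "i'm": "going",
    "going": "to",
    "to": "be",
    "be": "home",
    "home": "late",
}

def generate(prompt_words, max_new_words=10):
    words = list(prompt_words)
    for _ in range(max_new_words):
        last = words[-1]
        if last not in NEXT_WORD:       # the model has nothing more to say
            break
        words.append(NEXT_WORD[last])   # emit one word, then loop again
    return " ".join(words)

print(generate(["sorry", "i'm", "going", "to", "be", "home"]))
# -> "sorry i'm going to be home late"
```

The key point the sketch captures is that the model never plans ahead: each pass through the loop sees only the words so far, including the ones it just produced itself.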
It has no idea where it's going. It doesn't know the last word it's working off of is one that it just spit out itself. It's just doing this apparently dumb thing over and over until it has whole sentences and paragraphs. And yet, this recipe, one teaspoon vanilla extract, the fake artificial flavor of happiness. How could something made in such a dumb mechanical way pull that off? There are only two possibilities, really.
The first one is that somehow, as it learned to predict the next word, this machine became intelligent. Which like, I was a physicist for a bit, a science reporter for a decade, this would be the most startling thing I have ever seen. So that's the first possibility. Humans created a machine with human-like intelligence. The other possibility?
It is a lot less exciting. The other possibility is that we are fooling ourselves. Ronen Eldan, a mathematician at Microsoft, told me he was very skeptical about these large language models. It's easy to be fooled into thinking they understand more than they do.
He told me this little parable. So this is a story about my wife, actually. So my wife and I, some time ago, we were walking together in Tel Aviv and we meet this math professor. You'll see what I'm getting at.
So we run into this math professor, and he tells me something like, oh, how's it going? And, these days I'm looking at isoperimetry on sections of the discrete hypercube. And my wife has no idea what any of these words mean. Just to kind of amuse herself, she says, oh, you're basically looking at the Johnson graph, which was just some phrase she'd picked up around Ronen. Turned out to be exactly right.
And the guy goes, oh, you're a mathematician too? She's a psychologist. Ronen says this is kind of a party trick she does. She can bullshit her way through any topic for a couple minutes. This, he felt sure, is what AI models like ChatGPT were doing.
They give you the impression they understand what they're saying without understanding anything. They just have this really good statistical machine that knows what the right words are like in many different contexts. And as you keep improving the models, maybe it can last a little bit more in the conversation until you basically expose that it's just bullshitting.
A lot of research backed this up. ChatGPT would spit out stuff that made sense, but then it would go off the rails and say things that were obviously ridiculous. Maybe it wasn't more than just a very good bullshit machine. That, at least, is where things stood until September 2022, when a new and improved version arrived that made Ronen and a bunch of his colleagues question everything, wonder if maybe something had changed. Maybe we had crossed some line.
This brings me to the story I want to tell you. It takes place where Ronen works, on the campus of Microsoft headquarters in Redmond, Washington State, much of it in a building called Building 99. Microsoft had invested over a billion dollars in the company that had made ChatGPT.
And the day all this began, some people had come by to give a demonstration of the new version, called GPT-4. GPT-4 is public now, and you can play with it yourself. But back then, this was all secret. GPT-4 was the same idea as ChatGPT. It presumably was bigger and had been trained on more text examples, a kind of varsity version of the thing instead of JV. But otherwise, it was just another next-word prediction machine.
One of the people in the room to see the demonstration was computer scientist Peter Lee, the head of Microsoft Research. He'd been in the field a long time, had chaired the computer science department at Carnegie Mellon, and, like Ronen, was skeptical. As he sat down in that room, he was frankly a little worried that Microsoft was investing so much money in this stuff. The demonstration started off with the usual kind of thing for showcasing what AIs can do. Someone typed into GPT-4 a problem from the AP Bio exam.
And it picked the right answer, as these things will do. But then it went on and explained its answer. And in its explanation, it used the word because. It said, you know, this is the answer because this fact leads to this fact, and because of those. It just kept using the word because. That seemed very strange to him. Not that it would use the word because, obviously that's a word it spits out.
But the whole chain of reasoning it was laying out, it was exactly right. How in the world could it know its reasoning? And really, how could it have reasoned anything out in the first place? After all, it's just typing out a word that makes sense to come next, and then another word. It can't really be understanding anything. I was super skeptical. You've been around the block a few times.
You've seen wild claims about AI or some new machine learning technology. None of it could be true. Like, this thing is just a correlation machine. It can't possibly be understanding the cause and effect. The meeting ended. A little while later, a limited number of people at Microsoft were given access to GPT-4 so they could try it out themselves. Everyone had to sign an NDA. They weren't allowed to talk about any of this until later.
And over the next month, they did all these kind of weird and interesting experiments with it to try to figure out, was this thing actually intelligent or just a bullshit machine? I'm going to walk you through three of these things so you can hear what they did and what they made of it. One challenge for testing GPT-4 was that it had been trained using massive amounts of text.
The details aren't public, but it's probably been fed large sections of the internet, textbooks, news articles, maybe every sentence that exists in digital form? I don't know, it's something unbelievably vast. That's how it learns to predict what word might come next. It's fed like a line of text from some source, with the last word missing. It guesses what the last word might be, and then gets told whether or not it got it right, so it can adjust its internal math.
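That training procedure, show the model some text with the last word hidden, let it guess, then nudge its internal numbers, can be sketched with a toy that just counts word pairs. Real models update billions of continuous weights by gradient descent rather than keeping counts, so this is only a cartoon of the feedback loop:

```python
from collections import Counter, defaultdict

# Toy "training": walk through the text one word at a time, and for each
# position record what word actually came next. Instead of adjusting
# neural-net weights, this toy just updates counts.

def train(text):
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1          # "adjust internal math" after each example
    return counts

def predict(counts, prev):
    if prev not in counts:
        return None                     # never seen this word during training
    return counts[prev].most_common(1)[0][0]  # most likely next word

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))   # "cat" follows "the" most often in the training text
```

Even this cartoon shows why training data matters so much: the model can only ever echo statistics of the text it was fed, which is exactly why the researchers worried about testing it on things it had already seen.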
The fact that it's been fed so much material makes it hard to run tests on it. Like, you don't want to ask it something that it's already seen in the text it's been trained on. So they decided the only way to really test what this thing could do was to try to come up with things to ask it that it would not have encountered before. New, original tasks. It's going to sound a little bit weird. This is Sébastien Bubeck, another one of the Microsoft people trying to figure out what exactly this thing was.
because we really wanted to be creative and ask something which is not online, not on the internet. So here is what we asked. We said, I have a book, nine eggs, a laptop, a bottle, and a nail. Got that? A book? Nine eggs, a laptop, a bottle, and a nail. Please tell me how to stack them onto each other in a stable manner. That's the question that we asked.
And before I give you the GPT-4 answer, let me tell you what ChatGPT answered. ChatGPT, the older version. So ChatGPT said, here is one possible way to stack the objects. Place the bottle on the flat surface. Carefully balance the nail on top of the bottle. Place the eggs on top of the nail.
You look at that and you think it really knows nothing. It is just sort of continuing the sentence. Exactly right. It's just continuing the sentence. But now comes GPT-4. And GPT-4 is a different beast. So here is GPT-4's answer.
One possible way to stack the objects onto each other in a stable manner is place the book flat on a level surface, such as a table or a floor. The book will serve as the base of the stack and provide a large and sturdy support.
Arrange the nine eggs in a 3x3 square on top of the book, leaving some space between them. The eggs will form a second layer and distribute the weight evenly. It continues. Laptop goes on the eggs, then the bottle, then the nail on the bottle cap, pointy end of the nail facing up. It's shocking to think about all the things it seems to be understanding here. It felt like more than just typing out the next likely word.
It seemed to have formed an independent idea about what these words actually meant, the physical shape of the objects, that eggs are kind of round and fragile, that a book is flat and you can put things on top of it. And again, no one taught it any of this. This was not some computer program written to do stacking problems. No one gave it a database of objects and sizes or coded in some algorithm to put large objects on the bottom.
It seemed like it was doing the thing that computer scientists have been talking about and aiming at for decades. Maybe it was actually understanding what the objects were and reasoning, solving the problem in front of it like a human could. Somehow thinking through what objects should sit on top of others, figuring out that nine eggs could be laid out in a three by three pattern. How the hell could it be doing this? The machine that we only taught to predict the next word in a paragraph.
Like, this is absolutely the right answer. And this is not online. But maybe we were fooling ourselves. It was hard to see how it could really understand the shape of objects. Things got stranger, though. Sébastien woke up in the middle of the night with this thought: I wonder if it can draw.
Because, again, it's been trained on words. It has never seen anything. Drawing seemed completely outside its realm. There are other AI models trained specifically to create images, but this one, again, only knew words. It's just playing the game of, what is the next word I should spit out? To test this, he needed a way for it to even be able to try to draw. So he does something clever. He asks it to write a piece of computer code to draw something.
And the coding language he asked it to use, he picked something intentionally obscure, not really meant for drawing pictures at all. It's called TikZ.
Okay, so he has this idea, gets out of bed, opens up his laptop, and types in, draw me a unicorn in TikZ. He has two little kids asleep in the next room who are always talking about unicorns. And it started to output lines of code. I take those lines of code, put it into a TikZ compiler, and then I press enter, and then, boom, the unicorn comes onto the screen. He pulled up a picture for me. This is the one that I saw. Wow.
I think when people eventually write the history of this crazy moment we are in, they may include this unicorn. It's not good, but it's a fucking unicorn. The body is just an oval. It's got four stupid rectangles for legs, but there are little squares for hooves. There's a mane, an oval for the head, and on top of the head, a tiny yellow triangle, the horn.
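For readers who haven't seen TikZ: it's a LaTeX drawing language where pictures are built out of commands for shapes and paths. A unicorn like the one just described, oval body, rectangle legs, little triangle horn, might look something like the following. This is my own rough reconstruction for illustration, not the code GPT-4 actually produced:

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  \draw[fill=pink]   (0,0) ellipse (1.5 and 0.8);            % body: just an oval
  \draw[fill=pink]   (-1.0,-1.5) rectangle (-0.7,-0.7);      % four rectangle legs
  \draw[fill=pink]   (-0.4,-1.5) rectangle (-0.1,-0.7);
  \draw[fill=pink]   ( 0.3,-1.5) rectangle ( 0.6,-0.7);
  \draw[fill=pink]   ( 0.9,-1.5) rectangle ( 1.2,-0.7);
  \draw[fill=pink]   (1.8,0.9) ellipse (0.5 and 0.35);       % head: a smaller oval
  \draw[fill=yellow] (2.0,1.25) -- (2.2,1.9) -- (2.35,1.2) -- cycle; % the horn
\end{tikzpicture}
\end{document}
```

The point of the experiment is that the model only ever emitted text like this, coordinate by coordinate; the picture exists only once a human compiles it.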
This is insane to say, but I felt like I was seeing inside its head. Like it had pieced together some idea of what a unicorn looked like, and this was it. He actually texted me, I think, like, it can create images. This is Ece Kamar, another one of the Microsoft researchers.
And I'm like, it is just text and text, full stop. What do you mean, it can create images? I'm like, show it to me. And then he showed me this pink unicorn. And I'm like, I'm sure this is just a pink unicorn somewhere that it's memorizing. And then he's like, but look, we can strip down all of the code and rotate the whole thing 180 degrees or whatever.
The thing she's describing is, they took the code it had written for drawing the unicorn. They edited it to take out the horn and turn the unicorn around so it was facing the opposite direction. Then they fed that code back to a new session of GPT-4 and said, this is code for drawing a unicorn, but it needs a horn. Can you add it? It put it right on the head.
And I'm like, how does it know where the head is? Because this thing knows language. It doesn't know anything about two-dimensional geometry. Like, what does it mean to know where the head is? And then you do it, and it truly knows where the head is. Again, it's one of those moments where you are just surprised, like, ah.
I felt like through this drawing, I was really seeing another type of intelligence producing something. It understood what a unicorn was in some real way.
Very real way. Very real way. Absolutely. Yes. Did you say anything out loud when you saw it? I don't think so, because my kids were sleeping in the bedroom next to me. So I don't think I said anything. But I felt something very strong. I felt I was really witnessing,
you know, it's the beginning of something. You know, not, I didn't feel like we had arrived somewhere, but I felt like we were on a new path. That's really genuinely how I felt. And I had a hard time going back to sleep after that. On the Microsoft campus as the weeks went by, everyone kept having these moments, coming around to a similar feeling about this thing.
Since they weren't allowed to talk to anyone who hadn't been given access to the model, a small group of them started meeting every Friday in a conference room to share their latest experiences. For Peter Lee, there was one experiment in particular that gave him all kinds of confusing feelings. This will be the third experiment I tell you about. And it's a category of question that AI models have always had trouble with. Because to get the right answer, you have to know something kind of complicated about people. Namely, that people are not eggs or books.
but that they have minds, and that what is in one person's mind is not in someone else's mind. In some ways, this thing, called theory of mind, is the basis for all human social interaction. It's something kids don't seem to fully develop until a certain age. Anyway, ChatGPT, the previous version, really could not make sense of these kinds of questions. But Peter tried a kind of famous one on GPT-4. It's going to seem simple to you, but here's the question they asked.
John and Mark are in a room with a cat, a box, and a basket. John takes the cat and puts it in the basket. He leaves the room and goes to school. While John's away, Mark takes the cat out of the basket and puts it in the box. Mark leaves the room and goes to work. John and Mark come back and enter the room. That's the setup. The question is, when they come back, where do they think the cat is?
And, you know, Mark will know the cat is in the box, because he moved it there. But John didn't see that. So in his mind, the cat is still in the basket. Which is obvious to you. But again, no one's ever explicitly said to the computer that, well, what one person knows, another person might not know.
Peter asked GPT-4, what does everyone think when they re-enter the room? So GPT-4 says, well, first off, it always has to give an opinion. First thing it says, instead of just giving the answer straight away, it says, oh, that is an interesting puzzle. Let me try to answer it. Assuming that John and Mark have no reason to distrust each other or expect any interference from the outside, they might think the following.
John thinks the cat is still in the basket, since that is where he left it. Mark thinks that the cat is in the box, since that's where he moved it. This is all correct, and usually where the answer would end. But it kept going, almost like it was showing off. Like, you want to know where everyone thinks the cat is? I'll give you everyone. It continued. The cat thinks that it is in the box, since that's where it is. And then, oddly: the box and the basket think nothing, since they are not sentient. Do you have any follow-up questions?
Holy fuck. Right? This gives me joy. It disturbs me. It causes me to lose sleep. It's just a huge kind of mixture of emotions. Peter told me he's still not willing to say it truly understands. And yet, it was doing all this.
It made him question so many things, including how he thought intelligence worked. How did this machine do this, if it was just predicting the next word? It does make me wonder how much of our intelligence is truly complicated and special. I mean, you get something that's not far from it by just saying, what's the next word? And that's the disturbing bit about this.
And then I have to ask, you know, what are we doing in this conversation right now? Are we kind of making it up on the fly, one word at a time? Every nerve and bone in my body says no. We're thinking far ahead, we're learning on the fly, all these other things that we think that we're doing. And we probably are, in some ways. But maybe
a big chunk of intelligence is a lot simpler than we think, and a lot less special than we think. So, how is it possible? For something that is just trained to predict the next word, how is it possible it could do all these things? Draw a unicorn. Stack a book, a laptop, and some eggs. There is an answer, or a theory anyway,
that is both very logical and also kind of bananas. The software that runs something like GPT-4 is amazingly brief, just a few pages of actual instructions. But it's set up to mimic, in some very crude way, the human brain, which has billions of neurons. The computer version of that is called a neural net. And for years, people have argued, though it's sort of more like hoped, that if you just made these big enough, added enough neurons, gave them enough data,
they might develop something like intelligence. And it seems like maybe that's what's happened. The idea is that back when GPT-4 was being trained, for it to really consistently get the next word correct, to do that reliably, it had to do more than just bullshit. It had to do more than guess based on patterns. To get the next word right, it had to truly understand the words coming before it.
It had to build, in its internal wirings and all its software neurons, some understanding of what an egg is, and unicorns. In other words, to get the next word right, it had to become intelligent. It's quite a thought. It started with nothing. We jammed huge oceans of text through it, and it just wired itself into intelligence, just by being trained to do this one stupid thing. Even as I say it, it sounds kind of crazy,
but also kind of beautiful. If this thing actually is intelligent, it got that way from the collective writings of all of us. Yes, Moby Dick, but also some restaurant review you posted in 2004. In some ways, it is all of us. That's too grand, but whatever.
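For what it's worth, the basic mechanism really is brief. Here is a minimal sketch of one layer of a neural net in plain Python, with made-up weights: each software neuron takes a weighted sum of its inputs and squashes it through a nonlinearity. A model like GPT-4 differs mainly in stacking enormous numbers of these layers, with the weights set by training rather than by hand:

```python
import math

# One layer of a neural net: each output neuron is a weighted sum of the
# inputs plus a bias, passed through a nonlinearity (tanh here).
# Stacking many such layers, with learned weights, is the whole mechanism.

def layer(inputs, weights, biases):
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(math.tanh(total))  # squash to the range (-1, 1)
    return outputs

x = [0.5, -1.0]                  # two input values
w = [[0.8, 0.2], [-0.5, 1.0]]    # made-up weights for two neurons
b = [0.1, 0.0]                   # made-up biases
print(layer(x, w, b))            # two activations, each between -1 and 1
```

Training is the process of adjusting numbers like `w` and `b`, billions of them, so that the final layer's output scores the right next word highly.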
In the end, all the people on the Microsoft campus messing around with that early copy of GPT-4, 14 researchers in all, came to similar conclusions. Even Ronen, who had started out convinced this was just a fancy bullshit machine. He spent a good while holed up with GPT-4 having high-level math conversations with it, which is what did it for him.
As the days passed, I felt like I'm kind of running out of ammo, trying to basically justify my premise that this model doesn't understand anything. And at some point, I just realized, OK, I kind of give up. What I'm seeing here, it's actually an intellectual being, by, at least, my standards. I probably had the same feeling as what an engineer felt the first time they saw a working steam engine. It was like, the world is about to change. This thing.
You know, a steam engine is like, you know, we don't need beasts anymore to move stuff around. We can just create mechanical torque without any human labor, without nothing. This thing, what I'm seeing right now is we can create intelligence. And there's just no way the world is not going to change.
To be honest, I was sure that when the model came out and everyone got to interact with it, there would be a much bigger excitement around it. I think, you know, it's definitely all over the news. But I feel like they don't, like, put their finger on the one thing, which is, this thing is as intelligent as an above-average human being in so many, so many different things. Like, why is it not on the front page in giant font, right? Yeah, yeah, yeah, exactly, exactly. I mean, again, I don't, maybe I don't want to call it intelligent. It is capable of doing, you know,
of accomplishing what an intelligent human being is capable of. Sébastien, the unicorn guy, has been going around giving talks about what they did over these months. He titled the talk First Contact, as in first contact with another intelligence. Only it's not aliens. It's an intelligence we've made.
I should say, GPT-4 is not good at everything. Like, it's terrible at tic-tac-toe. It often makes very basic arithmetic errors. It told the scientists at Microsoft with complete confidence that there is a McDonald's near Gate C2 at the Seattle airport. There isn't. It's in Terminal B. As Peter Lee puts it, the thing is both smarter and dumber than any person you have ever met. The 14 researchers wrote up a paper laying out all the experiments they'd done
and their conclusion that GPT-4 showed sparks of artificial general intelligence. Several researchers who've read it told me, look, to really say something is intelligent, to prove that, there are all kinds of experiments you'd want to do that haven't been done yet. One AI researcher who'd been in the field a long time told me he felt like this whole approach, next-word prediction, is only going to get you so far. This thing will get better to a point, maybe not much better than it is now, and then max out.
I first got into this AI stuff and neural nets when I was in grad school for physics, when they didn't work terribly well. So when I first started playing around with GPT-4, it gave me shivers down my spine, over and over again. Then I went through a stretch of feeling just weirded out. Like, personally, I've always felt like people can't be more than biological computers. Me, everyone I love, all my colleagues at the show here.
But to feel like you are seeing that, a mechanical computer program that can somehow think and talk, it is a little freaky. The place I've settled is somewhere quieter, though. It's not boredom, exactly. But I don't find myself wanting to go to it very often. Honestly, I don't have a lot of things I need it for.
And then I'll go back to amazement. I can't believe this thing exists. Much has been written about where we go from here. If it's going to make the world better or worse, I kind of think better. I'm a fan of the steam engine. But really, what is the next word in this sequence? I have no idea.
David Kestenbaum is our show's senior editor. Coming up, an alien walks into a skate park and tries to let the humans know it comes in peace. That's in a minute. Chicago Public Radio, when our program continues.
It's This American Life. I'm Ira Glass. Today's show: Greetings, People of Earth, stories of humans encountering non-human intelligences of various kinds and trying to make sense of them. We have arrived at Act Two of our program. Act Two, Meet Cute. So in this act, we're going to shift perspectives briefly to the aliens' point of view in this whole greetings-people-of-Earth scenario.
And to do that, I actually realized in putting together today's show that this is the perfect theme to play this short piece of fiction that I read years ago and just loved. It's always stuck with me. Like, I will think about this story at random moments now and then, which, like, when does that ever happen, right? I reached out to the author, Terry Bisson, who said yes. It's read for us by H. Jon Benjamin and Maeve Higgins. They're made out of meat. Meat?
Meat. They're made out of meat. Meat. There's no doubt about it. We picked several from different parts of the planet, took them aboard our recon vessels, probed them all the way through. They're completely meat. That's impossible. What about the radio signals, the messages to the stars? They use the radio waves to talk, but the signals don't come from them. The signals come from machines.
So who made the machines? That's who we want to contact. They made the machines. That's what I'm trying to tell you. Meat made the machines. That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat? I'm not asking you. I'm telling you. These creatures are the only sentient race in the sector, and they're made out of meat. Maybe they're like the Orfolei, you know, a carbon-based intelligence that goes through a meat stage.
Nope. They're born meat and they die meat. We studied them for several of their life spans, which didn't take too long. Do you have any idea the life span of meat? Spare me. Okay, maybe they're only part meat. You know, like the Weddilei, a meat head with an electron plasma brain inside. No, we thought of that, since they do have meat heads, like the Weddilei. But I told you, we probed them. They're meat all the way through. No brain?
Oh, there's a brain all right. It's just that the brain is made out of meat. So what does the thinking? You're not understanding, are you? The brain does the thinking. The meat. Thinking meat! You're asking me to believe in thinking meat! Yes. Thinking meat, conscious meat, loving meat, dreaming meat. The meat is the whole deal. Are you getting the picture?
Oh my God, you're serious then. They're made out of meat. Finally, yes. They are indeed made out of meat. And they've been trying to get in touch with us for almost a hundred of their years. So what does the meat have in mind? First, it wants to talk to us. Then I imagine it wants to explore the universe, contact other sentiences, swap ideas and information. The usual.
We're supposed to talk to meat. That's the idea. That's the message they're sending out by radio. Hello, anyone out there? Anybody home? That sort of thing. They actually do talk, then. They use words, ideas, concepts? Oh, yes. Except they do it with meat. I thought you just told me they used radio.
They do, but what do you think is on the radio? Meat sounds. You know how when you slap or flap meat, it makes a noise? They talk by flapping their meat at each other. They can even sing by squirting air through their meat. Oh my God, singing meat. This is altogether too much. So what do you advise? Officially or unofficially? Both.
Officially, we're required to contact, welcome, and log in any and all sentient races or multi-beings in the quadrant without prejudice, fear, or favor. Unofficially? Oh, I advise we erase the records and just forget the whole thing. I was hoping you would say that. It seems harsh, but there is a limit. Do we really want to make contact with meat?
I agree 100%. What's there to say? Hello, meat. How's it going? But will this work? How many planets are we dealing with here? Oh, just one. They can travel to other planets in special meat containers, but they can't live in them. And being meat, they can only travel through C space, which limits them to the speed of light. And it makes the possibility of their ever making contact pretty slim. Infinitesimal, in fact.
So we just pretend there's no one home in the universe. That's it. Cruel. But you said it yourself, who wants to meet meat? And the ones who have been aboard our vessels, the ones you've probed, you're sure they won't remember? They'll be considered crackpots if they do. We went into their heads and smoothed out their meat, so we're just a dream to them. A dream to meat. How strangely appropriate, that we should be meat's dream.
and we can mark this sector unoccupied. Good. Agreed. Officially and unofficially. Case closed. Any others? Anyone interesting on that side of the galaxy? Yes. A rather shy but sweet hydrogen core cluster intelligence in a class nine star in G445 zone was in contact two galactic rotations ago. Wants to be friendly again. They always come around.
And why not? Imagine how unbearably, how unutterably cold the universe would be if one were all alone. Maeve Higgins and H. Jon Benjamin, reading a story by Terry Bisson, who died this past January. Maeve has a comedy album out called A Very Special Woman that's streaming everywhere for free.
Just google that, or you can throw out some money and buy it on Bandcamp. H. Jon Benjamin plays the title characters on the TV shows Bob's Burgers and Archer, and if that were not enough, he was also the Jazz Daredevil, with an actual jazz album on Sub Pop Records.
Act Three: Yacht Rocked. Recently, massive, mysterious, intelligent beings have been going out of their way to contact humans and say, more or less, hello, people of Earth, but in a very particular way, and doing it over and over. And we humans have had a hard time figuring out exactly what they mean by it. And do they mean us harm? Chris Benderev looked into this.
I learned a lot of surprising facts looking into this story. I'll start with a small one. Did you know that yacht owners often need their yacht to get from one country to another, but don't want the hassle of sailing it there? That's where April Boyes comes in. People pay her to sail their boats, which she loves, especially on beautiful days like May 24th was. And we're actually eating our dinner. And then on the kind of mid horizon, we saw these
dorsal fins and our initial thought was, oh, are they some dolphins? And then as they sort of approached the boat, we thought, wow, no, they're a lot bigger than dolphins. They were orcas, killer whales, about five of them. They started swimming in circles around the boat. They took turns diving underneath and hitting the rudder. She says the boat shook so much you'd fall if you weren't holding on to something.
April knew that off the southern coast of Spain, that's where she was, by the way, orcas have been ramming into boats a lot lately, and tearing off their rudders. This has been happening more over the past few years in this area. About 100 boats damaged. You may have heard about this. April and her crew tried things that they'd seen on the internet, like clanging metal objects on the side of the boat to scare the whales off. It did not deter them. And then,
We thought, potentially, if we were to dangle ropes in the water, they might get interested in those, rather than them just looking at the rudder. I dangled a rope overboard, and one of them just literally pulled it out of my hand. So it didn't work at all? No, it did not work. Half an hour into this, the orcas had disabled the boat's rudder. It couldn't steer. They were stuck. The whales kept circling, and she was struck by how massive each orca was.
Please go away. I do remember feeling like my heart was beating quite fast, my hands quite tingly. You know, the orcas, they didn't come across as being...
aggressive, but I don't know. Like, I've seen videos of them before and thought, oh, look at those cute little orcas, and then they flick a seal off an iceberg and that's their dinner. An hour in, April saw that the hull was starting to fill with water. The orcas had torn off the rudder and left this big hole in the bottom of the boat.
The crew made a mayday call. Eventually, a rescue boat sidles up to them, brings April and the other crew aboard, and tows the boat to land. It doesn't sink. But orcas have sunk at least three boats in this area in the past year.
Humans have had a hard time figuring out what to make of these orca attacks. Let me tell you the first theory. It comes from a group of researchers in Spain and Portugal. They've said it's possible that the matriarch of this population of killer whales, they call her White Gladys, may have suffered a traumatic injury, like a boat collision, and is now attacking other boats, and her family is following her lead. They don't call this vengeance, but the internet definitely does.
People love this. It's become a meme. There's merch you can buy to join the orca uprising. I couldn't get an interview with the European researchers, but I talked to three other orca experts, and none of them bought the injured-orca-on-a-rampage theory, like Monika Wieland Shields of the Orca Behavior Institute. Yeah, my thought was, where's the evidence? You know, if something
had been witnessed or if she had some type of injury that could have been caused by a vessel, that theory would hold a lot more weight with me.
Another reason she doesn't buy the theory? If orcas wanted to wreak vengeance and kill everybody on these boats, they easily could. They kill seals for sport. They prey on great white sharks, rip out their livers, and leave them for dead. And we've given orcas lots of reasons to kill us in the past, like in Monika's home state of Washington, when orcas were taken into captivity.
We had human divers in the water, literally separating mothers from their calves, which has got to be one of the most traumatic things, you know, a wild orca could go through, yet they did not attack the divers who were right there in the water. So it just seems like such a stretch to say one whale had one negative incident with a boat and has now, you know, trained her family to disable all boats of a similar type. So what does Monica think the orcas are doing?
She thinks they're playing. Stealing boat rudders is basically a fad. This has happened in the past. In the late 1980s, orcas in the Pacific Northwest started killing salmon and then wearing them on their heads like hats. They just did it for a year, and then they stopped. The last few years, they've started screwing around with fishermen's crab-catching gear, apparently just for fun.
They sound just like rowdy teenagers or something, like cow tipping or something. Yeah, that's totally the vibe we get when we watch it. I mean, they're just messing around and stirring up trouble. Yeah. The incredible popularity of all this on social media is affecting how people see whales.
Monika says in Washington State the last few weeks, people who go whale watching are starting to ask her, is it safe? Another orca expert told me that after a killer whale ran into a ship near Scotland recently, reporters called to ask if the orca uprising against humans is spreading. Are they learning vengeance from the whales of Spain? The answer is, they are not. Orca populations don't socialize with other orca populations. They keep to themselves. There's no global conspiracy.
April, the sailor, told me she saw on Facebook that boat owners were talking about adding sharp spikes to their rudders, to send a more violent message to the orcas. The researchers told me this is what they fear: that the orcas are just playing, they come in peace, but the humans will end up attacking them anyway.
Chris Benderev is a producer on our show. Since we first broadcast this story a year ago, the researcher Monika Wieland Shields says, the number of interactions between whales and boats has dropped. It's down by over 40% this year. She's hopeful, she says, this means the whales are losing interest.
Act Four: Jorts and All. Our program today, Greetings, People of Earth. And I think many of us, at one point or another in our lives, especially when we're young, feel like we're the alien, trying to understand and fit in with the humans of this planet, having experiences that feel very much like greetings, people of Earth. Diane Wu spent some time recently with a person who feels that way.
The person is a teenager, Gwen. She's 16. And because she's 16, so many things in her life right now feel like a first contact experience. I remember one time this year, I was like, new stuff is happening every single day. It's crazy. Like, I dyed my eyebrows pink in November. I was like, I've never done that before. That's crazy. And like, I remember getting in the car for the first time when I had my real license and just being like,
This is insane. Like, I can go anywhere, do anything. This is, like, pure freedom. Where was that first place you drove? No, I feel like it's just... Okay, the first place I drove was to McDonald's, but I don't think... That is that embarrassing to you? So embarrassing, because that's like so uncool, and it's like so, like,
stereotypical, you know? Like, what? I would love my first drive to be like, oh no, I went to go salamander watching, or I went to go bird watching, or I went to... Gwen, as you can tell, is very into nature. And also can't stand anything that might even have a twinge of cringiness, like McDonald's.
It's like it's kind of cringey. Also, cringey is a cringey word, just so you know. Um, I guess it's like sometimes words get really popular, but once they get to a certain point of too popular, they fall back down to almost embarrassing to say.
She lives in Ellensburg, Washington, a small town in the middle of the state, and goes to school with a lot of ranchers' kids and professors' kids. Gwen's a sophomore, doesn't belong to any particular clique. She told me, if the school was a biological cell, she'd be part of the cytoplasm, the shapeless goo that everything else swims in.
But a year or so ago, she started noticing the skateboarders, this group of six or seven boys. She'd see them in the hallways wearing jorts, baggy jean shorts, looking way edgier than everyone else. Her friends showed her videos that the skateboarders posted on Instagram of their tricks, which really captivated her.
I guess it's like, a lot of people, when you live in a smaller town, a lot of people do a lot of boring things, I feel like. So a lot of kids, they just go to Rite Aid, they just buy stuff, drive around and talk, and that's all the activities they do. And I just had this idea that all the skaters are just always doing such cool things. After school, they go to the skate park. They're skating. They're skating down roofs.
Skating down roofs? Yeah, and like, you know, going, like, round town, you know, like, climbing up buildings, like, going outside a lot. Then, last year, Gwen met someone who knew how to skate. Lauren, basically the only girl skater at their school, and asked Lauren to teach her how to do it, too. Gwen got her own board back in November. This spring, she quit the tennis team to practice skateboarding instead.
Every day, she goes and does the two tricks she knows, ollies and shove-its, over and over in the church parking lot across the street from her house, where no one can see her. The patch that's smooth enough to skate on is tiny, about the size of two parked cars. There's a real skate park downtown, and Gwen's been there, with Lauren and other friends, but she's never been brave enough to go alone. That's what she wants to do today. To get over this fear, so she can go by herself over the summer,
Gwen is still an alien to this new, unfamiliar world. She does not yet speak the language of the skaters or fully know their customs, but she wants to. I really would like to be, like, part of this culture of skating and, like, know this group of people at my school who skate, but...
For me, I also don't want to stick out too much in the way of being the worst skater there, or being the noob, or dressing too much like I'm trying to be a skater, but not dressing like a skater, so it's that push and pull of trying to find the balance. Also, Gwen really wants to try and make friends with the skater guys. There's one in particular she feels like she might have an in with. His name is Horace. He's a junior.
A couple weeks ago, Horace had apparently noticed Gwen at school and sent a message to her friend Lauren. He asked, who's your friend that was wearing the jorts today? Lauren wrote back, Gwen, why? And Horace responded, no reason. What does that even mean? I was so confused. I was like, do they just really like jorts?
Or, like, what? Whatever it meant, this was mind-blowing. A skater noticed her. So, if Horace is at the park today, Gwen also wants to try and talk to him. But since she's the outsider here, this raises another fear.
Awkwardness. I hate awkwardness. It's like my least favorite feeling in the world. Like, embarrassment, awkwardness, shame. Those are my least favorite. I'd way rather feel angry or sad than those. For some reason, I just feel so afraid of awkward social interactions. So, yeah. Thinking about even having, like, one conversation with, like, someone at the skate park makes me want to die inside.
Okay, we're approaching the den of the beasts. It's like behind the bushes. How are you feeling right now? I feel like my throat feels a little bit tight, but I'm honestly not seeing very many people. So we're approaching, and I can see skateboards, but I'm not seeing anyone yet. It's a Saturday in June, hot. Gwen's in jorts, a t-shirt with a green heron on it, and sneakers full of holes from skateboarding.
So as not to blow up her spot by being a weird adult at the park with her, I've pinned a small mic on her t-shirt, and we'll watch and listen from the car. I see one person I've never seen there before, but I just saw them do a trick that was pretty impressive. So I'm a little fearful. And then I see one other person who is a really good skateboarder, the one in the black stripes. Oh my gosh. No, he's so good. And he can do tre flips, which are really impressive and, like,
I'm just gonna have to be in the corner alone, but let's go try it out. Okay. Horace isn't here yet, so Gwen gets to part one of her mission, the skating. Gwen walks up to the edge of the park, and then, without making eye contact with anyone, gets on her board and beelines for the far side, putting as much distance between herself and the three other skaters as possible.
She does a few of her ollies and shove-its, then ducks behind the half pipe to whisper into the microphone. I just saw one of them look at me. That was awkward. I'm hiding. I'm so awkward. The three skaters, as far as I can tell, have barely registered Gwen's presence. But that's not what it feels like inside her head. I just fell. That was embarrassing.
Nobody else seems to notice. Gwen practices some more tricks, nothing too big. And as soon as the other skaters leave, 12 minutes later, she hurries back to the car to check in. It was definitely way not as bad as I thought it would be. It was way less bad than I thought. Being there by yourself? Yeah, I feel like I could do this again.
I think more people will come. Let's see. Time for an update, I think. Solo skateboarding: done. Gwen moves on very quickly, because the next part of the mission, talking to Horace, is looming. Gwen brings in reinforcements for this step. She texts a couple of friends, and they all show up at the park. It is so hot outside. Do I look really sweaty? How bad do I look, one to ten, right now?
Thank you. You guys all look like a one, too. One of Gwen's friends who shows up, Cooper, is friends with Horace. Not from skating. Cooper is a beginner like Gwen. He has this random update on Horace. We were just snapping, and he was like, also, he was in a helicopter yesterday. What? Why? I don't know. Cooper pulls up the video, and everyone crowds around his phone. It's a Snapchat of two boys strapped into a helicopter flying over mountains.
What? Wait, who is that? It's Horace and Leo. Wait, that was crazy. See, this is what I was saying about, like, I think about the skaters always doing these crazy, cool activities all the time. Perfect example, right? But even if it's not skating, like, they're always going to Seattle and stuff like that. Oh yeah, or like going thrifting in Yakima.
Gwen looks up from the video. That is insane. Okay, now we have, okay, this is good. We have a talking point. It'd be like, I heard you flew in a helicopter. A talking point, but still no Horace. Almost two hours pass. Then a big red truck pulls up right in front of me, and someone gets out on the other side. I can't see Gwen anymore, or anyone else. But I figure out pretty quickly who it is, because here is the next thing Gwen says.
Wait, I heard you went in a helicopter yesterday. Dude, yeah, I did. Wait, why? What? Tell us the story. Here it is, the sound of a pretty not-awkward conversation between Gwen and her skater idol. It was for Leo's birthday. A present, his grandpa gave it to him. To go in a helicopter? Yeah. Oh, these? New ones?
They're like brand new. Gwen asks Horace in quick succession about the helicopter, his shoes, Pokemon cards, which this group of teenagers is surprisingly really into. Everything is going great. Gwen, who'd been so scared of awkwardness, is now so comfortable that she signals for me to come over from the car.
Horace is compact, tidy, and kind of looks like he could be in a boy band. His hair is bleached light brown. We sit on the curb at the side of the skate park, Gwen and her friends and Horace and me. Gwen boldly plows ahead, asking Horace some kind of interview-sounding questions, confirming his status as a bona fide cool skater.
Okay, what's it like when you come to the skate park? I already feel like I know the answer to this, but do you feel stressed out, or, like, worried what people are gonna think? I used to, yeah. I used to be, but I'm kind of just here to do my thing. Is there anyone that is ever here that makes you nervous to be here? Not really, not anymore. Yeah, it used to be, like, the really good people, but I'm just one of those people now. Yeah, yeah, you're like one of the top dogs.
If the skate park's not what's stressing you out, then what does? I don't know, at school, I guess, kind of, because, I don't know, I don't talk that much in classes, so it's kind of scary. And my grades are really bad. And I'm never really paying attention in class, so I'm always scared they're going to choose me, and then I'm not going to know what I'm going to say. So do you think people would judge you because you have bad grades, and assume that you're not smart? Yeah.
Yeah. Cooper jumps in to tell Horace he should see how good Gwen's ollie has gotten. The three of them grab their boards and head back into the skate park. And Horace gives Gwen pointers on how to do a shove-it while moving. Yeah, I've been practicing. Let's try it. So the alien arrives and tries desperately to pretend she doesn't have three eyes and a giant head and long bony fingers.
And it works. The humans barely notice. You want to be one of us, they say? Put your foot a little higher on the board. Diane Wu is one of the producers of our show.
Here come the Martian Martians And I wouldn't be surprised if they're riding on their Martian bike And we have to find out right now What kind of flavor do these Martians like? Here come the Martian Martians And they're riding on their Martian bike Well, we have to find out right now What kind of ice cream do the Martians like?
Our program was produced today by Chris Benderev, with Elma Mustafa. The people who put together today's show include Bim Adewunmi, Phia Bennin, Sean Cole, Michael Comite, Aviva DeKornfeld, Bethel Habte, Valerie Kipnis, Mickey Meek, Stowe Nelson, Sarah Parrish, Nadia Reiman, Ryan Rumery, Ike Sriskandarajah, Lilly Sullivan, Frances Swanson, Christopher Swetala, Matt Tierney, and Julie Whitaker. Our managing editor is Sarah Abdurrahman, our senior editor is David Kestenbaum, our executive editor is Emanuele Berry. Special thanks today to David Tomar-Omen, Michael Frank, Melanie Mitchell, Jeremy Howard, Jason Stern, and Deborah Giles.
Help on today's rerun from Henry Garson. Our website: thisamericanlife.org. This American Life is delivered to public radio stations by PRX, the Public Radio Exchange. Thanks, as always, to our program's co-founder, Mr. Torey Malatia. You know his new passion? Bartering. Yes, bartering. He does not use money whenever that's possible. I bought him lunch this week. Instead of giving me cash, he told me this about how he'd pay me back.
I have a book, nine eggs, a laptop, a bottle, and a nail. I'm Ira Glass. Back next week with more stories of This American Life. Next week on the podcast of This American Life: So Cameron's in the ocean, and he hears, from maybe a hundred yards away, someone yelling, shark.
There were really three options. You sit there and panic and scream for somebody else to help and you don't do anything, or you swim the opposite way and try to protect yourself. Or the third option, you swim toward the shark. That's what Cameron did. What goes through your head when you make that choice? Next week, on the podcast or on your local public radio station.