AI Won’t Really Kill Us All, Will It?

For months, more than a thousand researchers and technology experts involved in creating artificial intelligence have been warning us that they’ve created something that may be dangerous, something that might eventually lead humanity to become extinct. In this Radio Atlantic episode, The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel talk about how seriously we should take these warnings, and what else we might consider worrying about.

The following transcript has been edited for clarity.

Hanna Rosin: I remember when I was a little kid being alone in my room one night watching this movie called The Day After. It was about nuclear war, and for some absurd reason, it was airing on regular network TV.

The Day After:

Denise: It smells so bad down here. I can’t even breathe!

Denise’s mom: Get ahold of yourself, Denise.

Rosin: I particularly remember a scene where a character named Denise—my best friend’s name was Denise—runs panicked out of her family’s nuclear-fallout shelter.

The Day After:

Denise: Let go of me. I can’t see!

Mom: You can’t go! Don’t go up there!

Brother: Wait a minute!

Rosin: It was definitely, you know, “extra.” Also, to teenage me, genuinely terrifying. It was a very particular blend of scary ridiculousness that I didn’t experience again until a couple of weeks ago, when someone sent me a link to a YouTube video featuring Paul Christiano, an artificial-intelligence researcher.

Paul Christiano: The most likely way we die is not that AI comes out of the blue and kills us, but that we’ve deployed AI everywhere. And if, God forbid, they were trying to kill us, they would definitely kill us.

Rosin: Christiano was talking on this podcast called Bankless. And then I started to notice other major AI researchers saying similar things:

Norah O’Donnell on CBS News: More than 1,300 tech scientists, leaders, researchers, and others are now asking for a pause.

Bret Baier on Fox News: Top story right out of a science-fiction movie.

Rodolfo Ocampo on 7NEWS Australia: Now it’s permeating the cognitive space. Before, it was more the mechanical space.

Michael Usher on 7NEWS Australia: There needs to be at least a six-month stop on the training of these systems.

Fox News: Contemporary AI systems are now becoming human-competitive.

Yoshua Bengio talking with Tom Bilyeu: We have to get our act together.

Eliezer Yudkowsky on the Bankless podcast: We’re hearing the last winds begin to blow, the fabric of reality start to fray.

Rosin: And I’m thinking, Is this another campy Denise moment? Am I terrified? Is it funny? I can’t really tell, but I do suspect that the very “doomiest” stuff, at least, is a distraction. There are likely some actual dangers with AI that are less flashy but maybe equally life-altering.

So today we’re talking to The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel, who’ve been researching and tracking AI for some time.

___

Rosin: Charlie, Adrienne—when these experts are saying, “Worry about the extinction of humanity,” what are they actually talking about?

Adrienne LaFrance: Let’s game out the existential doom, for sure. [Laughter.]

Rosin: Thanks!

LaFrance: When people warn about the extinction of humanity at the hands of AI, that’s literally what they mean: that all humans will be killed by the machines. It sounds very sci-fi. But the nature of the threat is that you imagine a world where, more and more, we rely on artificial intelligence to complete tasks or make judgments that previously were reserved for humans. Obviously, humans are flawed. The fear assumes a moment at which AI’s cognitive abilities eclipse our own, so that all of a sudden AI is really in charge of the biggest and most consequential decisions humans make. You can imagine machines making decisions in wartime about when to deploy nuclear weapons, and you could very easily imagine how that could go sideways.

Rosin: Wait, but I can’t very easily imagine how that would go sideways. First of all, wouldn’t a human put in many checks before giving a machine that kind of access?

LaFrance: Well, one would hope. But one example would be that you give the AI the imperative to “Win this war, no matter what.” And maybe you’re feeding in other conditions that say “We don’t want mass civilian casualties.” But ultimately, this is what people refer to as an “alignment problem”—you give the machine a goal, and it will do whatever it takes to reach that goal. And that includes maneuvers that humans can’t anticipate, or that go against human ethics.

Charlie Warzel: A sort of meme version of this, one that has been around for a long time, is called “the paper clip–maximizer problem.” You tell a sentient artificial intelligence, “We want you to build as many paper clips as fast as possible, and in the most efficient way.” And the AI goes through all the computations and says, “Well, really, the thing that is stopping us from building as many paper clips as we can is the fact that humans have other goals. So we’d better just eradicate humans.”

Rosin: Why can’t you just program in: “Machine, you are allowed to do anything to make those paper clips, short of killing everyone”?

Warzel: Well, let me lay out a classic AI doomer’s scenario that may be easier to imagine. Let’s say five or 10 years down the line, a supercomputer is able to process vastly more information, on a scale a hundred times more powerful than whatever we have now. It knows how to build iterations of itself, so it builds a model. That model has all that intelligence, plus maybe a little bit of a multiplier on top.

And that one builds a model, and another one builds a model. It just keeps building these models—and it gets to a point where it’s replicated enough that it’s sort of like a gene that is mutating.

Rosin: So this is the alignment thing. It’s suddenly like: We’re going along, we have the same objectives. And all of a sudden, the AI takes a sharp left turn and realizes that actually humans are the problem.

Warzel: Right. With all its knowledge of computer code, it can figure out a way to either socially engineer people by impersonating someone or actually hack a bank and steal funds. It can get money, pose as a human being, and basically fund a state actor or a terrorist cell or something. Then it uses the money it’s gotten to pay the group to release a bioweapon, and—

Rosin: And, just to interject before you play it out completely, there’s no intention here, right? It’s not necessarily intending to gain power the way, say, an autocrat would, or intending to rule the world? It’s simply achieving the objective it began with, in the most effective way possible.

Warzel: Right. So this speaks to the idea that once you build a machine that is so powerful and you give it an imperative, there may not be enough alignment parameters that a human can set to keep it in check.

Rosin: I followed your scenario completely. That was very helpful, except you don’t sound at all worried.

Warzel: I don’t know if I buy any of it.

Rosin: You don’t even sound somber!

LaFrance: [Laughter.] Why don’t you like humans, Charlie?

Warzel: I’m anti-human. This is my hot take. [Laughter.]

Rosin: But that was a real question, Charlie. Why don’t you take this seriously? Is it because you think steps haven’t been worked out? Or is it because you think there are a lot of checks in place, like there are with human cloning? What is the real reason why you, Charlie, can intelligently lay out this scenario but not actually take it seriously?

Warzel: Well, bear with me here. Are you familiar with the South Park underpants gnomes?

South Park Gnomes (singing): Gotta go to work. Work, work, work. Search for underpants. Hey!

Warzel: For those blissfully unaware, the underpants gnomes are from South Park. But what’s important is that they have a business model that is notoriously vague.

South Park Gnome: Collecting underpants is just Phase 1!

Warzel: Phase 1 is to collect underpants. Phase 2?

South Park Gnome 1: Hey, what is Phase 2?

South Park Gnome 2: Phase 1, we collect underpants.

Gnome 1: Yah, yah, yah. But what is Phase 2?

Warzel: It’s a question mark.

Gnome 2: Well, Phase 3 is profit! Get it?

Warzel: And that’s become a cultural signifier over the last decade or so for a really vague business plan. When you listen to a lot of the AI doomers, you have somebody who is obviously an expert, who’s obviously incredibly smart. And they’re saying: Step 1, build an incredibly powerful artificial-intelligence system that maybe gets close to, or actually surpasses, human intelligence.

Step 2: question mark. Step 3: existential doom.

I just have never really heard a very good walkthrough of Step 2, or 2 and a half.

No one is saying that we have reached the point of no return.

LaFrance: Wait. But Charlie, I think you did give us Step 2. Because Step 2 is that the AI hacks a bank and pays terrorists, and the terrorists unleash a virus that kills humanity. I would also say that I think what the people who are most worried would argue is that there isn’t time for a checklist. And that’s the nature of their worries.

And there are some who’ve said we are past the point of no return.

Warzel: And I get that. I’ll just say my feeling on this is that the image of Terminator 2: Judgment Day–type robots rolling over human skulls feels like a distraction from the bigger problems, because—

Rosin: Wait; you said it’s a distraction from bigger problems. And this is what I want to know, so I’m not distracted by the shiny doom movie. What are actually the things that we need to worry about, or pay attention to?

LaFrance: The possibility of wiping out entire job categories and industries, though that is a phenomenon we’ve experienced throughout technological history. That’s a real threat to people’s real lives and ability to buy groceries.

And I have real questions about what it means for the arts and our sense of what art is and whose work is valued, specifically with regard to artists and writers. But, Charlie, what are yours?

Warzel: Well, I think before we talk about exterminating the human race, I’m worried about financial institutions adopting these types of automated generative-AI machines. If you have an investment firm that is using a powerful piece of technology, and you wanna optimize for a very specific stock or a very specific commodity, then you get the possibility of something like that paper-clip problem: “Well, what’s the best way to drive the price of corn up?”

Rosin: Cause a famine.

Warzel: Right. Or start a conflict in a certain region. Now, again, there’s still a little bit of that underpants gnome–ish quality to this. But I think a good analog for this is the social-media era. Back when Mark Zuckerberg was making Facebook in his Harvard dorm room, it would have been silly to imagine it could lead to ethnic cleansing or genocide in a place like Myanmar.

But ultimately, when you create powerful networks that connect people, there are all sorts of unintended consequences.

Rosin: So given the speed and suddenness with which these bad things can happen, you can understand why lots of intelligent people are asking for a pause. Do you think that’s even possible? Is that the right thing to do?

LaFrance: No. I think it’s unrealistic, certainly, to expect tech companies to slow themselves down. It’s intensely competitive right now. I’m not convinced that regulation would be the right move right now, either. We’d have to know exactly what that would look like.

We saw it with the social platforms, which called for Congress to regulate them while at the same time lobbying very hard not to be regulated.

Rosin: I see. So what you’re saying is that it’s a cynical public play, and what they’re looking for are sort of toothless regulations.

LaFrance: I think that is unquestionably one dynamic at play. Also, to be fair, I think that many of the people who are building this technology are indeed very thoughtful, and hopefully reflecting with some degree of seriousness about what they’re unleashing.

So I don’t wanna suggest that they’re all just doing it for political reasons. But there certainly is that element.

When it comes to how we slow it down, I think it has to be individual people deciding for themselves how they think this world should be. I’ve had conversations with people who are not journalists, who are not in tech, but who are unbridled in their enthusiasm for what this will all mean. Someone recently mentioned to me how excited he was that AI could mean he could just surveil his workers all the time, that he could tell exactly what they were doing and what websites they were visiting, and that at the end of the day he could get a report showing how productive they were. To me, that’s an example of something that could very quickly come to be seen by some people as culturally acceptable.

We really have to push back against that in terms of civil liberties. To me, this is much more threatening than the existential doom, in the sense that these are the sorts of decisions that are being made right now by people who have genuine enthusiasm for changing the world in ways that seem small, but are actually big.

I think it is crucially important that we act right now, because norms will be hardened before most people have a chance to grasp what’s happening.

Rosin: I guess I just don’t know who “we” is in that sentence. And it makes me feel a little vulnerable to think that every individual and their family and their friends has to decide for themselves—as opposed to, say, the European model, where you just put some basic regulations in place. The EU already passed a resolution to ban certain forms of public surveillance like facial recognition, and to review AI systems before they go fully commercial.

Warzel: Even if you do put regulations on things, it doesn’t stop somebody from building something on their own. It wouldn’t be as powerful as the multibillion-dollar supercomputer from OpenAI, but those models will be out in the world. And those models may not have some of the restrictions that the companies trying to build them thoughtfully are going to have.

Maybe you’ll have people, like we already have in the software industry, creating AI malware and selling it to the highest bidder, whether that’s a foreign government, a terrorist group, or a state-sponsored cell of some kind.

And there is also the idea of a geopolitical race, which is part of all of this. Behind closed doors they are talking about an AI race with China.

So there are all these very, very thorny problems.

You have all of that—and then you have the cultural issues. Those are the ones that I think we will see and feel really acutely before we feel any of this other stuff.

Rosin: What is an example of a cultural issue?

Warzel: You have all of these systems that are optimized for scale with a real cold, hard machine logic.

And I think that artificial intelligence is sort of the truest, almost-final realization of scale. It is a scale machine: human intelligence at a scale that humans can’t have on their own. That’s really worrisome to me.

Like, hey, do you like Succession? Well, AI’s gonna generate 150 seasons of Succession for you to watch. It’s like: I don’t wanna necessarily live in that world, because it’s not made by people. It’s a world without limits.

The whole idea of being alive and being a human is encountering and embracing limitations of all kinds, including our own knowledge and our ability to do certain things. If we insert artificial intelligence into all of that, it really is, in the most literal sense, sort of like strip-mining the humanity out of a lot of life. And that is really worrisome.

Rosin: I mean, Charlie, that sounds even worse than the doom scenarios I started with. Because how am I—one writer, or Person X who, as Adrienne said at the start, is just trying to pay for groceries—supposed to take a stance against this enormous global force?

LaFrance: We have to assert that our purpose on the planet is not just to make a more efficient world.

Rosin: Yeah.

LaFrance: We have to insist on that.

Rosin: Charlie, do you have any tiny bits of optimism for us?

Warzel: I am probably just more of a realist. You can look at the way that we have coexisted with all kinds of technologies as a story where the disruption comes in, things never feel the same as they were, and there’s usually a chaotic period of upheaval—and then you sort of learn to adapt. I’m optimistic that humanity is not going to end. I think that is the best I can do here.

Rosin: I hear you struggling to be definitive, but I feel like what you are getting at is that you have faith in our history of adaptation. We have learned to live with really cataclysmic and shattering technologies many times in the past. And you just have faith that we can learn to live with this one.

Warzel: Yeah.

Rosin: On that sort of tiny bit of optimism, Charlie Warzel and Adrienne LaFrance: Thanks for helping me feel safe enough to crawl out of my bunker, at least for now.