Thiel on AI & Racing with China
This post is a transcript of part of a podcast with Peter Thiel, touching on topics of AI, China, extinction, Effective Altruists, and apocalyptic narratives, published on August 16th 2024.
If you’re interested in reading the quotes, just skip straight to them, the introduction is not required reading.
Introduction
Peter Thiel is probably known by most readers, but briefly: he is a venture capitalist, the first outside investor in Facebook, cofounder of PayPal and Palantir, and wrote Zero to One (a book I have found very helpful for thinking about building great companies). He has also been one of the primary proponents of the Great Stagnation hypothesis (along with Tyler Cowen).
More local to the LessWrong scene, Thiel was an early funder of MIRI and a speaker at the first Effective Altruism summit in 2013. He funded Leverage Research for many years, and also a lot of anti-aging research, and the seasteading initiative, and his Thiel Fellowship included a number of people who are around the LessWrong scene. I do not believe he has been active around this scene much in the last ~decade.
He rarely makes public appearances in which he expresses his positions about society, and I am curious to hear them when he does.
In 2019 I published the transcript of another longform interview of his here with Eric Weinstein. Last week another longform interview with him came out, which I listened to.
I got the sense from listening to it that even though we are in conflict on some issues, conversation with him would be worthwhile and interesting. Then about 3 hours in he started talking more directly about subjects that I think actively about and some conflicts around AI, which I think will be of interest to many here, so I’ve quoted the relevant parts below.
His interviewer, Joe Rogan is a very successful comedian and podcaster. He’s not someone who I would go to for insights about AI. I think of him as standing in for a well-intentioned average person, for better or for worse, although he is a little more knowledgeable and a little more intelligent and a lot more curious than the average person. The average Joe. I believe he is talking in good faith to the person before him, and making points that seem natural to many.
Artificial Intelligence
Discussion focused on the AI race and China, starting at 2:56:40. The opening monologue by Rogan is skippable.
Rogan
If you look at this mad rush for artificial intelligence — like, they’re literally building nuclear reactors to power AI.
Thiel
Well, they’re talking about it.
Rogan
Okay. That’s because they know they’re gonna need enormous amounts of power to do it.

Once it’s online, and it keeps getting better and better, where does that go? That goes to a sort of artificial life-form. I think either we become that thing, or we integrate with that thing and become cyborgs, or that thing takes over. And that thing becomes the primary life force of the universe.
And I think that biological life, we look at like life, because we know what life is, but I think it’s very possible that digital life or created life might be a superior life form. Far superior. [...]
I love people, I think people are awesome. I am a fan of people. But if I had to look logically, I would assume that we are on the way out. And that the only way forward, really, to make an enormous leap in terms of the integration of society and technology and understanding our place in the universe, is for us to transcend our physical limitations that are essentially based on primate biology, and these primate desires for status (like being the captain), or for control of resources, all of these things — we assume these things are standard, and that they have to exist in intelligent species. I think they only have to exist in intelligent species that have biological limitations. I think intelligent species can be something, and is going to be something, that is created by people. [...]
Thiel
I keep thinking there are two alternate stories of the future that are more plausible than the one you just told. One of them sounds like yours, but it’s just the Silicon Valley propaganda story. They say that’s what they’re gonna do, and then of course, they don’t quite do it, and it doesn’t quite work. And it goes super super haywire. There’s a 1% chance that [your story] works, and a 99% chance that [it goes very badly].

You have two choices. You have a company that does exactly what you [want to] do. It’s super ethical, super restrained, does everything right. And there’s a company that says all the things you just said, but then cuts corners, and doesn’t quite do it.
I won’t say it’s 1:99, but that sounds more plausible, that it ends up being corporate propaganda.
My prior would be — this is of course the argument the Effective Altruists, the anti-AI people make — yeah Joe, the story you’re telling us, that’s just gonna be the fake corporate propaganda, and we need to push back on that. And the way you push back is you need to regulate it, and you need to govern it, and you need to do it globally.
The RAND corporation in Southern California, one of the things they’re pushing for is something they call “Global Compute Governance” which [says] the accelerationist AI story is too scary and too dangerous and too likely to go wrong, and so we need to have global governance, which from my point of view sounds even worse—
Rogan
Also it’s so utopian!

The problem with that story is China’s not going to go along with that program. They’re gonna go full-steam ahead, and we’re going to have to go full-steam ahead in order to compete with China. There’s no way you’re going to be able to regulate it in America and compete with people who are not regulating it worldwide.
And then once it becomes sentient, once you have an artificial, intelligent creature that has been created by humans and that can make better versions of itself, over and over and over again, and keep doing it, it’s going to get to a point where it’s far superior to anything we can imagine.
Thiel
Well, to the extent it’s driven by the military, and other competition with China—
Rogan
Until it becomes sentient!
Thiel
—that suggests it’s going to be even less in the utopian, altruistic, direction. It’s going to be even more dangerous.
Rogan
Unless it gets away from them! This is my thought. If it gets away from them and it has no motivation to listen to anything that human beings have told it, if it’s completely immune to programming, which totally makes sense that it would be, it totally makes sense that if it’s going to make better versions of itself, [then] the first thing it’s going to do is eliminate human influence.

Especially when the humans are corrupt, it’s gonna go “I’m not gonna let these humans tell me what to do and what to control”, and it would have no reason to listen.
Thiel
I sort of generally don’t think we should trust China or the CCP, but probably the best counterargument they would have is that they are interested in maintaining control, and they are crazy-fanatical about that, and that’s why the CCP might actually regulate it, and they’re gonna put brakes on this in a way that we might not in Silicon Valley. It’s a technology that they understand will undermine their power–
Rogan
That’s an interesting perspective, and then they would be anti… not competitive…
Thiel
I don’t fully believe them, but there’s sort of a weird way… all the Big Tech companies were natural ways for the CCP to extend its power, to control the population. Tencent, Alibaba, [etc]. But then also, in theory, the tech can be used as an alternate way for people to organize.

Even though it’s 80% [likely to give the CCP greater control], and maybe 20% risk of loss of control, maybe that 20% was too high [for the CCP]. There’s a strange way over the last 7-8 years where you know, Jack Ma, Alibaba, all these people got shoved aside for these party functionaries that are effectively running these companies. There’s something about the Big Tech story in China, where the people running these companies were seen as national champions a decade ago, [but] now they’re the enemies of the people.
[...] The CCP has full-control, you have this new technology that would give you even more control, but there’s a chance you lose it. How do you think about that?
Rogan
Very good point.
Thiel
That’s what they’ve done with consumer internet. There’s probably something about the AI where it’s possible they’re not even in the running. And certainly, it feels like it’s all happening in the US. And so maybe it could still be stopped.
Rogan
But then there’s a problem with espionage. Even if it’s happening in the US, they’re gonna take that information, they’re gonna figure out how to get it.
Thiel
You can get it, but then, if you build it, is there some air gap, does it jump the air gap…
Rogan
That’s a good point, that they would be so concerned about control, that they wouldn’t allow it to get to the point where it gets there, and then we would control it first, and then it would be controlled by Silicon Valley. <laughs> And then Silicon Valley would be the leaders of the universe.
Thiel
Or it spirals out of control. But then I think my — and again, this is a very very speculative conversation — but my read on the cultural-social vibe is that the scary, dystopian AI narrative is way more compelling. I don’t like the Effective Altruist people, I don’t like the luddites, but man, I think this time around they are winning the arguments. It’s mixing metaphors, but do you want to be worried about Dr Strangelove, who wants to blow up the world to build bigger bombs, or do you want to worry about Greta, who wants to make everyone ride a bicycle so the world doesn’t get destroyed? We’re in a world where people are worried about Dr Strangelove, they’re not worried about Greta, and it’s the Greta-equivalent in AI that in my model is going to be surprisingly powerful. It’s gonna be outlawed, it’s gonna be regulated, as we have outlawed so many other vectors of innovation.

You can think about: why was there progress in computers over the last 50 years, and not in other stuff? Because the computers were mostly inert. It was mostly this virtual reality that was air-gapped from the real world. There’s all this crazy stuff that happens on the internet, but most of the time what happens on the internet stays on the internet, it’s actually pretty decoupled. And that’s why we’ve had a relatively light regulatory touch on that stuff, versus so many other things. But there’s no reason… if you had the FDA regulating video games, or regulating AI, I think the progress would slow down a lot.
Rogan
100%. That would be a f***ing disaster. Yeah, that’d be a disaster.
Thiel
But again, they get to regulate, you know, pharmaceuticals…
Rogan
They’re not doing a good job of that either!
Thiel
I know, but Thalidomide, or whatever, that went really haywire, they did a good job, people are scared. They’re not scared of videogames, they’re scared of dangerous pharmaceuticals, and if you think of AI as: it’s not just a video game, it’s not just this world of bits, it’s going to [cross the] air gap, and it’s going to affect your physical world, in a real way. You know, maybe you cross the air gap and get the FDA or some other government agency that starts…
Rogan
But the problem is they’re not good at regulating anything. There’s no one government agency that you can see that does a stellar job.
Thiel
But I think they have been pretty good at slowing things down and stopping them. We’ve made a lot less progress on extending human life, we’ve made no progress on curing dementia in 40 or 50 years, there’s all this stuff that’s been regulated to death, which I think is very bad from the point of view of progress, but it is pretty effective as a regulation. They’ve stopped stuff, they’ve been very effective at being luddites.
Rogan
Interesting… I’m really considering your perspective on China and AI, it’s very…
Thiel
But again, these stories are all very speculative. The counterargument to mine would be something like: that’s what China thinks it will be doing, but it will somehow go rogue on them, or they’re too arrogant about how much power they think the CCP has and it’ll go rogue, or… so I’m not at all sure this is right. But I think the US one, I would say… I think the pro-AI people in Silicon Valley are doing a pretty bad job on, let’s say, convincing people that it’s going to be good for them, that it’s going to be good for the average person, that it’s going to be good for our society. And if it all ends up being some version where humans are headed toward the glue-factory like a horse… man, that probably makes me want to become a luddite too.
Rogan
Well, it sucks for us if it’s true.
Thiel
If that’s the most positive story you can tell, then I don’t think that necessarily means we’re going to go to the glue factory, I think it means the glue factory is getting shut down.
Rogan
Maybe. Who f***ing runs the glue factory? I don’t know. I’m just speculating too, but I’m trying to be objective when I speculate, and I just don’t think that this is gonna last. I don’t think that our position as the apex predator, number one animal on the planet is gonna last. I think we’re gonna create something that surpasses us. And I think that’s probably what happens.

[Cut discussion where Rogan pivots to talking about aliens[1]. Thiel briefly returns to the topic of AI before the podcast ends.]
Thiel
I think we still have a pretty crazy geopolitical race with China, to come back to that. The natural development of drone technology, in the military context, is that you need to take the human out of the loop, because the human can get jammed, and so you need to put an AI on the drones.
Rogan
Well they’re using AI for dogfights, and they are 100% effective against human pilots.
Thiel
All these things, there’s a logic to them, but there doesn’t seem to be a good endgame.
Rogan
No. The endgame doesn’t look good. But it’s gonna be interesting, Peter. It’s definitely gonna be interesting. It’s interesting right now, right?
Why Slow Progress on Nuclear Energy?
On the related topic of apocalyptic narratives, I will include this earlier section of Thiel talking about nukes, from 49:15.
Thiel
My alternate theory on why nuclear energy really stopped, is that it was dystopian or even apocalyptic, because it turned out to be very dual-use. If you build nuclear power-plants, it’s only one step away from building nuclear weapons. And it turned out to be a lot trickier to separate those two things out than it looked. I think the signature moment was 1974 or 1975 when India gets the nuclear bomb.

The US, I believe, had transferred the nuclear reactor technology to India, we thought they couldn’t weaponize it, and then it turned out it was pretty easy to weaponize. And then the geopolitical problem with nuclear power was that you either need a double standard where we have nuclear power in the US but we don’t allow other countries to have nuclear power, because the US gets to keep its nuclear weapons, we don’t let hundreds of other countries have nuclear weapons, that’s an extreme double standard. Probably a little bit hard to justify. Or, you need some kind of really effective global governance where you have a one-world government that regulates this stuff. That doesn’t sound good either.
The compromise was to regulate it so much that, you know, the nuclear plants got grandfathered in, but it became too expensive to build new ones. [Cut discussion of China’s nuke policy.]
Rogan
And if there was innovation, if nuclear engineering had gotten to a point where, let’s say there was no Three Mile Island or Chernobyl didn’t happen, do you think it would have gotten to a much more efficient or effective version by now?
Thiel
[Cut discussion about the practical designs] The problem you have is still this dual-use problem. My alternate history of what went wrong with nuclear power wasn’t Three Mile Island, it wasn’t Chernobyl, that’s the official story. The real story was India getting the bomb.

There’s always a big picture question. People ask me, if I’m right about this picture, this slow-down in tech, there’s always the question “Why did this happen?” And my cop-out answer is always that ‘why’ questions are overdetermined, because there’s always tons of reasons and factors. It could be that we became a more feminized, risk-averse society. It could be that the education system worked less well. It could be that we’re just out of ideas, the easy ideas have been found, nature’s cupboard is bare, the low-hanging fruit have been picked.
But I think one dimension that’s not to be underestimated for the science and tech stagnation is that an awful lot of science and technology had this dystopian or apocalyptic dimension. And probably what happened at Los Alamos in 1945 and then with the thermonuclear weapons in the early 50’s, it took a while for it to really seep in, but it had this sort of delayed effect where, you know, maybe a stagnant world in which the physicists don’t get to do anything and they have to putter around with DEI, but you don’t build weapons that blow up the world any more. Is that a feature or a bug? The stagnation was sort of a response.
It sucks that we’ve lived in this world for 50 years where a lot of stuff has been inert, but if we had a world that was still accelerating on all of these dimensions, with supersonic and hypersonic planes and hypersonic weapons and modular nuclear reactors, maybe we wouldn’t be sitting here and the whole world would be blown up. So we’re in that stagnant path of the multiverse because it had this partially protective thing, even though in all these other ways I feel it has deeply deranged our society.
Commentary
Arguments About China
I respect Thiel’s epistemic process in the discussion of racing with China. He is someone who I expect is substantially invested in various AI companies doing well (e.g. he was a founding investor in OpenAI and also a major investor in DeepMind), yet he honestly tried to give the strongest argument he could against racing with China when the topic came up.
I would be interested to see a link to the best paper or research analysis that the western AI policy scene has produced arguing that China will not actually be competitive in the AI race. Perhaps there are good ones around, but I have some suspicion that the people involved are somehow doing worse at the public discourse on this issue than one of the leading venture capitalists who has been funding tech progress in AI...
Winning the Arguments
Hearing him talk about Effective Altruists brought to mind this paragraph from SlateStarCodex:
One is reminded of the old joke about the Nazi papers. The rabbi catches an old Jewish man reading the Nazi newspaper and demands to know how he could look at such garbage. The man answers “When I read our Jewish newspapers, the news is so depressing – oppression, death, genocide! But here, everything is great! We control the banks, we control the media. Why, just yesterday they said we had a plan to kick the Gentiles out of Germany entirely!”
I was somewhat pleasantly surprised to learn that one of the people who has been a major investor in AI companies and a major political intellectual influence toward tech and scientific acceleration believes that “the scary, dystopian AI narrative is way more compelling” and of “the Effective Altruist people” says “I think this time around they are winning the arguments”.
Winning the arguments is the primary mechanism by which I wish to change the world.
[1] I have no interest in this discussion of aliens and do not believe the hypothesis is worth privileging. I point you to (and endorse) the bets on this that many LessWrongers have made of up to $150k against the hypothesis.
Worth noting that the FDA’s good job on thalidomide happened before the most recent major round of standards-tightening, not because of it. That good job is not necessarily much evidence that the FDA since thalidomide is similarly well equipped to do a good job. Which, I think, we saw when looking at what passed for “warp speed” in 2020-2021.
I think Thiel is wrong on this. Nuclear power plants are as close to dual use as they are in part because they are descended from military reactors designed to produce material for weapons. We created the NRC and effectively banned design improvements before civilian reactor research got all that far. Today we have a lot of improved reactor designs that are much further from dual use, much more resistant to catastrophic failure, much easier to scale to smaller size, and that produce much less waste, but never allowed ourselves to build them. It’s now been long enough that almost everyone who was an adult during Three Mile Island, and most who were adults during Chernobyl, has since retired, and so maybe the clear growing need for more baseload clean power will finally be able to overcome the regulatory barriers and restart development and deployment of new nuclear with better, more modern tech.
Being from Germany myself, I think the anti-nuclear movement in Germany that was against nuclear weapons on German soil was also very much against nuclear power plants because they saw the connection. To me that seems one of the reasons why Germany is much more anti-nuclear than other EU countries.
As far as the substance goes, nuclear power plants being dual-use is not the biggest concern. The biggest concern is that uranium enrichment is dual-use. A facility that can enrich uranium enough to be useful for nuclear reactors can also be used to enrich it to be weapon-grade.
When thinking about how to deal with Iran’s nuclear program, people thought about making a deal where Iran can have nuclear power plants but no uranium enrichment facilities and gets the enriched uranium from the outside, because the uranium enrichment facilities are the key concern.
I agree on the resistance to failure and less waste production, but disagree on dual use.
Thorium produces uranium-233, which can be used for nuclear reactions. Unlike uranium-235 based energy reactors, thorium reactors produce more uranium-233 than they consume in the course of producing energy. With thorium reactors, all energy reactors will be producing weapons-grade nuclear material. This may be less efficient than traditional reactors dedicated to making nuclear weapons material, but converting a thorium plant from energy production to weapons production is easier.
And if, as you say, these new reactor designs are simpler and smaller, the capital costs will be much lower, and since thorium is abundant the operational costs are much lower too, so the plants will be more spread out geographically and new nations will get them. Overall the headache for global intelligence agencies is much higher.
I also think, beyond these specific objections, the dual-use nature of nuclear is “overdetermined”. There’s an amusing part of the interview where Thiel points out that the history of industrial advancement was moving from energy sources that take up more space to ones that take up less, from wood to coal to oil to nuclear, and now we’re moving back to natural gas, which takes up more space, and solar panels that take up a lot of land. Anyway, the atom fundamentally has a lot of energy in it, E=mc². But massive amounts of energy in a small space is easy to turn into large explosions. The thing that makes nuclear attractive is the same thing that makes it dangerous. There’s been incredible technical progress in preventing nuclear accidents, but preventing nuclear weapons requires geopolitical solutions.
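To put rough numbers on the energy-density point (my own back-of-the-envelope arithmetic, not from the interview, using approximate textbook figures):

```latex
% Complete mass-energy conversion of 1 kg of matter:
E = mc^2 = (1\,\mathrm{kg}) \times (3\times 10^{8}\,\mathrm{m/s})^2
         = 9\times 10^{16}\,\mathrm{J}
% Fission converts only roughly 0.1\% of the fuel's mass to energy,
% so fully fissioning 1 kg of uranium-235 yields on the order of
% 8\times 10^{13}\,\mathrm{J}, versus roughly 3\times 10^{7}\,\mathrm{J}
% from burning 1 kg of coal: a factor of about a million in energy
% density, which is exactly why the same physics that makes nuclear
% power compact also makes concentrated explosives possible.
```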
I was thinking more about advanced uranium reactor designs rather than thorium. For example, a lot of SMR designs are sealed, making it harder to access fuel/waste during the lifetime or modify operation. Some are also fast neutron reactors, burners not breeders. That means they contain less total fissile material initially than they otherwise would, and consume a large proportion of what would otherwise be fissile or long lived waste.
Yes, you do have to be concerned about people opening them up and modifying them to breeder reactors—but honestly, I think that “Don’t allow sales to people who will do that, and also require monitoring to prevent modification” is enough to deter most of the problems, and for what’s left, the difference between being able to do that and being able to figure it out for yourself is not nearly as high a hurdle as it was 50-70 years ago.
This reeks of soldier mindset, instead of just ignoring that part of the transcript, you felt the need to seek validation in your opposing opinion by telling us what to think in an unrelated section. The readers can think for themselves and do not need your help to do so.
Don’t think I agree with your psychological narrative (I was writing fast and felt some desire to justify why I cut a large chunk of dialogue). But I agree it’s not important to include, and I’ve moved it to a footnote.
I’m sorry, I read the tone of it ruder than it was intended.
So if I got the gist of that correctly, Thiel is increasingly sympathetic to AI doomers, because he doesn’t want to get sent to the glue factory by Skynet, but he thinks humanity will succeed in shutting things down, rather than the AIs triumphing.
Fellow Thiel fans may be interested in this post of mine called “X-Risk, Anthropics, & Peter Thiel’s Investment Thesis”, analyzing Thiel’s old essay “The Optimistic Thought Experiment”, and trying to figure out how he thinks about the intersection of markets and existential risk.
Quick mod note: This was sort of on the edge of “frontpage” vs “personal blog.” I think it’s mostly talking about big picture trends, but has some elements of community inside-baseball and time sensitive politics. I decided to frontpage it but wasn’t that sure.