Here’s a transcript. Sorry for the slight inaccuracies; I got Whisper-small to generate it using this notebook someone made. Here’s the section about MIRI and Bostrom.
But but I, you know, I was involved peripherally with some of these sort of East Bay rationalist futuristic groups. There was one called the Singularity Institute in the 2000s and the sort of the self-understanding was, you know, building an AGI, it’s going to be this most, the most important technology in the history of the world. We better make sure it’s friendly to human beings and we’re going to work on making sure that it’s friendly. And you know, the vibe sort of got a little bit stranger and I think it was around 2015 that I sort of realized that, that they weren’t really, they didn’t seem to be working that hard on the AGI anymore and they seemed to be more pessimistic about where it was going to go and it was kind of a, it sort of devolved into sort of a Burning Man, Burning Man camp. It was sort of, had gone from sort of transhumanist to Luddite in, in 15 years. And some, something had sort of gone wrong. My, and it was finally confirmed to me by, by a post from Mary, Machine Intelligence Research Institute, the successor organization in April of this year. And this is again, these are the people who are, and this is sort of the cutting edge thought leaders of the, of the people who are pushing AGI for the last 20 years and, and you know, it was fairly important in the whole Silicon Valley ecosystem. Title, Mary announces new death with dignity strategy. And then the summary, it’s obvious at this point that humanity isn’t going to solve the alignment problem. I, how is AI aligned with humans or even try very hard or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity. And, and then anyway, it goes on to talk about why it’s only slightly more dignity because people are so pathetic and they’ve been so lame at dealing with this. And of course you can, you know, there’s probably a lot you can say that, you know, this was, there’s somehow, this was somehow deeply in the logic of the whole AI program for, for decades that it was, was potentially going to be very dangerous. If you believe in Darwinism or Machiavellianism, there are no purely self-interested actors. And then, you know, if you get a superhuman AGI, you will never know that it’s aligned. So there was something, you know, there was a very deep problem. People have had avoided it for 20 years or so. At some point, one day they wake up and the best thing we can do is, is, is just hand out some Kool-Aid a la People’s Temple to everybody or something like this. And, and if we, and then I think, unless we just dismiss this sort of thing as, as just, as just the kind of thing that happens in a, in a, in a post-COVID mental breakdown world. I found another article from Nick Bostrom who’s sort of an Oxford academic. And, you know, most of these people are sort of, I know there’s, there’s somehow, they’re interesting because they have nothing to say. They’re interesting because they’re just mouthpieces. There’s, it’s like the mouth of Sauron. It’s, it’s just sort of complete sort of cogs in the machine, but they are, they’re useful because they tell us exactly where the zeitgeist is in some ways. And, and, and this was from 2019 pre-COVID, the vulnerable world hypothesis. And that goes through, you know, a whole litany of these different ways where, you know, science and technology are creating all these dangers for the world. And what do we do about them? And it’s the precautionary principle, whatever that means. 
But then, you know, he has a four-part program for achieving stabilization. And I will just read off the four things you need to do to make our world less vulnerable and achieve stabilization in the sort of, you know, we have this exponentiating technology where maybe it’s not progressing that quickly, but still progressing quickly enough. There are a lot of dangerous corner cases. You only need to do these four things to, to stabilize the world. Number one, restrict technological development. Number two, ensure that there does not exist a large population of actors representing a wide and recognizably human distribution of motives. So that’s a, that sounds like a somewhat incompatible with the DEI, at least in the, in the ideas form of diversity. Number three, establish extremely effective preventive policing. And number four, establish effective global governance. Since you can’t let, you know, even if there’s like one little island somewhere where this doesn’t apply, it’s no good. And, and so it is basic, and this is, you know, this is the zeitgeist on the other side.
It’s completely unclear to me whether he actually thinks there is a risk to humanity from superhuman AI, and if so, what he thinks could or should be done about it.
For example, is he saying that “you will never know that [superhuman AGI] is aligned” truly is “a very deep problem”? Or is he saying that this is a pseudo-problem created by following the zeitgeist or something?
Similarly, what is his point about Darwinism and Machiavellianism? Is he saying that, because that’s how the world works, superhuman AI is obviously risky? Or is he saying that these are assumptions that create the illusion of risk?
In any case, Thiel doesn’t seem to have any coherent message about the topic itself (as opposed to disapproving of MIRI and Nick Bostrom). I don’t find that completely surprising. It would be out of character for a politically engaged, technophile entrepreneur to say “humanity’s latest technological adventure is its last, we screwed up and now we’re all doomed”.
His former colleague Elon Musk speaks more clearly: “We are not far from dangerously strong AI” (tweeted four days ago). And he does have a plan: if you can’t beat them, join them, by wiring up your brain (i.e., Neuralink).
Of course, “Mary” should be “MIRI”.