Sharing mostly because I personally didn’t realize how Thiel was viewing the Bay Area rationalists these days.
He explicitly calls out Eliezer’s “Death with Dignity” post as ridiculous, calls out Bostrom as out of touch, and says that the rationalists are only interesting because they aren’t saying anything, and are just an echo of prevailing feelings about technology.
I think it’s worthwhile really trying to step into Thiel’s point of view and update on it.
Thiel’s argument against Bostrom’s Vulnerable World Hypothesis is basically “Well, science might cause bad things, but totalitarianism might cause even worse stuff!” Which, sure, but Bostrom’s whole point is that we seem to be confronted with a choice between two very undesirable outcomes: either technology kills us or we become totalitarian. Either we risk death from cancer, or we risk death from chemotherapy. Thiel implicitly agrees with this frame; he just thinks the cure is worse than the disease. He doesn’t offer some third option or argue that science is less dangerous than Bostrom believes.
He also unfortunately doesn’t offer much against Eliezer’s “Death with Dignity” post: no specific technical counterarguments, just some sneering and “Can you believe these guys?” stuff. I don’t think Thiel would be capable of recognizing the End of the World as such five years before it happens. However, his point about the weirdness of Bay Area rationalists is true, though not especially new.
The best argument against the VWH’s proposed solution is in the post “Enlightenment values in a vulnerable world,” especially once we are realistic about what incentives states are under:
Here’s a link to the longer version of the post.
https://forum.effectivealtruism.org/posts/A4fMkKhBxio83NtBL/enlightenment-values-in-a-vulnerable-world
Thiel’s arguments about both the Vulnerable World Hypothesis and Death with Dignity were so (uncharacteristically?) shallow that I had to question whether he actually believes what he said, or was just making an argument he thought would be popular with the audience. I don’t know enough about his views to say, but my guess is that the latter is somewhat (20%+) likely.
They are perfectly characteristically shallow, as usual for him.
The VWH is very iffy. It can be generalized into fairly absurd conclusions. It’s like Pascal’s Mugging, but with unknown unknowns, which evade statistical analysis by definition.
“We don’t know if SCP-tier infohazards can result in human extinction. Every time we think a new thought, we’re reaching into an urn, and there is a chance that it will become both lethal and contagious. Yes, we don’t know if this is even possible, but we’re thinking a lot of new thoughts nowadays. The solution to this is...”
“We don’t know if the next vaccine can result in human extinction. Every time we make a new vaccine, we’re reaching into an urn, and there is a chance that it will accidentally code for prions and kill everyone 15 years later. Or something we can’t even imagine right now. Yes, according to our current types of vaccines this is very unlikely, and our existing vaccines do in fact provide a lot of benefits, but we don’t know if the next vaccine we invent, especially if it’s using new techniques, will be able to slip past existing safety standards and cause human extinction. The solution to this is...”
“Since you can’t statistically analyze unknown unknowns, and some of them might result in human extinction, we shouldn’t explore anything without a totalitarian surveillance state”
I think Thiel detected an adversarial attempt to manipulate his decision-making and rejected it out of principle.
My main problem is the “unknown unknowns evade statistical analysis by definition” part. There is nothing we can do to satisfy the VWH except by completely implementing its directives. It’s in some ways argument-proof by design, since it incorporates unknown unknowns so heavily. Since nothing can be used to disprove the VWH, I reject it as a bad hypothesis.
I found none of those quotes in https://nickbostrom.com/papers/vulnerable.pdf
When using quotation marks, please be more explicit where the quotes are from, if anywhere.
How VWH could be extrapolated is of course relevant and interesting; wouldn’t it make sense to pick an example from the actual text?
this is the same dude who has been funding Trump heavily, so his claim that he doesn’t want totalitarianism is probably nonsense
I think he actually said that Bostrom represents the current zeitgeist, which is kind of the opposite of “out of touch”? (Unless he also said “out of touch”? Unfortunately I can’t find a transcript to do a search on.)
It’s ironic that everyone thinks of themselves as David fighting Goliath. We think we’re fighting unfathomably powerful economic forces (i.e., Moloch) trying to build AGI at any cost, and Peter thinks he’s fighting a dominant culture that remorselessly smothers any tech progress.
Here’s a transcript. Sorry for the slight inaccuracies; I got Whisper-small to generate it using this notebook someone made. Here’s the section about MIRI and Bostrom.
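(For anyone wanting to reproduce the transcript, the notebook presumably wraps something like the minimal sketch below. It assumes the open-source openai-whisper package and ffmpeg are installed; the audio filename is a placeholder, not the actual file used.)

```python
# Minimal sketch of the transcription step described above.
# Assumes: pip install openai-whisper, and ffmpeg available on PATH.
import whisper

model = whisper.load_model("small")          # the "Whisper-small" checkpoint
result = model.transcribe("thiel_talk.mp3")  # hypothetical audio file of the talk
print(result["text"])                        # plain-text transcript
```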
It’s completely unclear to me whether he actually thinks there is a risk to humanity from superhuman AI, and if so, what he thinks could or should be done about it.
For example, is he saying that “you will never know that [superhuman AGI] is aligned” truly is “a very deep problem”? Or is he saying that this is a pseudo-problem created by following the zeitgeist or something?
Similarly, what is his point about Darwinism and Machiavellianism? Is he saying that, because that’s how the world works, superhuman AI is obviously risky? Or is he saying that these are assumptions that create the illusion of risk?
In any case, Thiel doesn’t seem to have any coherent message about the topic itself (as opposed to disapproving of MIRI and Nick Bostrom). I don’t find that completely surprising. It would be out of character for a politically engaged, technophile entrepreneur to say “humanity’s latest technological adventure is its last, we screwed up and now we’re all doomed”.
His former colleague Elon Musk speaks more clearly (“We are not far from dangerously strong AI,” tweeted four days ago), and he does have a plan: if you can’t beat them, join them, by wiring up your brain (i.e., Neuralink).
Of course, “Mary” in the transcript should be “MIRI”.
This feels like a conflict-theory-on-corrupted-hardware argument: AI risk people think they are guided by technical considerations, but the norm governing their behavior is the same as with everything else in technology, namely smothering progress instead of earnestly seeking a way forward and navigating the dangers.
So I think the argument is not about the technical considerations, which could well be mostly accurate, but a culture of unhealthy attitude towards them, shaping technical narratives and decisions. There’s been a recent post making a point of the same kind.
I watched that talk on YouTube. My first impression was strongly that he was using hyperbole to drive the point home for the audience; the talk was littered with the pithiest versions of his positions. Compare with the series of talks he gave after Zero to One was released for the more general way he expresses similar ideas, and you can also compare with some of the talks he gives to political groups. On a spectrum between a Zero to One talk and a Republican Convention talk, this was closer to the latter.
That being said, I wouldn’t be surprised if he was skeptical of any community that thinks much about x-risk. In terms of his 2x2 of definite-indefinite and optimism-pessimism, his past comments on American culture have been about losing definite optimism. I expect he would view anything focused on x-risk as falling into the definite-pessimism camp, which is to say we are surely doomed and should plan against that outcome. By the coarsest sorting my model of him uses, we fall outside the “good guy” camp.
He didn’t say anything about this specifically in the talk, but I observe his heavy use of moral language. I strongly expect he takes a dim view of the prevalence of utilitarian perspectives in our neck of the woods, which is not surprising because it is something we and our EA cousins struggle with ourselves from time to time.
As a consequence, I fully expect him to view the rationality movement as people who are doing not-good-guy things and who use a suspect moral compass all the while. I think that is wrong, mind you, but it is what my simple model of him says.
It is easy to imagine outsiders having this view. I note people within the community have voiced dissatisfaction with the amount of content that focuses on AI stuff, and while strict utilitarianism isn’t the community consensus it is probably the best-documented and clearest of the moral calculations we run.
In conclusion, Thiel’s comments don’t cause me to update on the community, because they don’t tell me anything new about us, but they do help firm up some of the dimensions along which our reputation among the public is likely to vary.
To me it sounds like Thiel is making a political argument against… diversity, wokeness, the general opposition against western civilization and technology… and pattern-matching everything to that. His argument sounds to me like this:
“A true libertarian is never afraid of progress; he boldly goes forward and breaks things. You cannot separate dangerous research from useful research anyway; every invention is dual-use, so worrying about horrible consequences is silly, and progress is always a net gain. The only reason people think about risks is political mindkilling.
“I am disappointed that Bay Area rationalists stopped talking about awesome technology and instead talk about dangers. Of course AI will bring new dangers, but it only worries you if you have had a post-COVID mental breakdown. Note that even university professors, who by definition are always wrong and only parrot government propaganda, agree about the dangers of AI, which means it is now part of the general woke anti-technology attitude. And of course the proposed solution is world government and secret police controlling everyone! Even the Bible says that we should fear the Antichrist more than we fear Armageddon.”
The charitable explanation is that he only pretends to be mindkilled, in order to make a political point.
I agree with your interpretation of Thiel. The guy is heavily involved in right-wing US politics, and that’s an essential piece of context for interpreting his actions and statements. He’s powerful, rich, smart and agentic. While we can interrogate his words at face value, it’s also fine to interpret them as a tool for manipulating perceptions of status. He has now written “Thiel’s summary of Bay Area rationalists,” and insofar as you’re exposed to and willing to defer to Thiel’s take, that is what your perception will be. More broadly, he’s setting what the values will be at the companies he runs, the political causes he supports, and garnering support for his vision by defining what he stands against. That’s a function separate from the quality of the reasoning in his words.
Thiel seems like a smart enough person to make a precise argument when he wants to, so when he loads his words with pop culture references and describes his opponents as “the mouth of Sauron,” I think it’s right to start with the political analysis. Why bother reacting to Thiel if you’re mainly concerned with the content of his argument? It’s not like it’s especially new or original thinking. The reason to focus on Thiel is that you’re interested in his political maneuvers.
FWIW I’ve often heard him make precise arguments while also using LOTR references and metaphorical language like this, so I don’t think this is a sufficient trigger for “he must be making a political statement and not a reasoned one”.
I specifically said you can interpret his statement on the level of a reasoned argument. Based on your response, you could also update in favor of seeing even his more reason-flavored arguments as having political functions.
It seems like a cached speech from him; he echoed the same words at the Oxford Union earlier this month. I’m unsure how much updating this warrants. He constantly pauses and is occasionally inflammatory, so my impression was that he was measuring his words carefully for the audience.
dude has been funding trumpism, I wouldn’t really read much into what he says
edit 4mo later: https://johnganz.substack.com/p/the-enigma-of-peter-thiel
WTF downvotes! you wanna explain yourselves?
I’m guessing the problem is that you are advocating against dignifying the evil peddlers of bunkum by acknowledging them as legitimate debate partners.
oh hmm. thanks for explaining! I think I don’t universally agree with offering intellectual charity, especially to those with extremely large implementable agency differences, like thiel (and sbf, and musk, and anyone with a particularly enormous stake of power coupons, aka money). I’m extremely suspicious by default of such people, and the fact that thiel has given significantly to the trump project seems like strong evidence that he can’t be trusted to speak his beliefs, since he has revealed a preference for those who will take any means to power. my assertion boils down to “beware adversarial agency from trumpist donors”. perhaps it doesn’t make him completely ignorable, but I would still urge unusually much caution.
The exercise of figuring out what he could’ve meant doesn’t require knowing that he believes it. I think the point I formulated makes sense and is plausibly touching on something real, but it’s not an idea I would’ve spontaneously thought of on my own, so the exercise is interesting. Charity to something strange is often like that. I’m less clear on whether it’s really the point Thiel was making, and I have no idea if it’s something he believes, but that doesn’t seem particularly relevant.
fair enough!
See I just think it means he’s a shortsighted greedy moron
I mean, I agree with that assessment. I do think that, hmm, it should be more possible to be direct about criticism on LessWrong without also dismissing the possibility of considering your interlocutor to be speaking meaningfully. Even though you’re agreeing with me, I do also agree with Nesov’s comment in a way: if you can’t consider the possibility of adversarial agency without needing to bite back hard, you can’t evaluate it usefully.