Thiel’s argument against Bostrom’s Vulnerable World Hypothesis is basically “Well, Science might cause bad things, but totalitarianism might cause even worse stuff!”, which, sure, but Bostrom’s whole point is that we seem to be confronted with a choice between two very undesirable outcomes: either technology kills us or we become totalitarian. Either we risk death from cancer, or we risk death from chemotherapy. Thiel implicitly agrees with this frame; it’s just that he thinks the cure is worse than the disease. He doesn’t offer some third option or argue that science is less dangerous than Bostrom believes.
He also unfortunately doesn’t offer much against Eliezer’s “Death With Dignity” post: no specific technical counterarguments, just some sneering and “Can you believe these guys?” stuff. I don’t think Thiel would be capable of recognizing the End of the World as such 5 years before it happens. However, his point about the weirdness of Bay Area rationalists is true, though not especially new.
The best arguments against the VWH solution are in the post Enlightenment values in a vulnerable world, especially once we are realistic about what incentives states are under:
The above risks arise from a global state which is loyally following its mandate of protecting humanity’s future from dangerous inventions. A state which is not so loyal to this mandate would still find these tools for staying in power instrumental, but would use them in pursuit of much less useful goals. Bostrom provides no mechanism for making sure that this global government stays aligned with the goal of reducing existential risk and conflates a government with the ability to enact risk reducing policies with one that will actually enact risk reducing policies. But the ruling class of this global government could easily preside over a catastrophic risk to their citizens and still enrich themselves. Even with strong-minded leaders and robust institutions, a global government with this much power is a single point of failure for human civilization. Power within this state will be sought after by every enterprising group whether they care about existential risk or not. All states today are to some extent captured by special interests which lead them to do net social harm for the good of some group. If the global state falls into the control of a group with less than global interests, the alignment of the state towards global catastrophic risks will not hold.
A state which is aligned with the interests of some specific religion, race, or an even smaller oligarchic group can preside over and perpetrate the killing of billions of people and still come out ahead with respect to its narrow interests. The history of government gives no evidence that alignment with decreasing global catastrophic risk is stable. By contrast, there is evidence that alignment with the interests of some powerful subset of constituents is essentially the default condition of government.
If Bostrom is right that minimizing existential risk requires a stable and powerful global government, then politicide, propaganda, genocide, scapegoating, and stagnation are all instrumental in pursuing the strategy of minimizing anthropogenic risk. A global state with this goal is therefore itself a catastrophic risk. If it disarmed other more dangerous risks, such a state could be an antidote, but whether it would do so isn’t obvious. In the next section we consider whether the panopticon government is likely to disarm many existential risks.
Beyond these two examples, a global surveillance state would be searching the urn specifically for black balls. This state would have little use for technologies which would improve the lives of the median person, and they would actively suppress those which would change the most important and high status factors of production. What they want are technologies which enhance their ability to maintain control over the globe. Technologies which add to their destructive and therefore deterrent power. Bio-weapons, nuclear weapons, AI, killer drones, and geo-engineering all fit the bill.
A global state will always see maintaining power as essential. A nuclear arsenal and an AI powered panopticon are basic requirements for the global surveillance state that Bostrom imagines. It is likely that such a state will find it valuable to expand its technological lead over all other organizations by actively seeking out black ball technologies. So in addition to posing an existential risk in and of itself, a global surveillance state would increase the risk from black ball technologies by actively seeking destructive power and preventing anyone else from developing antidotes.
Thiel’s arguments about both the Vulnerable World Hypothesis and Death with Dignity were so (uncharacteristically?) shallow that I had to question whether he actually believes what he said, or was just making an argument he thought would be popular with the audience. I don’t know enough about his views to say, but my guess is that it’s somewhat (20%+) likely he was just playing to the audience.
The VWH is very iffy. It can be generalized into fairly absurd conclusions. It’s like Pascal’s Mugging, but with unknown unknowns, which evade statistical analysis by definition.
“We don’t know if SCP-tier infohazards can result in human extinction. Every time we think a new thought, we’re reaching into an urn, and there is a chance that it will become both lethal and contagious. Yes, we don’t know if this is even possible, but we’re thinking a lot of new thoughts nowadays. The solution to this is...”
“We don’t know if the next vaccine can result in human extinction. Every time we make a new vaccine, we’re reaching into an urn, and there is a chance that it will accidentally code for prions and kill everyone 15 years later. Or something we can’t even imagine right now. Yes, judging by our current types of vaccines this is very unlikely, and our existing vaccines do in fact provide a lot of benefits, but we don’t know whether the next vaccine we invent, especially if it’s using new techniques, will be able to slip past existing safety standards and cause human extinction. The solution to this is...”
“Since you can’t statistically analyze unknown unknowns, and some of them might result in human extinction, we shouldn’t explore anything without a totalitarian surveillance state”
I think Thiel detected an adversarial attempt to manipulate his decision-making and rejected it on principle.
My main problem is the “unknown unknowns evade statistical analysis by definition” part. There is nothing we can do to satisfy the VWH except by completely implementing its directives. It’s in some ways argument-proof by design, since it incorporates unknown unknowns so heavily. Since nothing can be used to disprove the VWH, I reject it as a bad hypothesis.
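To make the “evades statistical analysis by definition” complaint concrete, here is a minimal sketch of the urn arithmetic (my own illustration; the numbers are arbitrary assumptions, not anything from Bostrom’s paper or Thiel’s talk). The survival estimate you get out is almost entirely a function of the black-ball probability you assume going in, and that probability is exactly the quantity the unknown-unknowns framing says we cannot estimate:

```python
# Minimal sketch: treat each new technology / new thought as a draw from
# Bostrom's urn, with an assumed (unknowable) probability p of a "black ball".
# The point is that the conclusion tracks the assumed p, not any data.

def survival_probability(p_black: float, draws: int) -> float:
    """Chance of never drawing a black ball across `draws` independent draws."""
    return (1.0 - p_black) ** draws

N_DRAWS = 10_000  # hypothetical number of future inventions; also an assumption

for p in (1e-9, 1e-6, 1e-4):  # three arbitrary guesses at the black-ball probability
    print(f"p = {p:.0e}: P(no catastrophe) = {survival_probability(p, N_DRAWS):.4f}")

# Prints roughly 1.0000, 0.9900, and 0.3679: the answer swings from "nothing to
# worry about" to "doom is plausible" purely on the choice of p, and no
# observation available today distinguishes those choices.
```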
Here’s a link to the longer version of the Enlightenment values in a vulnerable world post:
https://forum.effectivealtruism.org/posts/A4fMkKhBxio83NtBL/enlightenment-values-in-a-vulnerable-world
Thiel’s arguments are perfectly characteristically shallow, as usual for him.
I found none of those quotes in https://nickbostrom.com/papers/vulnerable.pdf
When using quotation marks, please be more explicit about where the quotes are from, if anywhere.
How the VWH could be extrapolated is of course relevant and interesting, but wouldn’t it make sense to pick an example from the actual text?
This is the same dude who has been funding Trump heavily; his claim that he doesn’t want totalitarianism is probably nonsense.