The VWH is very iffy. It can be generalized into fairly absurd conclusions. It’s like Pascal’s Mugging, but with unknown unknowns, which evade statistical analysis by definition.
“We don’t know if SCP-tier infohazards can result in human extinction. Every time we think a new thought, we’re reaching into an urn, and there is a chance that it will become both lethal and contagious. Yes, we don’t know if this is even possible, but we’re thinking a lot of new thoughts nowadays. The solution to this is...”
“We don’t know if the next vaccine can result in human extinction. Every time we make a new vaccine, we’re reaching into an urn, and there is a chance that it will accidentally code for prions and kill everyone 15 years later. Or something we can’t even imagine right now. Yes, according to our current types of vaccines this is very unlikely, and our existing vaccines do in fact provide a lot of benefits, but we don’t know if the next vaccine we invent, especially if it’s using new techniques, will be able to slip past existing safety standards and cause human extinction. The solution to this is...”
“Since you can’t statistically analyze unknown unknowns, and some of them might result in human extinction, we shouldn’t explore anything without a totalitarian surveillance state.”
I think Thiel detected an adversarial attempt to manipulate his decision-making and rejected it out of principle.
My main problem is the “unknown unknowns evade statistical analysis by definition” part. There is nothing we can do to satisfy the VWH except completely implement its directives. It’s in some ways argument-proof by design, since it leans so heavily on unknown unknowns. Since nothing could ever disprove the VWH, I reject it as a bad hypothesis.
I found none of those quotes in https://nickbostrom.com/papers/vulnerable.pdf
When using quotation marks, please be more explicit about where the quotes are from, if anywhere.
How the VWH could be extrapolated is of course relevant and interesting; wouldn’t it make sense to pick an example from the actual text?