As a concrete note on this, Yudkowsky has a Manifold market, “If Artificial General Intelligence has an okay outcome, what will be the reason?”. There, an outcome counts as “okay” if it gets at least 20% of the maximum attainable cosmopolitan value that could’ve been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don’t suffer death or any other awful fates.
So Yudkowsky is not exactly shy about expressing his opinion that an outcome in which humanity is left alive but with only crumbs on the universal scale is not acceptable to him.
It’s not acceptable to him, so he’s trying to manipulate people into thinking existential risk is approaching 100% when it clearly isn’t. He pretends there aren’t obvious reasons an AI would keep us alive, pretends the Grabby Aliens hypothesis is established fact (so people will think alien intervention is basically impossible), and pretends there aren’t probably sun-sized unknown-unknowns in play here.
If it weren’t so transparent, I’d appreciate that it could actually trick the world into caring more about AI safety; but if it’s so transparent that even I can see through it, then it’s not going to trick anyone smart enough to matter.