It’s not acceptable to him, so he’s trying to manipulate people into thinking existential risk is approaching 100% when it clearly isn’t. He pretends there aren’t obvious reasons an AI would keep us alive, pretends the Grabby Alien Hypothesis is established fact (so people conclude alien intervention is basically impossible), and pretends there aren’t likely to be sun-sized unknown-unknowns still in play here.
If it weren’t so transparent, I could appreciate it as a way to trick the world into caring more about AI safety. But if it’s transparent enough that even I can see through it, it isn’t going to fool anyone smart enough to matter.
What are you specifically planning to accomplish?
In a post-ASI world, the assumption that society will honor returns on invested capital is basically gone. Like the last round of a very long iterated prisoner’s dilemma, there’s no longer an incentive to Cooperate. There’s still time between now and then to invest, but the generic “more long-term capital = good” mindset seems insufficient without an exit strategy or final use case.
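To make the prisoner’s dilemma point concrete, here’s a minimal sketch (my own illustration, using standard textbook payoff numbers rather than anything from the argument above) of why a known final round kills cooperation: with no future rounds left to protect, Defect maximizes this round’s payoff no matter what the other side plays.

```python
# Illustrative only: hypothetical payoffs for a standard one-shot prisoner's dilemma.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def best_final_round_move(their_move: str) -> str:
    """In the last round there is no future to protect, so the only question
    is which move maximizes the payoff against whatever the other side plays."""
    return max("CD", key=lambda my_move: PAYOFF[(my_move, their_move)])

# Defect dominates regardless of the other player's choice:
for theirs in "CD":
    print(theirs, "->", best_final_round_move(theirs))  # prints "D" both times
```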
Personally, I’m trying to balance the various risks of the choppy years right before ASI, and to maximize charitable outcomes while I still have some agency in this world.