I’m not that invested in defending the p>99% thing; as Yudkowsky argues in this tweet:
If you want to trade statements that will actually be informative about how you think things work, I’d suggest, “What is the minimum necessary and sufficient policy that you think would prevent extinction?”
I see the business-as-usual default outcome as AI research progressing until it produces unaligned AGI, which triggers an intelligence explosion and thus extinction. That would be the >99% thing.
The kinds of minimum necessary and sufficient policies I can personally imagine that might possibly prevent that default outcome would require institutions laughably more competent than what we have, and policies utterly outside the Overton window. Something like a global ban on AI research plus a similar freeze on compute scaling, enforced by measures like countries credibly threatening global nuclear war over any violations. (Though probably even that wouldn’t work, because AI research and GPU production cannot be easily detected via inspections and surveillance, unlike the production of nuclear weapons.)
My initial comment isn’t really arguing for the >99% thing. Most of that comes from me sharing the same so-called pessimistic (I would say realistic) expectations as some LWers (e.g. Yudkowsky’s AGI Ruin: A List of Lethalities) that the default outcome of AI progress is unaligned AGI → unaligned ASI → extinction, that we’re fully on track for that scenario, and that it’s very hard to imagine how we’d get off that track.
No, I didn’t mean it like that. I meant that we’re currently (in 2025) in the >99% doom scenario, and that it seems to me we were overdetermined (even back in e.g. 2010) to end up in that scenario (contra Ruby’s “doomed for no better reason than because people were incapable of not doing something”), even if some particulars had changed, e.g. if some specific actors like our leading AI labs had never come to exist. Because we’re in a world where technological extinction is possible and is the default outcome of AI research, and our civilization is fundamentally unable to grapple with that fact.

Plus, a bunch of our virtues (like democracy, or freedom of commerce) turn from virtues into vices in a world where any particular actor can doom everyone by doing sufficient technological research; we have no mechanism whereby these actors are forced to internalize the negative externalities of their actions (via extinction insurance or some such).
I don’t understand this part. Do you mean an alternative scenario where compute and AI progress had been so slow, or the compute and algorithmic requirements for AGI so high, that our median expected time for a technological singularity would be around the year 2070? I can’t really imagine a coherent world where AI alignment progress is relatively easier to accomplish than algorithmic progress (since AI capabilities research yields actual feedback, whereas AI alignment research yields hardly any feedback), so wouldn’t we then in 2067 just be in the same situation as we are now?
I don’t understand the world model in which that prevents any negative outcomes. For instance, AI labs like OpenAI currently argue that they should be under zero regulations, and have even petitioned the US government to be exempted from regulation; and the current US government itself cheerleads race dynamics and is strictly opposed to safety research. Even if some AI labs voluntarily submitted themselves to some kind of standards, that wouldn’t help anyone when OpenAI and the US government don’t play ball.
(Not to mention that the review board would inevitably be captured by interests like anti-AI-bias stuff, since there’s neither sufficient expertise nor a sufficient constituency for anti-extinction policies.)
That amounts to disbelief in superintelligence. You need to deflect the asteroid (prevent unaligned ASI from coming into being) long before it crashes into Earth, not once it’s already burning up in the atmosphere. From my perspective, the asteroid is already almost upon us (e.g. see the recent AI 2027 forecast); you’re just not looking at it, or not understanding what you’re seeing.