In the worlds where we get AI doom, it’s likely because of large amounts of easy optimization slack that AGI exploits, leading to hard takeoff, or perhaps coordination failures and deceptive alignment in slower takeoff scenarios. Either way, there doesn’t seem to be much one can do about that other than contribute to AI safety.
Contrast this with nuclear war, where more concrete conventional preparation like bomb shelters and disaster survival supplies has at least some non-epsilon expected payoff.
Also, most of the current leaders/experts in AI assign much lower probability to doom than LW folks do.