In general, even in the rationality community, people’s reactions, including my own, to the fact that doom seems imminent—whether it’s in 5 years or 50 years—seem much too small. I wonder how much of this is because it feels science-fiction-y.
If it were nuclear war, would that change things? An asteroid hitting? What about whether it is mainstream vs. non-mainstream people sounding the alarm? If a majority of mainstream academics were sounding the alarm about an asteroid hitting in the next 5–50 years, would reactions be different?
In the worlds where we get AI doom, it’s likely because of large amounts of easy optimization slack that AGI exploits, leading to hard takeoff, or perhaps because of coordination failures and deceptive alignment in slower-takeoff scenarios. Either way, there doesn’t seem to be much one can do about it other than contribute to AI safety.
Contrast this with nuclear war, where more concrete, conventional preparation, like bomb shelters and disaster survival supplies, has at least some non-epsilon expected payoff.
Also, most of the current leaders and experts in AI don’t put much probability on doom compared to LW folks.