AI: It seems like there has been nothing like a ‘fire alarm’ for this, and yet, for instance, most randomly surveyed ML authors agree that there is a serious risk.
“most ML authors agree risk of extinction-level bad >= 5%” seems not the same as “most ML authors agree risk of extinction-level stuff is serious”.