As I’ve suggested before, one of the less drastic forms that a “pivotal act” could take (if we got to the point where one was needed: currently most governments appear to be taking AI risk fairly seriously) is a competent well-documented demonstration of “here’s how an ASI could take over the world/defeat humanity if it wanted to” (preferably a demonstration that doesn’t actually kill anyone). What you discuss is the other half of that: “an AGI that clearly wanted to take over the world/defeat humanity, but wasn’t in fact up to pulling it off correctly”.
I also, sadly, agree that we as a society might not pay much attention until hundreds of people or more die from one of these attempts. Or it might be that the level of public concern is already high enough that we would.