In light of the news that apparently someone or something is hacking into automated factory control systems, I would like to suggest that the Apocalypse Threat Level be raised from Guarded (lots of curious programmers own fast computers) to Elevated (deeply nonconclusive evidence consistent with a hard takeoff actively in progress).
It looks a little odd for a hard takeoff scenario: it seems to be prevalent only in Iran, it seems configured to target a specific control system, and it uses 0-days wastefully (I've seen a claim that it uses four 0-days and two stolen certificates). On the other hand, this is not inconsistent with an AI going after a semiconductor manufacturer and throwing in some Iranian targets as a distraction.
My preference ordering is friendly AI > humans > unfriendly AI; my probability ordering is humans > unfriendly AI > friendly AI.