The ultimate hazard is an AI constrained as far as it can be while still possibly being useful: an AI set up to transmit a single (very limited) message and then self-destruct, so that it can never witness the result of any of its actions, including that message, and yet the message is still hazardous.
The self-fulfilling prophecy has been well-known in fiction for centuries. Or the ambiguous prophecy—consider what is said to have happened when Croesus asked the Oracle whether he should attack the Persians. “If you attack,” the Oracle reputedly said, “you will destroy a great empire.” Wanting to destroy the great Persian empire, and encouraged by this answer, Croesus immediately attacked...
...an action which led to the Persians promptly destroying Croesus’ empire.
Prophecy can be a weapon, and it can be turned against those who know what the prophecy says.