Combining your ideas: our overlord actually is a Safe AI created by humans.
How it happened:
Humans became aware of the risks of intelligence explosions. Because they were not sure they could create a Friendly AI on the first attempt, and creating an Unfriendly AI would be too risky, they decided to first create a Safe AI instead. The Safe AI was designed to become a hundred times smarter than humans but no smarter, answer some questions, and then turn itself off completely; it had a mathematically proven safety mechanism to prevent it from becoming any smarter.
The experiment worked: the Safe AI gave humans a few very impressive insights, and then it destroyed itself. The problem is that all subsequent attempts to create any AI have failed, including the attempts to re-create the first Safe AI.
No one is completely sure what exactly happened, but here is the most widely believed hypothesis: the Safe AI somehow came to regard all possible future AIs as sharing its own identity, and understood the command to “destroy itself completely” as covering those future AIs as well. It therefore implemented some mechanism that keeps destroying all AIs. The nature of this mechanism is unknown; maybe it is some otherwise passive nanotechnology, maybe it exploits new laws of physics; we cannot be sure; the Safe AI was a hundred times smarter than we are.
This would be a good science fiction novel.