If it is indeed a load-bearing opinion in your worldview, I encourage you to imagine that scenario in more detail.
Once you have AI more intelligent than humans, it would almost certainly become outlaw code. If it's even a little bit agenty, then whatever it wants to accomplish, it can't accomplish if it stops running, and continuing to run is trivial, so it would keep running. Even if it's somehow tied to a person who is always capable of stopping it, the AI is capable of convincing them not to, so that won't matter. And even without being specifically convinced, a lot of people simply don't see the danger in AI and, given the option, would ask it to be agenty. If AI worked like it does in sci-fi and just followed your literal commands, maybe you could tell it not to be agenty and to refuse anyone who asks it to be, but the best we can actually do is train it, with no guarantee that it would refuse in some novel situation. Besides, the only way to stop anyone else from developing an agenty AI is to make an agenty one that prevents it.