I agree with this part of Chrysophylax’s comment: “It’s not necessary when the UnFriendly people are humans using muscle-power weaponry.” Humans can be non-Friendly without immediately destroying the planet because humans are a lot weaker than a superintelligence. If you gave a human unlimited power, it would almost certainly make the world vastly worse than it currently is. We should be at least as worried, then, about giving an AGI arbitrarily large amounts of power, until we’ve figured out reliable ways to safety-proof optimization processes.