we’re increasing the risk that some kinds of AI (ones that value their own existence more than human life) will destroy us preemptively.
If we make an AI like that in the first place, then we've pretty much already lost. If it ever had to destroy us to protect its ability to carry out its top-level goals, then those goals are clearly something other than humaneness, in which case it'll most likely wipe us out once it's powerful enough, whether or not we ever felt threatened by it.