Friendly intelligence is not particularly important when the intelligence in question is significantly less powerful an optimizer than its creator. I’m not really sure what would motivate a superintelligence to create entities like me, but given the assumption that one did so, it doesn’t seem more likely that it created me with (approximately) its own morality than that it created me with some different morality.
I take it you don’t think we have a chance of creating a superpowerful AI with our own morality?
We don’t have to be very intelligent to be a threat if we can create something that is.
I don’t think we have a chance of doing so if we have a superintelligent creator who has taken steps to prevent us from doing so, no. (I also don’t think it likely that we have such a creator.)