A guide might be how humans have treated each other up to now. Historically, humans have tended to be reluctant to grant the full privileges of humanity even to other humans when they could gain an advantage for themselves or their group, until the other humans in question actually figured out how to shoot back. This may itself be a convincing practical reason to treat AIs nicely.
I’m not sure that an AI would necessarily realize that punching back is the obvious answer. However, I do agree that if you are using evolution or some similar process, then you run the risk of eventually creating one that will. Hence my argument below that this is a bad idea.