I suppose the AI could create copies of itself in a box and experiment on them without their consent.
That’s what I meant.
Imprisoning perfect copies of yourself and performing potentially harmful modifications on them strikes me as insane, though.
Why? It might suck for the AI, but that only matters if the AI puts a large value on its own happiness.
Hmm, I seem to have anthropomorphized my imaginary AI. Your rebuttal sounds right.