I was referring to the fact that the AI creates a copy of itself to modify. To me, this implies that the copies (and, by extension, the ‘original’ AI) contain code that allows them to be modified by copies of themselves.
I suppose the AI could create copies of itself in a box and experiment on them without their consent. Imprisoning perfect copies of yourself and performing potentially harmful modifications on them strikes me as insane, though. related: http://lesswrong.com/lw/1pz/ai_in_box_boxes_you/
That’s what I meant.
Why? It might suck for the AI, but that only matters if the AI puts a large value on its own happiness.
Hmm, I seem to have anthropomorphized my imaginary AI. Your rebuttal sounds right.