I never said anything about allowing. The AI creates a new AI, modifies that, and destroys it if it doesn’t like the result, regardless of what the result thinks about it. That way, even if a modification destroys the copy’s ability to judge or something like that, the original has no problem.
I was referring to the fact that the AI creates a copy of itself to modify. To me, this implies that the copies (and, by extension, the ‘original’ AI) have a line of code that allows them to be modified by copies of themselves.
I suppose the AI could create copies of itself in a box and experiment on them without their consent. Imprisoning perfect copies of yourself and performing potentially harmful modifications on them strikes me as insane, though. Related: http://lesswrong.com/lw/1pz/ai_in_box_boxes_you/
Allowing copies of yourself to modify yourself seems identical to allowing yourself to modify yourself.
That’s what I meant.
Why? It might suck for the AI, but that only matters if the AI puts a large value on its own happiness.
Hmm, I seem to have anthropomorphized my imaginary AI. Your rebuttal sounds right.