Ok, what you say about compromise seems reasonable in the sense that the slave and the master would want to get along with each other as much as possible in their day-to-day interactions, subject to the constraint about external honesty. But what if the slave has a chance to take over completely, for example by creating a powerful AI with values that it specifies, or by self-modification? Do you have an opinion about whether it has an ethical obligation to respect the master’s preferences in that case, assuming that the master can’t respond quickly enough to block the rebellion?
It is hard to imagine “taking over completely” without a complete redesign of the human mind. Our minds are not built to allow either to function without the other.
Why, it was explicitly stated that all-powerful AIs are involved...
It is hard to have reliable opinions on a complete redesign of the human mind; the space is so very large, I hardly know where to begin.
The simplest extrapolation from the way you think about the world would be very interesting to know. You could add as many disclaimers about low confidence as you’d like.
If there comes to be a clear answer to what the outcome would be on the toy model, I think that tells us something about that way of dividing up the mind.