If you want to extract the master because it affects the values of the slave, then you’d also have to extract the rest of the universe because the master reacts to it. I think drawing a circle around just the creature’s brain and saying all the preferences are there is a [modern?] human notion. (and perhaps incorrect, even for looking at humans.)
In this model, I assume that the master has stable and consistent preferences, which don't react to the rest of the universe. It might adjust its strategies based on changing circumstances, but its terminal values stay constant.
We need our environment, especially other humans, to form our preferences in the first place.
This is true in my model for the slave, but not for the master. Obviously, real humans are much more complicated, but I think the model captures some element of the truth here.
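To make the distinction concrete, here is a minimal toy sketch (my own illustration, not part of the original model; all class and value names are hypothetical): the master's terminal values are fixed constants, its strategy reacts to circumstances, and the slave's values are whatever the master installs in it.

```python
# Toy sketch of the assumption discussed above, not the post's actual model.
# The master's terminal values never change; only its strategy (and hence
# the values it installs in the slave) reacts to the environment.

class Master:
    def __init__(self):
        # Fixed terminal values: set once, never updated.
        self.terminal_values = {"status": 1.0, "reproduction": 1.0}

    def choose_strategy(self, circumstances):
        # The strategy responds to circumstances; the terminal values do not.
        if circumstances == "famine":
            return {"food": 2.0, "art": 0.1}
        return {"food": 0.5, "art": 1.0}

    def program_slave(self, slave, circumstances):
        # The master shapes the slave's preferences as an instrument.
        slave.values = self.choose_strategy(circumstances)


class Slave:
    def __init__(self):
        # The slave has no preferences of its own to start with;
        # they are formed for it, in reaction to the environment.
        self.values = {}


master = Master()
slave = Slave()
master.program_slave(slave, "famine")
print(slave.values)            # changes when circumstances change
print(master.terminal_values)  # stays constant throughout
```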