What if, instead of a paperclip-maximizer, we had a machine that was designed to maximize the amount of machine-pleasure in the world, where machine-pleasure is “the firing of a certain reward circuit in a system that is sufficiently similar to myself”?
Then it seems yttrium has a point: it is all going to come down to when the machine decides there are two systems in the world, and when it decides there is only one. And there is no “obvious” choice for the machine to make in this regard.
Edit: And so, if we want to make an AI that maximizes (among other things) certain subjective human experiences, we will have to make sure it doesn’t come to some sort of crazy conclusion about what that entails.
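To make the ambiguity concrete, here is a toy sketch (entirely hypothetical; the feature vectors, similarity measure, and thresholds are made-up illustrations, not anything proposed above) of how the machine-pleasure count can change depending on how the machine individuates “systems sufficiently similar to myself”:

```python
# Toy sketch (hypothetical): how many "systems sufficiently similar to myself"
# exist depends on the individuation criterion. Here identity is decided by a
# similarity threshold, and the same physical situation yields different counts
# (and hence different machine-pleasure totals) under different thresholds.

def count_similar_systems(self_state, candidate_states, threshold):
    """Count candidates whose similarity to self_state meets the threshold."""
    def similarity(a, b):
        # Toy similarity: fraction of matching features (purely illustrative).
        matches = sum(1 for x, y in zip(a, b) if x == y)
        return matches / len(a)
    return sum(1 for c in candidate_states
               if similarity(self_state, c) >= threshold)

self_state = (1, 0, 1, 1)
candidates = [(1, 0, 1, 1),   # an exact copy
              (1, 0, 1, 0)]   # a near-copy, differing in one feature

# Two reasonable-looking thresholds give two different answers.
print(count_similar_systems(self_state, candidates, threshold=1.0))   # -> 1
print(count_similar_systems(self_state, candidates, threshold=0.75))  # -> 2
```

The point is only that both thresholds look defensible, and nothing in the specification of “machine-pleasure” picks one over the other.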
I’ve followed you, Manfred, in framing this question in terms of values and right actions. But the original question was framed in terms of expectations and future experiences. Do you think that the original question doesn’t make sense, or do you have something to say about the original formulation as well? I myself am on the fence.
If you make a robot that explicitly cares about things differently depending on how heavy it is, then sure, it can take actions as if it cared about things more when it’s heavier.
But that is done using the same probabilities as normal, merely a different utility function. Changing your utilities without changing your probabilities has no impact on the “probability of being a mind.”
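As a minimal sketch of what I mean (toy numbers and a made-up scenario, nothing more): the heaviness-sensitive robot is just ordinary expected-utility maximization with a mass-dependent utility function, while the probability distribution over outcomes is left untouched.

```python
# Toy sketch (hypothetical): a robot that "cares more when it's heavier" is
# just expected-utility maximization with a mass-dependent utility function.
# The probabilities over outcomes are identical in both cases; only the
# utilities change.

def expected_utility(outcomes, probs, utility):
    return sum(p * utility(o) for o, p in zip(outcomes, probs))

outcomes = ["find_charger", "run_flat"]
probs = [0.7, 0.3]                     # same probabilities either way

def make_utility(mass_kg):
    base = {"find_charger": 10.0, "run_flat": -5.0}
    # Utility scales with the robot's mass; its beliefs do not.
    return lambda outcome: mass_kg * base[outcome]

print(expected_utility(outcomes, probs, make_utility(mass_kg=1.0)))   # 5.5
print(expected_utility(outcomes, probs, make_utility(mass_kg=10.0)))  # 55.0
```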
We don’t have to program the machine to explicitly care about things differently depending on how heavy it is. Instead, we program the machine to care simply about how many systems exist—but wait! It turns out we don’t know what we mean by that! Or so yttrium argues.