These assumptions … are enough to specify uniquely how utility should work across copying, deleting, and merging.
I’m not sure they are.
They are—in that given the assumptions, you have that result. Now, the result might not be nice or ideal, but that means that the assumptions are wrong.
Now, you are pointing out that my solution is counter-intuitive. I agree completely; my intuition was in the previous post, and it all went wrong. I feel these axioms are like those of expected utility—idealisations that you would want an AI to follow, that you might want to approximate, but that humans can’t follow.
But there is a mistake that a lot of utilitarians make, and that is to insist that their utility function must be simple. Not so; there is no reason to require that. Here, I’m sure we could deal with these problems in many different ways.
The blind woman example is the hardest. By assumption, the others will have to feel the fuzzies for a copy of them helping her cross the street, or a non-indexical happiness from Mrs. Atkins being helped across the street, or similar. But the others can easily be dealt with… The simplest way for obligations and expectations is to say that all copies have all the obligations and expectations incurred by each of their members. Legally at least, this seems perfectly fine.
As for the property, there are many solutions, and one extreme one is this: only the utility of the copy that has the tie/house/relationship actually matters. As I said, I am only forbidding intrinsic differences between copies; a setup that says “you must serve the copy of you that has the blue and gold tie” is perfectly possible. Though stupid. But most intuitive ways of doing things that you can come up with can be captured by the utility function, especially if you can make it non-indexical.
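To make the non-indexical requirement concrete, here is a toy sketch (the property names, weights, and world representation are all invented for illustration, not anything from the original argument). The utility function scores only which facts hold somewhere in the world, never which copy "you" are, so permuting the copy labels can never change its value — exactly the sense in which no copy is intrinsically special, even though the copy holding the tie can still matter:

```python
from itertools import permutations

# A world is a tuple of copies; each copy is a frozenset of the
# non-indexical properties it happens to have. All names and
# weights below are purely illustrative assumptions.

def utility(world):
    """Non-indexical utility: depends only on which properties are
    instantiated somewhere in the world, not on which copy index
    ("I") instantiates them."""
    u = 0.0
    if any("helped_mrs_atkins" in c for c in world):
        u += 10.0  # happiness that Mrs. Atkins was helped, by anyone
    if any("has_tie" in c and "is_served" in c for c in world):
        u += 1.0   # the copy holding the tie is being served
    return u

def is_non_indexical(u, world):
    """Sanity check: relabelling (permuting) the copies must leave
    the utility unchanged, so no copy is intrinsically privileged."""
    return all(u(tuple(p)) == u(world) for p in permutations(world))

world = (frozenset({"has_tie", "is_served"}),
         frozenset({"helped_mrs_atkins"}),
         frozenset())
```

Here `utility(world)` returns 11.0, and `is_non_indexical` holds: the tie-holder's situation affects the score, but swapping which copy is which does not — intrinsic differences between copies are forbidden while "serve the copy with the tie" remains expressible.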