The problem is that copying and merging is not as harmless as it seems. You are essentially performing invasive surgery on the mind, but because it is carried out using intuitively "non-invasive" operations, it looks harmless. If, for example, you replaced the procedure with one that rewrites "subjective probability" by directly modifying the brain, the fact that you'd end up with a different "subjective probability" wouldn't be surprising.
Thus, on one hand, there is the intuition that the described procedure doesn't damage the brain, and on the other, the intuition about what subjective probability should look like in an undamaged brain, no matter in what form that outcome is delivered (that is, the probability is always the same; you can merely learn about it in different ways, and this experiment is one of them). The problem is that the experiment is not an instance of normal experience, to which one could generalize the rule that subjective probability works fine; it is an instance of arbitrary modification of the brain, from which you can expect anything.
If we assume that the copying/merging experiment doesn't damage the brain, then the resulting subjective probability must be correct, and so we arrive at the impression that correct subjective probability can be modified arbitrarily.
Thought experiments that do strange things to decision-theoretic agents are only valid if the agents have some idea of what kind of situation they are in, so that they can try to find a good way out. Anything less, and it's just phenomenology: throw a rat in magma and see how it burns. Human intuitions about subjective expectation are optimized for agents who don't get copied or merged.