Given only the decisions, you can’t disentangle the probabilities from the utility function anyhow. You’d have to do something like ask the agent nicely about its utility or its probabilities, or calculate one from first principles, to pin down the other. So I don’t feel the situation is qualitatively different: if everything but the probabilities can be seen as a fixed property of the agent, then the agent has some properties, and for each outcome it assigns some probabilities.
A simplification: SIA + individual impact = SSA + total impact
i.e. if I think that worlds with more copies are more likely (but those copies’ decisions are independent of mine), this gives the same behaviour as if I believe my decision determines those of my copies (but worlds with many copies are no more likely).
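A toy calculation illustrates the equivalence. The setup below is my own hypothetical example, not from the source: two candidate worlds, one with a single copy of the agent and one with nine copies, with an even prior between them. Under SIA plus individual impact, each world's prior is weighted by its copy count, and a decision moves only one copy's payoff; under SSA plus total impact, the prior is unweighted, but a decision moves all copies' payoffs at once. The two expected values differ only by a constant normaliser, so they rank actions identically.

```python
# Hypothetical toy example: two worlds, differing in copy count.
prior = {"A": 0.5, "B": 0.5}   # prior over worlds, before any anthropic update
copies = {"A": 1, "B": 9}      # number of copies of the agent in each world
payoff = {"A": 10.0, "B": 2.0} # per-copy payoff of taking some fixed action

# SIA + individual impact: weight each world by its copy count,
# and count only one copy's payoff per decision.
sia_weight = {w: prior[w] * copies[w] for w in prior}
z = sum(sia_weight.values())   # normaliser for the SIA-updated distribution
sia_ev = sum(sia_weight[w] / z * payoff[w] for w in prior)

# SSA + total impact: keep the unweighted prior over worlds,
# but the decision moves every copy's payoff at once.
ssa_ev = sum(prior[w] * copies[w] * payoff[w] for w in prior)

# The two expected values agree up to the constant factor z,
# so both frameworks rank all actions the same way.
print(sia_ev * z, ssa_ev)
```

Since z does not depend on which action is chosen, multiplying every SIA expected value by z leaves the agent's preference ordering over actions unchanged, which is why the two combinations produce the same behaviour.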