I discuss semi-clones above. If you insist that any individual cares about clones, perhaps you’d be persuaded that they mightn’t care about semi-clones?
“You knew exactly what the two options were when you were weighing the choices”—ah, but it was only after your choice was finalised that you knew whether there was a single individual or clones, and that affects the reference class you’re optimising over.
I think you’re mixing up my claim about states of knowledge with a claim about caring, which I am not making. You can care only about yourself and not care about any copies of you, and still have a state of knowledge in which you really accept the possibility that your decision is controlling which person you are more likely to be. This can often lead to the same decisions as if you had precommitted based on caring about all future copies equally, but I’m not talking about that decision procedure.
“ah, but it was only after your choice was finalised that you knew whether there was a single individual or clones and that affects the reference class that you’re optimising.”
Yes, this is exactly the same as the cases I discuss in the linked post, which I still basically endorse. You might also think about “Bayesian Probabilities Are For Things That Are Space-like Separated From You”: there is a difference between how we have to treat knowledge of outside events and decisions about what action we should take. There is a very important sense in which asking when you “know” which action you will take is thinking about it in the wrong framework.
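The reference-class point can be made concrete with a toy model (the setup and numbers here are entirely hypothetical, invented for illustration, not taken from the discussion): suppose one option leaves a single individual with a payoff of 10, while the other creates three clones who each get 6. Which option looks better then depends on whether you optimise per-person payoff or total payoff across copies, and the relevant reference class is only fixed once the choice is made.

```python
def expected_payoffs(choice):
    """Return (total payoff, per-person payoff) for a hypothetical choice.

    'solo'  -> one individual with payoff 10
    'clone' -> three clones, each with payoff 6
    (All numbers are made up for illustration.)
    """
    if choice == "solo":
        people = [10]
    else:
        people = [6, 6, 6]
    total = sum(people)
    per_person = total / len(people)
    return total, per_person

for choice in ("solo", "clone"):
    total, per_person = expected_payoffs(choice)
    print(f"{choice}: total={total}, per-person={per_person}")
```

Under these made-up numbers, “solo” wins on per-person payoff (10 vs. 6) while “clone” wins on total payoff across copies (18 vs. 10), so the two reference classes genuinely disagree about which option to take.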