More or less correct. You don’t have to lie to yourself, but the benefit to you is entirely indirect. You’ll never causally help yourself, but you (that is, the person making the decision) are better off as a logical consequence of your decision procedure having a particular output.
Anyhow, applying this same reasoning, from the perspective of a completely selfish agent acting in the moment you should choose getting pelted with rotten eggs (assuming the evil genie isn’t messing up somewhere).
One distinction is that immediately after selecting rotten eggs you regret the decision, because you know you are the real version, whereas immediately after choosing “don’t create” you are indifferent. I’m still not quite confident about how these situations work.
Just like you don’t have to lie to yourself to understand that you’re making a choice that will benefit you logically but not causally, you also don’t have to regret it when you make a choice that is causally bad but indirectly good. You knew exactly what the two options were when you were weighing the choices—what is there to regret? I’d just regret ever rubbing the lamp in the first place.
Maybe another part of the difference between our intuitions is that I don’t think of the clones case as “one real you and a million impostors”; I think of it as “one million and one real yous.”
I discuss semi-clones (above). If you insist that any individual cares about clones, perhaps you’d be persuaded that they mightn’t care about semi-clones?
“You knew exactly what the two options were when you were weighing the choices”—ah, but it was only after your choice was finalised that you knew whether there was a single individual or clones, and that affects the reference class you’re optimising over.
I think you’re mixing up my claim about states of knowledge with a claim about caring, which I am not making. You can care only about yourself and not care about any copies of you, and still have a state of knowledge in which you really accept the possibility that your decision is controlling which person you are more likely to be. This can often lead to the same decisions as if you had precommitted based on caring about all future copies equally, but I’m not talking about that decision procedure.
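To make the “often leads to the same decisions” point concrete, here is a minimal sketch in Python. The payoff numbers, the copy count, and the “create”/“don’t create” labels are my own assumptions for illustration, not details of the genie scenario: the point is only that a selfish agent who treats “which copy am I?” as genuine uncertainty ranks options by average utility per copy, and under numbers like these that ranking happens to agree with what an agent who precommitted to weighting every copy equally would pick.

```python
# Illustrative sketch only: all payoffs below are invented for the example.
N_COPIES = 1_000_000  # assumed number of clones created by the "create" option

# Assumed per-person utilities under each option (purely hypothetical numbers).
payoffs = {
    "create": [-1.0] + [8.0] * N_COPIES,  # the original does badly, the copies do well
    "dont_create": [3.0],                 # only the original exists
}

def selfish_anthropic_choice(payoffs):
    """A selfish agent who doesn't know which copy it is treats itself as a
    uniform draw from whoever would exist given each option, so it ranks
    options by mean utility per copy."""
    return max(payoffs, key=lambda opt: sum(payoffs[opt]) / len(payoffs[opt]))

def precommitted_equal_care_choice(payoffs):
    """An agent that precommitted to caring about every future copy equally,
    modelled here as maximising total utility summed over copies."""
    return max(payoffs, key=lambda opt: sum(payoffs[opt]))

print(selfish_anthropic_choice(payoffs))        # "create" under these assumed numbers
print(precommitted_equal_care_choice(payoffs))  # also "create": the rankings agree here
```

Whether “caring about all copies equally” means averaging or totalling is itself a modelling choice; the two agents merely happen to agree here, which is all the hedged “often” claim needs.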
“ah, but it was only after your choice was finalised that you knew whether there was a single individual or clones, and that affects the reference class you’re optimising over.”
Yes, this is exactly the same as the cases I discuss in the linked post, which I still basically endorse. You might also think about Bayesian Probabilities Are For Things That Are Space-like Separated From You: there is a difference between how we have to treat knowledge of outside events and how we treat decisions about which action to take. There is a very important sense in which asking when you “know” which action you will take is thinking about it in the wrong framework.