Why don’t you think counterfactuals have anything to do with decisions?
Let me give you an example from your own Evil Genie puzzle: there are only two possible worlds, the one where you pick rotten eggs and the one where you have a perfect life. Additionally, in the one where you have the perfect life, there are a bunch of clones of you who are being tortured. The clones may hallucinate that they have the capability of deciding, but, by the stipulation in the problem, they are stuck with your heartless decision. So, depending on whether you care about the clones enough, you “decide” on one world or the other. No counterfactuals are needed.
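To make that concrete, here is a minimal Python sketch of the comparison; the number of clones, the payoff numbers, and the `care_about_clones` weight are all hypothetical, not part of the original puzzle:

```python
# Two possible worlds in the Evil Genie setup, compared directly.
# No counterfactuals involved: just rank the worlds by total utility.
# All numbers below are made up for illustration.

N_CLONES = 100            # hypothetical number of tortured clones
U_ROTTEN_EGGS = -10       # your utility in the rotten-eggs world
U_PERFECT_LIFE = 100      # your utility in the perfect-life world
U_TORTURED_CLONE = -1000  # each clone's utility in the perfect-life world

def world_utility(perfect_life: bool, care_about_clones: float) -> float:
    """Total utility of a possible world, weighting clone welfare by care_about_clones."""
    if perfect_life:
        return U_PERFECT_LIFE + care_about_clones * N_CLONES * U_TORTURED_CLONE
    return U_ROTTEN_EGGS

def decision(care_about_clones: float) -> str:
    """'Decide' by ranking the two possible worlds; nothing counter-to-fact is consulted."""
    if world_utility(True, care_about_clones) > world_utility(False, care_about_clones):
        return "perfect life"
    return "rotten eggs"

print(decision(care_about_clones=0.0))  # heartless weighting -> "perfect life"
print(decision(care_about_clones=1.0))  # full weight on clones -> "rotten eggs"
```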
Yes, I am so happy to see someone else mentioning Evil Genie! That said, it doesn’t quite work that way. They freely choose that option, it is just guaranteed to be the same choice as yours. “So, depending on whether you care about the clones enough”—well, you don’t know whether you are a clone or the original.
“They freely choose that option, it is just guaranteed to be the same choice as yours.”

That is where we part ways. They think they choose freely, but they are hallucinating that; there is no world where this freedom is expressed. The same applies to the original, by the way. Consider two setups: the original one, and one where you (the original) and your clones are each told whether you are a clone before ostensibly making the choice. By the definition of the problem, the genie knows your decision in advance, and, since the clones have been created, that decision must be to choose the perfect life. Hence, regardless of whether you are told that you are a clone, you will still “decide” to pick the perfect life.
The sooner you abandon the self-contradictory idea that you can make decisions freely in a world with perfect predictors, the sooner the confusion about counterfactuals will fade away.
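A toy rendering of that argument, assuming (as the problem stipulates) that the original and every clone run the same decision procedure; the function and its inputs are hypothetical:

```python
# The genie evaluates your decision procedure before creating any clones,
# so the clones' "choice" is fixed to the same output. By stipulation,
# being told you are a clone cannot change the output: the genie already
# conditioned clone-creation on what this very procedure returns.

def decide(cares_about_clones: bool, told_you_are_a_clone: bool) -> str:
    # told_you_are_a_clone is deliberately ignored: same program, same output.
    return "rotten eggs" if cares_about_clones else "perfect life"

genie_prediction = decide(cares_about_clones=False, told_you_are_a_clone=False)
clone_choice = decide(cares_about_clones=False, told_you_are_a_clone=True)
print(genie_prediction == clone_choice)  # True: the clone is stuck with your decision
```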
I wasn’t claiming the existence of libertarian free will. Just that the clone’s decision is no less free than yours.
My guess is that the thing you think is being hallucinated is not the thing your interlocutors refer to (in multiple recent conversations). You should make some sort of reference that has a chance of unpacking the intended meanings, giving the conversations more of a margin before jumping from the use of phrases like “freely choose” to conviction about what others mean by that, and about what others understand you to mean by that.
I agree with that, but the inferential distance seems too large. When I explain what I mean (there is no such thing as making a decision changing the actual world, except in the mind of an observer), people tend to put up a mental wall against it.
My point is that you seem to disagree in response to words said by others, which on further investigation turn out to have been referring to things you agree with. So the disagreeable reaction to the words themselves is too trigger-happy. Conversely, the words you choose to describe your own position (“there is no such thing as making a decision...”) are somewhat misleading, in the sense that a sloppy reading of them indicates something quite different from what you mean, or from what should be possible to see when reading carefully (the quote in this sentence is an example, where the ellipsis omits the crucial detail, resulting in something silly). So the inferential distance seems mostly a matter of inefficient communication, not of distance between the ideas themselves.
Thanks, it’s a good point! I appreciate the feedback.
For the record, I actually agree that: “there is no such thing as making a decision changing the actual world, except in the mind of an observer” and made a similar argument here: https://www.lesswrong.com/posts/YpdTSt4kRnuSkn63c/the-prediction-problem-a-variant-on-newcomb-s
Just reread it. Seems we are very much on the same page. What you call timeless counterfactuals I call possible worlds. What you call point counterfactuals are indeed just mental errors, models that do not correspond to any possible world. In fact, my post makes many of the same points.
Counterfactuals are about the state of mind of the observer (commonly known as the agent), and thus are no more special than any other expected utility calculation technique. When do you think counterfactuals are important?
“Counterfactuals are about the state of mind of the observer”—I agree. But my question was why you don’t think that they have anything to do with decisions?
“When do you think counterfactuals are important?”

When choosing the best counterfactual gives us the best outcome.
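In expected-utility terms, that is just an argmax over the worlds the agent imagines; a minimal sketch with made-up, Newcomb-flavored actions, probabilities, and payoffs:

```python
# Counterfactuals as entries in the agent's model: one imagined world per action.
# The agent picks the action whose imagined world has the highest expected utility.
# Actions, probabilities, and utilities below are hypothetical.

imagined_worlds = {
    "one-box": [(0.99, 1_000_000), (0.01, 0)],  # (probability, utility) pairs
    "two-box": [(0.99, 1_000), (0.01, 1_001_000)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best_action = max(imagined_worlds, key=lambda a: expected_utility(imagined_worlds[a]))
print(best_action)  # -> "one-box" under these made-up numbers
```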
Maybe we have different ideas about what counterfactuals are. What is your best reference for this term as people here use it?
An imaginary world representing an alternative of what “could have happened”
Ah, so about a different imaginable past? Not about a different possible future?
A different imaginable timeline. So past, present and future
Ah. I don’t quite understand the “different past” thing, at least not when the past is already known. One can say that imagining a different past can be useful for making better decisions in the future, but then you are imagining a different future in a similar (but not identical in terms of a microstate) setup, not a different past.
The past can’t be different, but the “past” in a model can be.
No, it cannot. What you are doing in a self-consistent model is something else. As jessicata and I discussed elsewhere on this site, what we observe is a macrostate, and there are many microstates corresponding to the same macrostate. The “different past” means a state of the world in a different microstate than the actual past, while in the same macrostate as the actual past. So there is no such thing as a counterfactual: the “would have been” just means a different microstate, and in that sense it is no different from the state observed in the present or in the future.
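A toy illustration of the many-to-one point, with a made-up coarse-graining: many microstates map to one observed macrostate, and a “different past” just names another microstate in the same preimage:

```python
from itertools import product

# Toy coarse-graining: a microstate is a tuple of 4 bits; the observable
# macrostate is their sum. The map is many-to-one, so an observation never
# pins down a unique microstate.

def macrostate(micro):
    return sum(micro)

actual_past = (1, 0, 1, 0)          # the microstate that actually obtained (hypothetical)
observed = macrostate(actual_past)  # all we ever observe: 2

# Every microstate consistent with the same observation:
compatible = [m for m in product((0, 1), repeat=4) if macrostate(m) == observed]
print(len(compatible))            # 6 microstates share this macrostate
print(actual_past in compatible)  # True; a "different past" is any of the other 5
```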