Looks like our views are not that far apart :) Your approach is way more prescriptive than mine, though. Indeed, counterfactuals of any kind (or factuals, for that matter; there is not much difference) are in the observer’s model of the world.
There is simply no need to attempt to figure out logical counterfactuals given perfect knowledge of a situation.
Right, you enumerate the possible worlds and note which one gives the best outcome. In your setup:
Situation 1 has two possible worlds, AA and BB, and observer 1, who thinks they chose A, ends up with higher utility.
Situation 2 has two possible worlds, AA and BA. If observer 1 lives in the world where they “chose” A, they get higher utility.
Generalised Situation has the possible worlds (AA or BB) or (AA or BA), so AA, BA, BB, and, again, if observer 1 lives in the world where they “chose” A, they end up with higher utility.
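A minimal sketch of that enumeration, with the caveat that the utility numbers below are purely illustrative assumptions of mine (the setup above never assigns concrete utilities), might look like this:

```python
# Illustrative sketch only: the utilities assigned to each world are assumptions,
# not part of the original setup. A world label like "BA" is read as
# (observer 1's apparent choice, observer 2's apparent choice).
utilities = {"AA": 10, "BA": 5, "BB": 5}  # assumed payoffs for observer 1

situations = {
    "Situation 1": ["AA", "BB"],
    "Situation 2": ["AA", "BA"],
    "Generalised Situation": ["AA", "BA", "BB"],
}

# Enumerate the possible worlds in each situation and note which one
# gives observer 1 the best outcome.
for name, worlds in situations.items():
    best = max(worlds, key=lambda w: utilities[w])
    print(f"{name}: worlds {worlds}, best for observer 1 is {best}")
```

In each case the rule is the same: among the worlds that are possible given the observer’s knowledge, the one where they “chose” A comes out best.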
It is a mistake to focus too much on the world itself, since, given precisely what happened, all (strict) counterfactuals are impossible. The only thing that is possible is what actually happened. This is why we need to focus on your state of knowledge instead.
I don’t know if this has been discussed enough, since people are prone to the mind projection fallacy, rather vainly thinking that their models correspond to factual or counterfactual worlds, whether Eliezer’s worlds, branches of the MWI wave function, or Tegmark’s universes. And, as you said, “We could easily end up in all kinds of tangles trying to figure out the logical counterfactuals.”
Counterfactuals are only mind projection if there is nothing in the world corresponding to them. There is a surreptitious ontological assumption there. It is hard to see how someone could come to correct conclusions about the nature of reality by thinking about decision theory. It is easy to see how particular decision theories embed implicit assumptions about ontology.
How is my approach more prescriptive than yours? Also, what do you mean by “Eliezer’s worlds”?
(PS: I asked about observer 2’s behaviour, not observer 1’s.)
When you say something like “we need to focus on your state of knowledge instead”, it is a prescription :)
Sorry if I renamed your observers, unless I misunderstood the whole setup, which is also possible.
Eliezer often writes, or used to write, something like “It may help to visualize a collection of worlds—Everett branches or Tegmark duplicates” when talking about counterfactuals.