Interesting! Here’s one way to look at this:
EDT+SSA-with-a-minimal-reference-class behaves like UDT in anthropic dilemmas where updatelessness doesn’t matter.
I think SSA with a minimal reference class is roughly equivalent to “notice that you exist; exclude all possible worlds where you don’t exist; renormalize”.
In large worlds where your observations have sufficient randomness that observers of all kinds exist in all worlds, the SSA update step cannot exclude any world. You’re updateless by default. (This is the case in the 99% example above.)
In small or sufficiently deterministic worlds, the SSA update step can exclude some possible worlds.
In “normal” situations, the fact that it excludes worlds where you don’t exist doesn’t have any implications for your decisions — because your actions will normally not have any effects in worlds where you don’t exist.
But in situations like transparent Newcomb problems, this means that you will no longer care about non-existent copies of yourself.
Basically, EDT behaves fine without updating. Excluding worlds where you don’t exist is one kind of updating that you can do that doesn’t change your behavior in normal situations. Whether you do this or not will determine whether you act updateless in transparent-Newcomb-like situations that happen in small or sufficiently deterministic worlds. (In large and sufficiently random worlds, you’ll act updateless regardless.)
Viewed like this, the SSA part of EDT+SSA looks unnecessary and strange, especially since I think you do want to act updateless in situations like transparent Newcomb.
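To make that concrete, here is a toy numerical version of the transparent Newcomb case (the 99% predictor accuracy and all payoffs are invented, and EDT is modeled as choosing a policy that is perfectly correlated with the prediction):

```python
# Toy transparent Newcomb (all numbers invented): a 99%-accurate predictor
# fills the big box iff it predicts that you one-box on seeing a full box.
# You have just seen a full box. Each policy induces a distribution over
# worlds: (probability, does a full-box observer exist here?, your payoff).
WORLDS = {
    "one-box": [(0.99, True, 1_000_000),   # predicted correctly: box full
                (0.01, False, 1_000)],     # mispredicted: box empty, small box only
    "two-box": [(0.99, False, 1_000),      # predicted correctly: box empty
                (0.01, True, 1_001_000)],  # mispredicted: box full, you take both
}

def eu_updateless(policy):
    """Plain EDT over all worlds: no exclusion, no update."""
    return sum(p * u for p, _, u in WORLDS[policy])

def eu_minimal_ssa(policy):
    """Exclude worlds where you (a full-box observer) don't exist; renormalize."""
    kept = [(p, u) for p, exists, u in WORLDS[policy] if exists]
    z = sum(p for p, _ in kept)
    return sum(p * u for p, u in kept) / z

for eu in (eu_updateless, eu_minimal_ssa):
    print(eu.__name__, {pi: round(eu(pi)) for pi in WORLDS},
          "->", max(WORLDS, key=eu))
# eu_updateless  {'one-box': 990010, 'two-box': 11000}    -> one-box
# eu_minimal_ssa {'one-box': 1000000, 'two-box': 1001000} -> two-box
```

The exclusion step is exactly what flips the verdict: the two-box worlds where no full-box observer exists stop counting against two-boxing.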
I feel like the part where you “exclude worlds where ‘you don’t exist’ ” should probably amount to “exclude worlds where your current decision doesn’t have any effects”—it’s not clear in what sense you “don’t exist” if you are perfectly correlated with something in the world. And of course renormalizing makes no difference, it’s just expressing the fact that both sides of the bet get scaled down. So if that’s your operationalization, then it’s also just a description of something that automatically happens inside of the utility calculation.
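In symbols (notation mine: E is the set of worlds where your current decision has effects), dropping the no-effect worlds and renormalizing provably can’t change the choice:

$$
\arg\max_a \sum_{w} P(w)\,U_a(w)
= \arg\max_a \Bigl(\sum_{w \in E} P(w)\,U_a(w) + \underbrace{\sum_{w \notin E} P(w)\,U(w)}_{\text{constant in } a}\Bigr)
= \arg\max_a \frac{\sum_{w \in E} P(w)\,U_a(w)}{\sum_{w \in E} P(w)}
$$

The excluded worlds contribute a term that is constant in $a$, and the renormalization just divides by the positive constant $\sum_{w \in E} P(w)$; neither changes which action comes out on top.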
(I do think it’s unclear whether selfish agents “should” be updateless in transparent Newcomb.)
Yes, with that operationalisation, the update has no impact on actions. (Which makes it even clearer that the parsimonious choice is to skip it.)
Yeah. It might be clearer to think about this as a 2-by-2 grid, with “Would you help a recent copy of yourself that has had one divergent experience from you?” on one axis and “Would you help a version of yourself that would naively be seen as non-existent?” (e.g. in transparent Newcomb problems) on the other.
It seems fairly clear that it’s reasonable to answer “yes” to both of these.
It’s possible that a selfish agent could sensibly answer “no” to both of them.
But perhaps we can exclude the other options.
Answering “yes” to the former and “no” to the latter would correspond to only caring about copies of yourself that ‘exist’ in the naive sense. (This is what the version of EDT+SSA that I wrote about in my top-level comment would do.) Perhaps this could be excluded as relying on philosophical confusion about ‘existence’.
Answering “no” to the former and “yes” to the latter might correspond to something like… only caring about versions of yourself that you have some particular kind of (counterfactual) continuity or connection with. (I’m making stuff up here.) Anyway, maybe this could be excluded as necessarily relying on some confusion about personal identity.
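Spelling the four cells out (the verdicts are just the ones suggested above):

Help divergent copy? | Help “non-existent” version? | Verdict
yes                  | yes                          | clearly reasonable
no                   | no                           | consistently selfish (possibly sensible)
yes                  | no                           | only naive ‘existence’ counts (confusion about existence?)
no                   | yes                          | only counterfactual continuity counts (confusion about personal identity?)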
Doesn’t “sufficient randomness in observations” just mean that you split the possible worlds further by the conditional probability of observations given the actual world-state? You can still eliminate the ones where the observers don’t observe what you observed.
For example “I observe that the calculator says NO” doesn’t let you eliminate worlds where the correct answer is YES, but it does let you eliminate all worlds where you observe that the calculator says YES. So “notice that you (an observer who sees NO) exist; exclude all possible worlds where you don’t exist (because observers in that world see YES); renormalize” still does some work.
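A tiny worked version of this (the 90% calculator accuracy and the 50/50 prior are invented):

```python
# Toy calculator example (numbers invented): worlds are (correct answer,
# calculator reading). Observing "the calculator says NO" eliminates every
# world where the observer sees YES, then renormalizes. Both answer-worlds
# survive, but the probabilities shift: the update still does real work.
ACCURACY = 0.9                      # assumed chance the calculator is right
prior = {"YES": 0.5, "NO": 0.5}     # assumed prior over the correct answer

# Joint probability of (correct answer, reading).
joint = {(ans, read): p * (ACCURACY if read == ans else 1 - ACCURACY)
         for ans, p in prior.items() for read in ("YES", "NO")}

# "Exclude all possible worlds where you (an observer who sees NO) don't exist":
kept = {k: v for k, v in joint.items() if k[1] == "NO"}
z = sum(kept.values())
posterior = {ans: p / z for (ans, _), p in kept.items()}

print({ans: round(p, 3) for ans, p in posterior.items()})
# {'YES': 0.1, 'NO': 0.9}
```

No answer-world was eliminated outright, yet the probabilities moved from 50/50 to 10/90, which is the sense in which the “exclude worlds where you don’t exist” step still does some work here.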