I feel like the part where you “exclude worlds where ‘you don’t exist’ ” should probably amount to “exclude worlds where your current decision doesn’t have any effects”; it’s not clear in what sense you “don’t exist” if you are perfectly correlated with something in the world. And of course renormalizing makes no difference; it just expresses the fact that both sides of the bet get scaled down by the same factor. So if that’s your operationalization, then it’s also just a description of something that automatically happens inside the utility calculation.
(I do think it’s unclear whether selfish agents “should” be updateless in transparent Newcomb.)
Yes, with that operationalisation, the update has no impact on actions. (Which makes it even more clear that the parsimonious choice is to skip it.)
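To make that concrete, here is a minimal sketch with made-up actions, worlds, probabilities, and payoffs (nothing here comes from the comments above): in any world where every action pays the same, dropping that world and renormalizing shifts each action’s expected utility by the same constant and divides it by the same positive factor, so the recommended action cannot change.

```python
# Toy sketch (illustrative numbers only): expected utility over a few hypothetical
# worlds, some of which the agent's current decision cannot affect.

ACTIONS = ["one_box", "two_box"]

# (probability, {action: payoff}) -- all values made up for illustration.
WORLDS = [
    (0.50, {"one_box": 1_000_000, "two_box": 1_001_000}),  # decision matters here
    (0.30, {"one_box": 0,         "two_box": 1_000}),      # decision matters here
    (0.20, {"one_box": 500,       "two_box": 500}),        # decision has no effect here
]

def expected_utility(worlds, action):
    # Renormalize by the total probability of the worlds under consideration.
    total_prob = sum(p for p, _ in worlds)
    return sum(p * payoffs[action] for p, payoffs in worlds) / total_prob

def best_action(worlds):
    return max(ACTIONS, key=lambda a: expected_utility(worlds, a))

# Keep only the worlds where the payoff actually depends on the action taken.
decision_matters = [
    (p, payoffs) for p, payoffs in WORLDS
    if len(set(payoffs.values())) > 1
]

# The recommendation is the same with or without the "update".
assert best_action(WORLDS) == best_action(decision_matters)
print(best_action(WORLDS))
```

The specific numbers are irrelevant; the only thing doing the work is that the excluded world assigns the same payoff to every action.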
(I do think it’s unclear whether selfish agents “should” be updateless in transparent Newcomb.)
Yeah. It might be clearer to think about this as a 2-by-2 grid, with “Would you help a recent copy of yourself that has had one divergent experience from you?” on one axis and “Would you help a version of yourself that would naively be seen as non-existent?” (e.g. in transparent Newcomb) on the other.
It seems fairly clear that it’s reasonable to answer “yes” to both of these.
It’s possible that a selfish agent could sensibly answer “no” to both of them.
But perhaps we can exclude the other two options.
Answering “yes” to the former and “no” to the latter would correspond to only caring about copies of yourself that ‘exist’ in the naive sense. (This is what the version of EDT+SSA that I wrote about in my top-level comment would do; see the toy calculation sketched below.) Perhaps this could be excluded as relying on philosophical confusion about ‘existence’.
Answering “no” to the former and “yes” to the latter might correspond to something like… only caring about versions of yourself that you have some particular kind of (counterfactual) continuity or connection with. (I’m making stuff up here.) Anyway, maybe this could be excluded as necessarily relying on some confusion about personal identity.
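For the transparent-Newcomb axis in particular, here is a toy calculation with made-up numbers (a 99%-accurate predictor, a $1,000,000 big box, a $1,000 small box; none of this is taken from the top-level comment or the post). It illustrates one way the “yes to the former, no to the latter” cell can come apart from answering “yes” to both: evaluating the policy ex ante, where the version of you facing an empty box still counts, favors one-boxing on seeing a full box, while evaluating the action after conditioning on the full box, with the empty-box world excluded, favors two-boxing.

```python
# Toy transparent-Newcomb numbers (mine, for illustration only): a predictor with
# accuracy ACC fills the big box with BIG iff it predicts you would one-box upon
# seeing it full; the small box always holds SMALL; on seeing an empty big box
# you take both boxes.

ACC = 0.99
BIG, SMALL = 1_000_000, 1_000

def ex_ante_value(policy_when_full):
    """Expected payoff of the policy, evaluated before observing anything."""
    if policy_when_full == "one_box":
        # Predicted correctly -> big box is full -> you take only the big box.
        return ACC * BIG + (1 - ACC) * SMALL
    else:
        # Predicted correctly -> big box is empty -> you get only the small box;
        # mispredicted -> big box is full -> you take both.
        return ACC * SMALL + (1 - ACC) * (BIG + SMALL)

def value_after_seeing_full_box(action):
    """Payoff after conditioning on 'the big box is in fact full'."""
    return BIG if action == "one_box" else BIG + SMALL

# Ex ante, committing to one-box-when-full wins (990,010 vs 11,000)...
assert ex_ante_value("one_box") > ex_ante_value("two_box")
# ...but after updating on the full box, two-boxing wins (1,001,000 vs 1,000,000).
assert value_after_seeing_full_box("two_box") > value_after_seeing_full_box("one_box")
```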