I think a variation of my approach to resolving the betting argument for SB can also help deal with the very large universe problem. I’ve taken a look at the following setup:
There are N Experimenters scattered throughout the universe, where N is very, very large. Each Experimenter tries to determine which of two hypotheses A and B about the universe is correct by running some experiment and collecting some data. Let d be the data collected, and let y be the remaining information (experiences, memories) that could distinguish this Experimenter from others.
It is possible to choose N so large that the prior probability approaches one that there will be some Experimenter with that particular d and y, regardless of whether A or B is true. This means that the Experimenter's posterior probability for A versus B will update only slightly from its prior probability.
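To make that first point concrete, here's a rough numerical check. This is a toy model of my own, not anything the argument depends on: I assume each Experimenter independently ends up with this exact d and y with some tiny probability, p_A under hypothesis A and p_B under B, and the particular numbers below are made up.

```python
# Rough numerical check of the "dilution" claim, under the simplifying
# assumption (mine) that each of N independent Experimenters ends up with
# this exact (d, y) with probability p_A under hypothesis A and p_B under B.

def prob_someone_matches(p, N):
    """P(at least one of N independent Experimenters has this (d, y))."""
    return 1.0 - (1.0 - p) ** N

p_A, p_B = 1e-6, 2e-6          # hypothetical per-Experimenter chances
prior_A = 0.5

for N in (1, 10**6, 10**8):
    like_A = prob_someone_matches(p_A, N)
    like_B = prob_someone_matches(p_B, N)
    post_A = prior_A * like_A / (prior_A * like_A + (1 - prior_A) * like_B)
    print(f"N={N:>10}:  P(A | someone sees d,y) = {post_A:.4f}")

# For N=1 the data favour B two to one, but as N grows both hypotheses make
# the existence of a matching Experimenter near-certain, so the posterior
# drifts back toward the prior of 0.5 -- hardly any update.
```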
And yet if the Experimenter has to make a choice based on whether A or B is true, and we weight the payoffs according to how many Experimenters there are with the same y and d (as done in my analysis for SB), then the maximum-expected-utility answer does not depend on N: from the standpoint of decision-making, we can ignore the possibility of all those other Experimenters and just assume N=1.
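And here is the decision step in the same toy model. The payoff table and stakes are hypothetical, and the weighting is only a sketch of the kind of analysis I have in mind; the point it illustrates is that N multiplies every action's expected payoff equally, so it cancels out of the comparison and the best choice is the same as for N=1.

```python
# Minimal decision sketch, again assuming (my assumption) independent
# Experimenters with per-Experimenter match probabilities p_A and p_B,
# and a made-up payoff table U[action][hypothesis].

p_A, p_B = 1e-6, 2e-6
prior = {"A": 0.5, "B": 0.5}
payoff = {"bet_A": {"A": 1.0, "B": -1.0},   # hypothetical stakes
          "bet_B": {"A": -1.0, "B": 1.0}}

def best_action(N):
    """Pick the action maximizing expected total payoff, where the payoff
    goes to every Experimenter sharing the same (y, d)."""
    def expected(action):
        return (prior["A"] * N * p_A * payoff[action]["A"] +
                prior["B"] * N * p_B * payoff[action]["B"])
    return max(payoff, key=expected)

# The common factor N multiplies both actions' expected payoffs, so the
# choice is the same for N = 1 and for an enormous N:
print(best_action(1), best_action(10**12))   # -> bet_B bet_B
```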
Interesting. I guess for this to work, one has to have what one might call a non-indexical morality—one that might favour people very, very much like you over others, but that doesn't favour YOU (whatever that means) over other nearly-identical people. (I'm going for "nearly-identical" over "identical", since I'm not sure what it means for there to be several people who are identical.) It seems odd that morality should have anything to do with probability, but maybe it does....