pdf23ds, under certain straightforward physical assumptions, 3^^^^3 people wouldn’t even fit in anyone’s future light-cone, in which case the probability is literally zero. So the assumption that our apparent physics is also the physics of the real world really could serve to decide this question. The only problem is that that assumption is itself not very reasonable.
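For a rough sense of scale, here is the up-arrow notation unpacked one level at a time, alongside the usual order-of-magnitude figures for the observable universe (standard ballpark numbers, nothing specific to the argument above):

```latex
% 3^^^^3 in Knuth up-arrow notation, unpacked one level at a time
\[
3\uparrow 3 = 3^{3} = 27,\qquad
3\uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} \approx 7.6\times 10^{12},
\]
\[
3\uparrow\uparrow\uparrow 3 = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{3\uparrow\uparrow 3\ \text{threes}},\qquad
3\uparrow\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow\uparrow\bigl(3\uparrow\uparrow\uparrow 3\bigr).
\]
```

Even 3^^^3 already dwarfs the commonly quoted ~10^80 baryons, or the ~10^122 bits suggested by the holographic bound, for the observable universe, and 3^^^^3 iterates that tower construction 3^^^3 more times; so under the stated physical assumptions there is simply no room in any future light-cone for that many distinct people.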
Lacking for the moment a rational way to delimit the range of possible worlds, one can use what I’ll call a Chalmers prior, which directly specifies how much time you will spend thinking about matrix scenarios. (I name it for David Chalmers because I once heard him give an estimate of the odds that we are in a matrix; I think it was about 10%.) Having a Chalmers prior can be justified as rational by observing one’s own cognitive resource-boundedness and the apparently endless amount of time one could spend thinking about matrix scenarios. (Is there a name for this sort of scheduling meta-heuristic, in which one limits the processing time available for potentially nonterminating lines of thought?)
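To make the heuristic concrete, here is a minimal sketch in Python; everything in it, the function names included, is purely illustrative, and it assumes the line of thought can be treated as an "anytime" computation that yields successively refined estimates:

```python
import time
from typing import Iterator, Optional

def deliberate(thought: Iterator[float], budget_seconds: float) -> Optional[float]:
    """Run an 'anytime' line of thought until its time budget expires.

    `thought` yields successively refined estimates; return the last one
    produced before the deadline (or None if it never yielded anything).
    """
    deadline = time.monotonic() + budget_seconds
    best: Optional[float] = None
    for estimate in thought:
        best = estimate
        if time.monotonic() >= deadline:
            break  # budget exhausted; stop thinking about this topic
    return best

def matrix_scenario_musings() -> Iterator[float]:
    """A stand-in for an endless chain of nested simulation hypotheses,
    each revision nudging the probability estimate a little."""
    p = 0.10  # e.g. the ~10% starting point attributed to Chalmers above
    while True:          # never terminates on its own
        p = 0.9 * p + 0.01
        yield p

if __name__ == "__main__":
    # A "Chalmers prior" of half a second of processing time for this topic.
    print(deliberate(matrix_scenario_musings(), budget_seconds=0.5))
```

The only essential feature is that the time budget is fixed in advance rather than renegotiated as the musings unfold.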
I’m not aware of one (and I’m not sure it really solves this particular problem), but there should be, because processing time is absolutely critical to bounded rationality.