First of all, I appreciate your trying to make clearer why you believe that “today is Monday” is not a legitimate classical proposition, in particular by linking to the article in the IEP. I have skimmed the article, although I may have missed something important.
Anyway, it seems to me that the issues that concerned the philosophers discussed in that article are mostly not really about whether classical logic is valid for indexicals. Classical logic as I understand it is just a bunch of assertions about propositions, like “for every proposition P, either P is true or P is false” and “for every proposition P, the proposition ‘P implies P’ is true”. The only place where I have noticed a questioning of such assertions is in the discussion of the sentence (7). But to me this seems like it is really an analysis of a statement of the form “P implies Q”, and the question at issue is whether Q is really the same as P or not. In cases where P and Q are in fact the same, there is no doubt that “P implies Q” is a true statement.
In particular, in the Sleeping Beauty case, we can assume that she can complete any particular chain of reasoning in a reasonably short timeframe, in particular without crossing midnight. If this is the case, then it seems that all instances of “today is Monday” in her reasoning will in fact refer to the same proposition. Thus, I don’t see why there is a problem.
There would certainly be a problem if Sleeping Beauty used an argument which by necessity stretched out over several days, such that she was assuming “today is Monday” meant the same thing on both days. But the problems with such an argument seem so obvious that I cannot imagine a detailed explanation would be necessary to get anyone to see them.
In any case, here is one thing that seems odd about your/Neal’s theory. Suppose you have a universe in which there are necessarily Boltzmann brains giving every possible experience. However, we can assume that these brains represent an extremely small proportion of all brains. (Some people think that this is a description of our universe.) Then it seems that you can never update your probabilities based on evidence, because every piece of evidence you see is of the form “there is a brain that has such-and-such sequence of experiences”, which you already knew was a necessary truth. It looks like you can still get the decision theory to work out by taking into account the fact that you have more control over universes in which there are more brains giving the sequence of experiences, but it still seems that you are throwing out the entire concept of probability in this case.
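To make the worry concrete, here is a minimal numeric sketch in Python (the likelihood numbers and the helper posterior_odds are my own illustrative assumptions, not anything from the discussion): if the evidence is merely “some brain somewhere has experience sequence E”, and Boltzmann brains make that event nearly certain under both hypotheses, Bayes’ rule leaves the prior odds essentially untouched, however diagnostic E would ordinarily be.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def posterior_odds(prior_odds, p_evidence_given_H1, p_evidence_given_H2):
    return prior_odds * (p_evidence_given_H1 / p_evidence_given_H2)

prior = 1.0  # H1 and H2 taken to be equally likely a priori

# Ordinary case: the evidence is much more likely under H1 than under H2.
print(posterior_odds(prior, 0.9, 0.1))            # 9.0 -- a large update

# Boltzmann-brain case: "some brain has this experience sequence" is
# near-certain under either hypothesis, so the odds barely move.
print(posterior_odds(prior, 0.999999, 0.999998))  # ~1.000001 -- essentially no update
```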
In my paper, I discuss how Full Non-Indexical Conditioning seems to break down if the universe is so large that someone with memories identical to yours has a non-negligible chance of existing elsewhere in the universe. Note that this requires a VERY large universe—the size of universe we can actually observe through telescopes isn’t enough.
I go on to argue that although I don’t know how to resolve this issue, I think it’s likely that it has no relevance when addressing non-cosmological problems such as Sleeping Beauty or the Doomsday Argument. Sleeping Beauty in particular is only mildly fantastical (memory erasure) and is otherwise a mundane issue of local behaviour in our part of the universe. I don’t see why its solution should depend on whether the universe is large, very large, very VERY large, or infinite. I expect that even if Full Non-Indexical Conditioning needs to be modified somehow to cope with really large universes, the modification will not change the result for Sleeping Beauty. It’s sort of like how physicists in 1850 probably realized there were a few puzzles regarding light and Newtonian physics, but nevertheless thought, correctly, that the resolution of those puzzles wouldn’t change the answers to questions about when bridges will collapse.
I think a variation of my approach to resolving the betting argument for SB can also help deal with the very large universe problem. I’ve taken a look at the following setup:
There are N Experimenters scattered throughout the universe, where N is very, very large. Each Experimenter tries to determine which of two hypotheses A and B about the universe is correct by running some experiment and collecting some data. Let d be the data collected, and let y be the remaining information (experiences, memories) that could distinguish this Experimenter from others.
It is possible to choose N so large that the prior probability approaches one that there will be some Experimenter with that particular d and y, regardless of whether A or B is true. This means that the Experimenter’s posterior probability for A versus B will update only slightly from its prior probability.
And yet if the Experimenter has to make a choice based on whether A or B is true, and we weight the payoffs according to how many Experimenters there are with the same y and d (as done in my analysis for SB), then the maximum-expected-utility answer does not depend on N: from the standpoint of decision-making, we can ignore the possibility of all those other Experimenters and just assume N=1.
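Here is a rough Python sketch of both halves of that claim, with made-up numbers (the per-Experimenter probabilities p_A and p_B, the priors, the payoffs, and the particular values of N are illustrative assumptions, not part of the setup above): conditioning on “some Experimenter has this exact d and y” gives a posterior that drifts back toward the prior as N grows, while the decision obtained by weighting payoffs by the expected number of Experimenters sharing d and y is the same for every N, because N cancels out of the comparison.

```python
prior_A, prior_B = 0.5, 0.5
p_A = 1e-6   # chance a given Experimenter sees this exact (d, y) if A is true
p_B = 1e-7   # chance of the same (d, y) if B is true, so the data would ordinarily favour A

def posterior_A(N):
    # Condition on "at least one of the N Experimenters has this exact (d, y)".
    like_A = 1 - (1 - p_A) ** N
    like_B = 1 - (1 - p_B) ** N
    return prior_A * like_A / (prior_A * like_A + prior_B * like_B)

for N in (1, 10**6, 10**9):
    print(N, posterior_A(N))   # about 0.91 for N = 1, sliding back toward 0.5 as N grows

def weighted_expected_utility(payoff_if_A, payoff_if_B, N):
    # Weight each hypothesis's payoff by the expected number of Experimenters
    # sharing this (d, y): N * p_A under A, N * p_B under B.
    return prior_A * (N * p_A) * payoff_if_A + prior_B * (N * p_B) * payoff_if_B

for N in (1, 10**9):
    act_as_if_A = weighted_expected_utility(1.0, 0.0, N)  # unit payoff per Experimenter if A is true
    act_as_if_B = weighted_expected_utility(0.0, 1.0, N)  # unit payoff per Experimenter if B is true
    print(N, act_as_if_A > act_as_if_B)  # True for both: the factor N cancels, so the choice matches N = 1
```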
Interesting. I guess for this to work, one has to have what one might call a non-indexical morality—one that might favour people very, very much like you over others, but that doesn’t favour YOU (whatever that means) over other nearly-identical people. (I’m going for “nearly-identical” over “identical”, since I’m not sure what it means for there to be several people who are identical.) It seems odd that morality should have anything to do with probability, but maybe it does....
Fair enough. I just thought it was a kind of weird thing for a theory to be sensitive to. I guess the theory is self-consistent although it’s not clear to me how well it matches with the intuitive concept of “probability”.