Instead, I take the position that you can never conclude anything from your own existence except that you exist. That is, I eliminate all hypotheses that don’t predict my existence, and leave it at that, in accordance with SHA.
This is an idea that I had considered and rejected before settling on UDT.
And the presumptuous philosopher is an idiot because both theories are consistent with us existing, so again we get no relative update.
This is wrong. Recall that both T1 and T2 are theories with finite universes and finite numbers of observers. Also, T1 and T2 are not complete hypotheses that can generate predictions, but actually classes of hypotheses, because in order to generate predictions you need initial conditions in addition to a theory. Now if you take a random hypothesis in the T1 class (i.e., the theory T1 along with some random initial conditions), it’s much less likely to predict a universe that contains someone with your exact history of observations than a random hypothesis in the T2 class is, since each T2 universe contains many more observers than a T1 universe. In other words, “updating” on your observations by ruling out hypotheses that don’t predict the existence of someone with your observations would cause you to rule out a much greater fraction of T1 hypotheses than T2 hypotheses, thereby causing you to update heavily in the direction of the T2 theory being correct.
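To make that counting argument concrete, here is a toy sketch in Python. Every number in it is invented for illustration (10^6 observers per T1 universe, 10^12 per T2 universe, and 10^12 equally likely observation histories); the only point is that a random T1 hypothesis is far less likely than a random T2 hypothesis to contain someone with your exact history, so conditioning on “someone with my observations exists” pushes nearly all the weight onto T2:

```python
# Toy numbers, purely for illustration: ruling out hypotheses that don't
# predict "someone with my exact observations" favors the bigger universe.

def p_contains_my_history(num_observers, num_histories):
    """Chance that a universe with this many observers contains at least one
    observer whose history exactly matches mine, assuming each observer's
    history is drawn independently and uniformly at random."""
    return 1.0 - (1.0 - 1.0 / num_histories) ** num_observers

N_T1 = 10**6          # observers per T1 universe (assumed)
N_T2 = 10**12         # observers per T2 universe (assumed)
NUM_HISTORIES = 1e12  # equally likely observation histories (assumed)

like_t1 = p_contains_my_history(N_T1, NUM_HISTORIES)
like_t2 = p_contains_my_history(N_T2, NUM_HISTORIES)

# Equal prior weight on the two theory classes, then condition on the
# existence of someone with my history.
posterior_t2 = like_t2 / (like_t1 + like_t2)

print(f"P(my history exists | random T1 hypothesis) ~ {like_t1:.2e}")  # ~1e-06
print(f"P(my history exists | random T2 hypothesis) ~ {like_t2:.2e}")  # ~0.63
print(f"posterior on T2 after the update ~ {posterior_t2:.6f}")        # ~0.999998
```

On these made-up numbers the update takes you from 50/50 to better than 0.99999 in favor of T2, which is exactly the Presumptuous Philosopher’s conclusion.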
Whoops, you are right. I’ll think about that.
How does UDT handle this, by the way?
I wrote a post on how UDT deals with the Presumptuous Philosopher, but it’s been a while since I wrote that or last read it, so I can try explaining it again and hopefully offer something new.
UDT deals with decision problems, so let’s assume that in T1 and T2 universes, everyone is born a UDT-using adult and is immediately offered a bet on whether they are in T1 or T2, and then they’re offered the same bet again a while later, after they’ve made some observations. We ask what initial odds they should demand, and whether they should change the odds after making those observations.
First, it should be clear that it makes no sense to change the odds unless there is some way to condition the new odds on different observations (i.e., unless some observations are relatively more likely in T1 than in T2). If you can’t condition the new odds but change your odds anyway, then all other UDT agents do the same, and you might as well have chosen those odds to begin with, before making any observations.
What about the initial odds? That depends on your values. A bet on whether you’re in T1 or T2 can be viewed as a transfer of wealth between T1 worlds and T2 worlds. Suppose everyone is offered a bet where you win $1 if you’re in T2 and lose $1 if you’re in T1. UDT would reason like this: if I accept the bet, then everyone in both T1 and T2 worlds accepts, so everyone in T1 worlds loses $1 and everyone in T2 worlds gains $1. Is this trade worth it? Suppose the total “measure” (or “reality-fluid”) I assign to T1 worlds and T2 worlds is equal and I’m an average utilitarian; then I’d be indifferent, because I lose as much average utility in T1 worlds as I gain in T2 worlds. But if I’m a total utilitarian, then I’d accept the bet, because there are many more people in a T2 world than in a T1 world and hence a lot more winners than losers.
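To spell that out with numbers (all of them made up: equal measure on T1 and T2 worlds, 10^3 people per T1 world, 10^9 per T2 world, and utility linear in dollars), here is a minimal sketch of the two valuations:

```python
# Valuing the bet "win $1 if T2, lose $1 if T1" the UDT way: if I accept,
# every agent in both kinds of world accepts. All numbers are assumptions.

MEASURE_T1 = 0.5  # "measure" / "reality-fluid" assigned to T1 worlds
MEASURE_T2 = 0.5  # "measure" / "reality-fluid" assigned to T2 worlds
POP_T1 = 10**3    # people per T1 world (assumed)
POP_T2 = 10**9    # people per T2 world (assumed)

# Average utilitarian: per-person payoff in each world, weighted by measure.
average_value = MEASURE_T1 * (-1) + MEASURE_T2 * (+1)

# Total utilitarian: payoff summed over everyone in each world, weighted by measure.
total_value = MEASURE_T1 * POP_T1 * (-1) + MEASURE_T2 * POP_T2 * (+1)

print(average_value)  # 0.0   -> indifferent: the bet is a wash
print(total_value)    # ~5e8  -> accept: far more winners than losers
```

On these assumptions the total utilitarian would keep accepting until the stakes were roughly in the ratio of the two populations, which is the SIA-like answer, while the average utilitarian’s break-even point doesn’t depend on the populations at all.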
So UDT can give you either SIA-like answers or non-SIA-like answers, depending on your values. People seem to have both average-utilitarian-like intuitions and total-utilitarian-like intuitions, depending on which thought experiments you present to them (and who you ask), so according to UDT it’s not surprising that they find SIA intuitive sometimes and not other times.
Your next question might be: what if I’m not a utilitarian of any sort, but have selfish values? Well, it’s actually not clear what “selfish values” means when talking about UDT agents, or which decision theory can handle selfish values better. I wrote a post about that as well.
This is a good framing.
I feel cheated. I guess it could be arbitrary like this, but I’ll have to think about it. Grumble grumble. I was hoping for a grand resolution.
I would argue that selfish values should look like a state of information like “I am a person, I like cookies, here is a bet about cookies.”
Could you elaborate on the implications of that statement? I’m not following what you’re trying to say.
Rather than “I am a person,” let’s substitute “I am painted green.”
Suppose we start out with ten people, none of them painted green.
A coin is flipped. If heads, one person is painted green. If tails, nine people are painted green.
If you observe that you have been painted green, what is your probability that the coin landed heads? Bayes rule time!
P(heads | green) = P(heads) * P(green | heads) / P(green) = 0.5 * 0.1 / 0.5 = 0.1. Observing that you have been painted green, you conclude that the coin most likely landed tails (P(tails | green) = 0.9). Simple Bayesian updating.
In this simple problem, upon learning that you have been painted green, you give equal weight to each green person, weighted by the prior probability of the coin.
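For what it’s worth, the same answer falls out of a brute-force sketch that just enumerates the two equally likely coin outcomes and the ten people in each (a check of the arithmetic above, nothing more):

```python
from fractions import Fraction

# Heads: 1 of 10 people painted green. Tails: 9 of 10 painted green.
# "You" are a uniformly random one of the ten people.
greens = {"heads": 1, "tails": 9}
prior = Fraction(1, 2)
total_people = 10

# Joint probability of each coin outcome together with "you are green".
joint = {coin: prior * Fraction(n, total_people) for coin, n in greens.items()}
p_green = sum(joint.values())  # = 1/2

posterior = {coin: p / p_green for coin, p in joint.items()}
print(posterior)  # {'heads': Fraction(1, 10), 'tails': Fraction(9, 10)}
```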