By pretty much every objective measure, the people who accept the doomsday argument in my thought experiment do better than those who don’t. So I don’t think it takes any additional assumptions to conclude that even selfish people should say yes.
From what I can tell, a lot of your arguments seem to apply even outside anthropics. Consider the following experiment. An experimenter rolls a fair 100-sided die. Then they ask someone to guess whether the die landed on a number >5, giving them some reward if they guess correctly. Then they reroll and ask a different person, and repeat this 100 times. Now suppose I were one of these 100 people. In this situation, I could use reasoning that seems very similar to yours to reject any kind of action based on probability:
I either get the reward or not, depending on whether the die landed on a number >5. Giving an answer based on expected value might maximize the total benefit of the 100 people in aggregate, but it doesn’t help me, because I can’t know whether the die is showing >5. It is correct to say that if everyone makes decisions based on expected utility, they will have more reward combined. But I will only have more reward if the die is >5, and that was already determined at the time of my decision, so there is no fact of the matter about what the best decision is.
And granted, it’s true, you can’t be sure what the die is showing in my experiment, or which copy you are in anthropic problems. But the whole point of probability is reasoning when you’re not sure, so that’s not a good reason to reject probabilistic reasoning in either of those situations.
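To make the expected-value point concrete, here is a minimal simulation sketch of the experiment as I described it (the reward of 1 per correct guess and the function names are just illustrative assumptions): each of the 100 people faces one independent roll, and the “guess >5” rule does better per person on average than ignoring the probabilities.

```python
import random

def run_experiment(strategy, n_people=100, trials=10_000):
    """Average per-person reward for a guessing strategy.

    strategy() returns True to guess "the roll is >5", False otherwise.
    Each person faces one independent roll of a fair 100-sided die and
    gets a reward of 1 for a correct guess (payoff size assumed here).
    """
    total = 0
    for _ in range(trials):
        for _ in range(n_people):
            roll = random.randint(1, 100)
            correct = strategy() == (roll > 5)
            total += 1 if correct else 0
    return total / (trials * n_people)

print(run_experiment(lambda: True))                   # always guess ">5"   -> about 0.95
print(run_experiment(lambda: random.random() < 0.5))  # coin-flip guessing  -> about 0.50
```

This only shows that the per-person average reward is higher under the expected-value rule; it doesn’t by itself settle what any single person “ought to anticipate”.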
For the non-anthropic problem, why take the detour of asking a different person each toss? You can personally take it 100 times, and since it’s a fair die, it will land >5 around 95 of those times. Obviously guessing yes is the best strategy for maximizing your personal interest. There is no assuming the “I” as a random sample, or making forced transcodings.
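To be concrete, “around 95 times” here is just the binomial expectation for independent rolls of a fair die (the spread figure is my own approximation):

$$\mathbb{E}\big[\#\{\text{rolls}>5\}\big] = 100 \times \tfrac{95}{100} = 95, \qquad \sigma = \sqrt{100 \times 0.95 \times 0.05} \approx 2.2.$$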
Let me construct a repeatable anthropic problem. Suppose tonight during your sleep you will be accurately cloned, with memory preserved. Waking up the next morning, you may find yourself to be the original or one of the newly created clones. Let’s label the original No.1 and the 99 new clones No.2 to No.100 in the chronological order of their creation. It doesn’t matter if you are old or new; you can repeat this experiment. Say you take the experiment repeatedly: wake up, fall asleep, and let the cloning happen each time. Every day you wake up, you will find your own number. If you do this 100 times, would you say you ought to find your number >5 about 95 times?
My argument says there is no way to say that. Doing so would require assumptions to the effect that your soul has an equal chance of embodying each physical copy, i.e. that “I” am a random sample from the group.
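To make explicit what that assumption would license, here is a minimal sketch (the uniform draw on the marked line is exactly the “I am a random sample among the copies” step I am rejecting; grant it and the 95% frequency follows, reject it and there is nothing to compute):

```python
import random

# Sketch of the repeated cloning experiment *under the disputed assumption*.
# The uniform draw below encodes "I am equally likely to be any of the 100
# copies" -- precisely the random-sample premise argued against above.
days = 100
count_gt5 = 0
for _ in range(days):
    my_number = random.randint(1, 100)   # <-- the contested assumption
    if my_number > 5:
        count_gt5 += 1
print(count_gt5)  # about 95 of the 100 days, *if* the assumption is granted
```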
For the non-anthropic problem, you can use the 100-people version as a justification, because among those people the die tosser’s choice of you to answer a question is an actual sampling process. It is reasonable to think that in this process you are treated the same way as everyone else, e.g. the experimenter didn’t specifically sample you only for a certain number. But there is no sampling process determining which person you are in the anthropic version, let alone any reason to assume such a process treats you indifferently among all souls, or treats each physical body indifferently, in your embodiment process.
Also, the fact that people who believe the Doomsday Argument objectively perform better as a group in your thought experiment is not a particularly strong case. Thirders have also constructed many thought experiments where supporters of the Doomsday Argument (halfers) would objectively perform worse as a group. But that is not my argument. I’m saying the collective performance of a group one belongs to is not a direct substitute for self-interest.
If you do this 100 times, would you say you ought to find your number >5 about 95 times?
I actually agree with you that there is no single answer to the question of “what you ought to anticipate”! Where I disagree is that I don’t think this means that there is no best way to make a decision. In your thought experiment, if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time.
My justification for this is that objectively, those who make decisions this way will tend to have more reward and outcompete those who don’t. This seems to me to be as close as we can get to defining the notion of “doing better when faced with uncertainty”, regardless of whether it involves the “I” or not, and regardless of whether you are selfish or not.
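As a sketch of the “objectively do better as a group” claim (the reward of 1 per correct guess is assumed for illustration): the group totals below don’t depend on any assumption about which copy is “you”, only on the fact that 95 of the 100 numbers are >5.

```python
# Group-level tally for one round of the cloning experiment: 100 copies,
# numbered 1..100, each guessing whether its own number is >5.
copies = range(1, 101)

always_yes_total = sum(1 for n in copies if n > 5)        # every copy guesses ">5"
always_no_total  = sum(1 for n in copies if not (n > 5))  # every copy guesses "not >5"

print(always_yes_total)  # 95 correct guesses in the group
print(always_no_total)   # 5 correct guesses in the group
```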
Edit to add more (and clarify one previous sentence):
Even in the case where you repeat the die-roll experiment 100 times, there is a chance that you’ll lose every time, it’s just a smaller chance. So even in that case it’s only true that the strategy maximizes your personal interest “in aggregate”.
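Quantitatively (my arithmetic, assuming independent rolls and the “always guess >5” rule), the chance of losing every one of the 100 repeats is

$$P(\text{lose all }100) = 0.05^{100} \approx 8 \times 10^{-131},$$

small but not zero, which is the sense in which the strategy only wins “in aggregate”.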
I am also neither a “halfer” nor a “thirder”. Whether you should act like a halfer or a thirder depends on how reward is allocated, as explained in the post I originally linked to.
if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time.
I am a little unsure about your meaning here. Say you get a reward for guessing if your number is <5 correctly, then would you also guess your number is <5 each time?
I’m guessing that is not what you mean, but instead that you are thinking that as the experiment is repeated more and more, the relative frequency of you finding your own number >5 would approach 95%. What I am saying is that this belief requires an assumption about treating the “I” as a random sample, whereas for the non-anthropic problem it doesn’t.
For me this is where the symmetry with the doomsday argument breaks, because here the result of the die roll is actually randomly drawn from a uniform distribution over 1 to 100.
With the doomsday argument that’s not the case. I’m not selected from among all the humans throughout time to be instantiated in the 21st century. That’s not how the causal process that produced me works. Actually, that’s not how causality itself works. Future humans causally depend on past humans; my birth rank is not an independent random variable at all.
I agree that they are not symmetrical. My point with that thought experiment was to counter one of their arguments, which as I understand it can be paraphrased to:
In your thought experiment, the people who bet that they are in the last 95% of humans only win in aggregate, so there is still no selfish reason to think that taking that bet is the best decision for an individual.
My thought experiment with the dice was meant to show that this reasoning also applies to regular expected utility maximization, so if they use that argument to dismiss all anthropic reasoning, then they have to reject basically all probabilistic decision making. Presumably they will not reject all probabilistic reasoning, and therefore they have to reject this argument. (Assuming that I’ve correctly understood their argument and the logic I’ve just laid out holds.)
How does the logic here work if you change the question to be about human history?
Guessing a 50/50 coin flip better than chance is obviously impossible, but if Omega asks whether you are in the last 50% of “human history”, the doomsday argument (not that I subscribe to it) is more compelling. The key point of the doomsday argument is that humanity’s growth is exponential, therefore if we’re the median birth-rank humans and we continue to grow, we don’t actually have that long (in wall-time) to live.
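A rough worked version of that point, under a stylized doubling assumption of my own (cumulative births double every $T$ years): if we are the median birth-rank humans, the births still to come equal all births so far, and under continued doubling those remaining births arrive within roughly one more doubling time,

$$N(t) = N_0 \, 2^{t/T} \;\Rightarrow\; \text{remaining births} = N(\text{now}) \;\Rightarrow\; \text{remaining wall-time} \approx T.$$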