You do this 100 times, would you say you ought to find your number >5 about 95 times?
I actually agree with you that there is no single answer to the question of “what you ought to anticipate”! Where I disagree is that I don’t think this means there is no best way to make a decision. In your thought experiment, if you get a reward for correctly guessing whether your number is >5, then you should guess that your number is >5 every time.
My justification for this is that, objectively, those who make decisions this way will tend to accumulate more reward and outcompete those who don’t. This seems to me to be as close as we can get to defining the notion of “doing better when faced with uncertainty”, regardless of whether it involves the “I”, and regardless of whether you are selfish.
Edit to add more (and clarify one previous sentence):
Even in the case where you repeat the die-roll experiment 100 times, there is a chance that you’ll lose every time; it’s just a smaller chance. So even in that case it’s only true that the strategy maximizes your personal interest “in aggregate”.
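To make the “in aggregate” point concrete, here is a quick Monte Carlo sketch. It assumes the number is drawn uniformly from 1 to 100 (which matches the “about 95 times out of 100” figure above — that setup isn’t stated explicitly, so treat it as an illustrative assumption) and plays the always-guess-“>5” strategy over many 100-round runs:

```python
import random

random.seed(0)

def run_strategy(trials=20_000, rounds=100):
    """Play `trials` independent runs of `rounds` guesses each,
    always guessing '>5' on a number drawn uniformly from 1..100.
    Returns (overall win frequency, number of runs lost entirely)."""
    all_loss = 0
    total_wins = 0
    for _ in range(trials):
        wins = sum(1 for _ in range(rounds) if random.randint(1, 100) > 5)
        total_wins += wins
        if wins == 0:
            all_loss += 1
    return total_wins / (trials * rounds), all_loss

freq, all_losses = run_strategy()
print(freq)        # close to 0.95
print(all_losses)  # losing all 100 rounds has probability 0.05**100,
                   # so this is essentially never observed
```

The win frequency converges on 0.95, while the losing-every-round outcome remains possible in principle but astronomically unlikely — which is exactly the sense in which the strategy is only guaranteed to pay off “in aggregate”.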
I am also neither a “halfer” nor a “thirder”. Whether you should act like a halfer or a thirder depends on how reward is allocated, as explained in the post I originally linked to.
if you get a reward for correctly guessing whether your number is >5, then you should guess that your number is >5 every time.
I am a little unsure about your meaning here. Say you get a reward for correctly guessing whether your number is <5; would you then also guess that your number is <5 each time?
I’m guessing that is not what you mean. Instead, you are thinking that as the experiment is repeated more and more, the relative frequency of you finding your own number >5 would approach 95%. What I am saying is that this belief requires an assumption: treating the “I” as a random sample. For the non-anthropic problem, it doesn’t.