So, anthropic reasoning involves using facts about how the observer came into being to “explain” certain supposed coincidences, and thereby to avoid giving undue weight to alternative hypotheses that would otherwise be needed to explain the coincidence.
In this case, the coincidence is between our asserting that rationality is good for us, and our being the first generation out of a long line of humans for whom this is the case. (And, indeed, the same argument applies spatially as well as temporally: rationality is probably a bad move for many very disadvantaged people in the world today.)
The alternative hypothesis under consideration is “rationality is not good for you, you are just rationalizing”.
So, I assume that I am sampled from the set of people who ask the question “is it optimal for me to be rational, rather than to delude myself?”. What is the probability of my answering “yes”? Well, JulianMorrison argues (correctly, IMO) that there is a systematic correlation between being able to ask the question and answering “yes”, so the probability is not worryingly small. Nothing unusual has happened here.
So we should not be suspicious that we are rationalizing just because we answered “yes”.
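To put the point in Bayesian terms (a minimal sketch; the labels $H_1$, $H_2$, $E$ are mine, not anything from the thread): let $H_1$ = “rationality really is good for me”, $H_2$ = “I am just rationalizing”, and $E$ = “I asked the question and answered yes”. Then

$$
\frac{P(H_2 \mid E)}{P(H_1 \mid E)} = \frac{P(E \mid H_2)}{P(E \mid H_1)} \cdot \frac{P(H_2)}{P(H_1)}.
$$

The correlation JulianMorrison points to means $P(E \mid H_1)$ is already close to 1 for the kind of person who can pose the question, so the likelihood ratio is near 1 and observing $E$ barely shifts us toward $H_2$. That is all “nothing unusual has happened” amounts to.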
Secondly, what is the probability of my finding myself in the first (or second) generation of humans for which the answer to this question is “yes”? In the case where there are zillions of similar humans in the future, this probability could be very small. But there is no interesting alternative hypothesis to explain this coincidence, so we can’t conclude anything particularly interesting.
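Concretely (a sketch under a uniform self-sampling assumption, which this kind of reasoning presupposes): if I am uniformly sampled from the $N$ generations that ever ask the question, then

$$
P(\text{first or second generation} \mid N) = \frac{2}{N},
$$

which is tiny for zillion-scale $N$. But a small likelihood only counts against a hypothesis relative to some rival that assigns the observation a higher likelihood; with no such rival on the table, the tiny number licenses no conclusion.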
Yeah, you’re basically making the doomsday argument. Note that you could use the same reasoning about any question that you expect to come up from time to time, for instance “do I like cheese?”
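For concreteness, here is a toy version of that doomsday-style update (the numbers and function name are mine, purely illustrative): under uniform self-sampling, observing a low birth rank favors hypotheses with fewer total observers, regardless of what the question was about, rationality or cheese.

```python
# Toy doomsday-style update: two hypotheses about how many
# generations will ever ask the question, a 50/50 prior, and a
# uniform self-sampling likelihood P(rank | N) = 1/N for rank <= N.

def posterior_short(prior_short, n_short, n_long, rank):
    """Posterior probability of the "short future" hypothesis
    after observing my own birth rank."""
    prior_long = 1.0 - prior_short
    like_short = 1.0 / n_short if rank <= n_short else 0.0
    like_long = 1.0 / n_long if rank <= n_long else 0.0
    numer = prior_short * like_short
    return numer / (numer + prior_long * like_long)

# "Short" world: 100 generations total; "long" world: 10,000.
# Finding myself in generation 2 pushes the posterior hard
# toward "short":
print(posterior_short(0.5, 100, 10_000, rank=2))  # ~0.990
```

Nothing in the calculation mentions the content of the question, which is exactly why it proves too much.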
Correct. I’ve edited my comment since you commented. Read the corrected version and critique…
Please reread my post. I think I was editing while you were reading my post.