I agree that skepticism is appropriate, but I don’t think just ignoring anthropic reasoning completely is an answer. If we want to make decisions on an issue where anthropics is relevant, then we need some way of arriving at probabilistic estimates for these questions. Whatever framework you use to do that, you will be taking some stance on anthropic reasoning. Once you’re dealing with an anthropic question, there is no such thing as a non-anthropic framework that you can fall back on instead (I tried to make that clear in the boy-girl example discussed in the post).
The answer could just be extreme pessimism: maybe there just is no good way of making decisions about these questions. But that seems like it goes too far. If you needed to estimate the probability that your DNA contained a certain genetic mutation that affected about 30% of the population, then I think 30% really would be a good estimate to go for (absent any other information). I think it’s something all of us would be perfectly happy doing. But you’re technically invoking the self-sampling assumption there. Strictly speaking, that’s an anthropic question. It concerns indexical information (“*I* have this mutation”). If you like, you’re making the assumption that someone without that mutation would still be in your observer reference class.
Once you’ve allowed a conclusion like that, you have to let someone apply Bayes’ rule to it: if they learn that they do have a particular mutation, then hypotheses under which that mutation is more prevalent should be considered more likely. Now you’re doing anthropics proper. Nothing conceptually distinguishes this from the chain of reasoning used in the Doomsday Argument.
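To make the update concrete, here is a minimal sketch (the specific prevalence figures and the even prior are hypothetical, chosen for illustration): two hypotheses about the mutation’s prevalence, updated on the indexical observation “I have the mutation,” where the self-sampling assumption supplies the likelihood P(I have it | prevalence p) = p.

```python
# Two hypothetical hypotheses about the mutation's population prevalence,
# with an even prior over them (both numbers are illustrative).
priors = {0.3: 0.5, 0.6: 0.5}

# Self-sampling assumption: the chance that *I* have the mutation,
# given a prevalence hypothesis, equals that prevalence.
likelihood = {p: p for p in priors}

# Bayes' rule: multiply prior by likelihood, then normalize.
unnormalized = {p: priors[p] * likelihood[p] for p in priors}
total = sum(unnormalized.values())
posterior = {p: w / total for p, w in unnormalized.items()}

print(posterior)  # higher-prevalence hypothesis gains credence: 1/3 vs 2/3
```

The higher-prevalence hypothesis ends up twice as likely as the lower one, which is exactly the Doomsday-Argument style of shift: the indexical observation favors hypotheses that make observers like you more common.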
> If we want to make decisions on an issue where anthropics is relevant
This is probably our crux. I don’t think there are any issues where anthropics are relevant, because I don’t think there is any evidence about the underlying distribution which would enable updating based on an anthropic observation.