One of the points I was trying to make is that you can’t apply anthropic reasoning like that. You need to be comparative: start with at least two models, then update on your anthropic data. As an analogy, I might be able to give you very good reasons for believing that theory A would explain a phenomenon, but if theory B explains it better, then we should go with theory B. It's easy to obscure this by talking exclusively about theory A.
So the question is not whether 1) explains the situation well, but whether 1) explains it better than 3), taking into account things such as prior probabilities.
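To make the comparison concrete, here is a minimal sketch of the Bayesian bookkeeping I have in mind (the labels H_1, H_3 and D are just shorthand for explanations 1) and 3) and our anthropic data): the posterior odds between the two explanations are the prior odds multiplied by how well each one predicts the data.

$$\frac{P(H_1 \mid D)}{P(H_3 \mid D)} \;=\; \frac{P(H_1)}{P(H_3)} \times \frac{P(D \mid H_1)}{P(D \mid H_3)}$$

Even if P(D | H_1) is high on its own, 3) still wins whenever the product on the right-hand side favours it.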
Update: On second thought, many-worlds is a pretty good answer when combined with the anthropic principle. I suppose that my argument then only shows that case 2) isn’t a very good explanation.
I took it as too-obvious-to-mention that 2 & 3 explain the situation just fine, but have massive complexity penalties.