Thank you for writing this.
I think most of the points here are good ones to make, but I also think it's useful as a general caution against this type of exercise being used as an argument at all! So I'd obviously caution against anyone taking your response itself as a reasonable attempt at estimating the "correct" Bayes factors, because this is all very bad epistemic practice! Public explanations and arguments are social claims, and they usually contain heavily filtered evidence (even if the filtering is unconscious). Don't do this in public.
That is, this type of informal Bayesian estimate is useful as part of a ritual for changing your own mind, when done carefully. That requires a significant degree of self-composure, a willingness to change one's mind, and a high degree of justified confidence in your own mastery of unbiased reasoning.
Here, though, it is presented as an argument, which is not how any of this should work. And in this case it was written by someone who already had a strong view of what the outcome should be, repeated frequently in public, which makes it doubly hard to accept at face value the implicit but necessary claim that the exercise started from an unbiased point! At the very least, we would need strong evidence that it was not an exercise in motivated reasoning, that the bottom line wasn't written before the evaluation started. That statement is completely missing, though to be fair, it would be unbelievable even if it had been made.
This whole thing reminds me of Scott Alexander's Pyramid essay. That's a really good case where there seems to be a natural statistical reference class, where you can easily get a giant Bayes factor that looks "statistically well justified", and where you can answer every counterargument with "well, the likelihood is 1 in 10^5 that the pyramid's latitude would match the speed of light in m/s". It's a good reductio of taking even fairly well-justified-sounding subjective Bayes factors at face value.
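To make the reductio concrete, here is a minimal sketch of why a face-value factor like that 1 in 10^5 is so corrosive: pushed through Bayes' rule as-is, it forces a large update toward the absurd hypothesis under almost any sane prior. The prior values below are invented purely for illustration, not taken from Scott's essay or from the post.

```python
# Illustrative only: what happens if a naive 1-in-10^5 "coincidence"
# Bayes factor is taken at face value. The priors are made up.

def posterior_prob(prior_prob: float, bayes_factor: float) -> float:
    """Update a prior probability by a likelihood ratio, via the odds form of Bayes' rule."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

naive_bf = 1e5  # "1 in 10^5 that the latitude matches by chance"

for prior in (1e-9, 1e-6, 1e-3):
    print(f"prior {prior:.0e} -> posterior {posterior_prob(prior, naive_bf):.3g}")

# prior 1e-09 -> posterior 0.0001   (still small)
# prior 1e-06 -> posterior 0.0909   (no longer negligible)
# prior 1e-03 -> posterior 0.99     (near certainty)
```

Since nobody actually accepts the conclusion, the naive factor itself must be the problem, which is exactly the reductio.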
And I think it's built into your criticism that, because the problem is social and there is hidden evidence filtering going on, there will also tend to be a meta-level explanation for why my coincidence-finding is different from your coincidence-finding.
To make sure I understand your point… the "Bayes factors" I give, like 1 in 1 million, aren't meant to be taken literally. Rather, they're meant to show how easy it is to get a high BF in this case if you do a very quick analysis that doesn't account for details. I don't expect this post, on its own, to convince anyone of the zoonotic origin hypothesis.
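For instance, the quick arithmetic I have in mind looks something like this (the individual factors are invented for illustration, not the actual numbers from the post): treat a handful of coincidences as independent and the factors multiply straight into the millions.

```python
# Toy version of the "very quick analysis": treat a few coincidences as
# independent and multiply their likelihood ratios. Numbers are invented.
import math

coincidence_factors = {
    "coincidence A": 10.0,
    "coincidence B": 20.0,
    "coincidence C": 50.0,
    "coincidence D": 100.0,
}

combined_bf = math.prod(coincidence_factors.values())
print(f"combined Bayes factor: {combined_bf:,.0f}")  # combined Bayes factor: 1,000,000

# Accounting for correlations between the "coincidences", and for how they
# were selected in the first place, would shrink this number dramatically.
```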
Yeah, but I think it's more than them not being meant literally. The exercise is fundamentally flawed when it's used as an argument rather than very narrowly for honest truth-seeking, which is almost never possible in a discussion without unreasonably high levels of trust and confidence in others' epistemic reliability.