The only thing that matters is making successful predictions. How they smell doesn’t.
To know whether a method makes successful predictions, you calibrate it against other data. That gives you an idea of how accurate your predictions actually are.
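To make that concrete, here is a minimal sketch of what "calibrate against other data" can look like. The data is simulated for illustration, not drawn from any real forecasting record:

```python
import numpy as np

# Minimal calibration check: bin predictions by stated probability and
# compare each bin's mean prediction to the observed event frequency.
rng = np.random.default_rng(0)
predictions = rng.uniform(0.0, 1.0, 1000)             # stated probabilities
outcomes = rng.uniform(0.0, 1.0, 1000) < predictions  # simulated outcomes

bins = np.linspace(0.0, 1.0, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (predictions >= lo) & (predictions < hi)
    if mask.any():
        print(f"stated {predictions[mask].mean():.2f}  "
              f"observed {outcomes[mask].mean():.2f}  (n={mask.sum()})")
```

If the method is well calibrated, the stated and observed columns roughly match in every bin; systematic gaps tell you how much of its precision you can trust.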
Depending on the purpose for which you need the numbers, different levels of accuracy are good enough.
I’m not making some Pascal’s mugging argument that people are supposed to care more about Zeus, where I would need to know the difference between 10^{-15} and 10^{-16}. I made an argument about how many orders of magnitude my beliefs should be swayed.
My current belief in the probability of Zeus is uncertain enough that I have no idea whether it changed by orders of magnitude, and I am very surprised that you seem to think the probability sits in a narrow enough range that claiming to have increased it by an order of magnitude becomes meaningful.
You can compute the likelihood ratio without knowing the absolute probability.
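That is just Bayes’ theorem in odds form:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}$$

The likelihood ratio P(E|H)/P(E|¬H) depends only on how well each hypothesis predicts the evidence, not on the prior, so a ratio of 10 shifts the odds by one order of magnitude wherever the prior happens to sit.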
Being surprised is generally a sign that it’s useful to update a belief.
I would add that, given my model of you, it doesn’t surprise me that this surprises you.