I want you to pin yourself down a little bit. What will you concede if you find out you are wrong about this? Will you concede a lot or almost nothing? Will you regard it as important and be glad, or will you be annoyed and bored? Will you learn much? Will your faith in Bayes be shaken? What do you think is at stake here?
It annoys me a lot when people do this, because I can be wrong in many different ways. If I give a maths proof and then say I cannot see how it could be wrong, someone might come along and ask whether I will give up my trust in maths (trust, not faith, is what I have in Bayes, by the way). When they reveal why I am wrong and it turns out I just made a mathematical error, I have learned that I need to be more careful, not that maths is wrong.
I am confident enough in that statement that I would be interested to find out why you think it is wrong.
If the way in which you prove me wrong turns out to be interesting and important, rather than a technical detail or a single place where I said something I didn't mean, then it will likely cause a significant change in my world view. I will not just immediately switch to Popper (there are more than two alternatives, after all), and I may well not give up on Bayes. This is not a central tenet of Bayesian decision theory (although it is a central tenet of instrumental rationality), so refuting it would not refute the whole theory.
My most likely response, if you really can show that more than prediction is required, is to acknowledge that at least one component of the complete Bayesian epistemology is still missing. It would not surprise me to find that something is missing, although it would surprise me to find that this specific thing was it.
No, I’m pretty sure that [my position is true]
I’m not asserting that I could not possibly be wrong, that P(I am wrong) = 0. All I am saying is that I feel pretty sure about this, which I do.