If you say that then you’re conceding the point, because Y is nothing other than the conjunction of a carefully chosen subset of the trivial statements comprising Z, re-ordered so as to give a proof that can easily be followed.
Figuring out how to reorder them requires mathematical knowledge: a special kind of knowledge that can be generated not just through contact with the external world but also by spending computer cycles on it.
Yes. It’s therefore important to quantify how many computer cycles and other resources are involved in computing a prior. There is a souped-up version of taw’s argument along those lines: either P = NP, or else every prior that is computable in polynomial time will fall for the conjunction fallacy.
If you want to make it a bit less unrealistic, imagine there are only, say, 1000 difficult proofs randomly chopped and spliced rather than a gazillion—but still too many for the subject to make head or tail of. Perhaps imagine them adding up to a book about the size of the Bible, which a person can memorize in its entirety given sufficient determination.
The Bayesian doesn’t know Z is stronger than Y. He can’t even read all of Z. Or if you compress it, he can’t decompress it.
P(Y|Z) < 1.
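For contrast, here is what logical omniscience demands in a toy setting. This is a minimal sketch with invented atoms and a uniform world model, not anything from the discussion above: if Y is a conjunction of a subset of the trivial statements making up Z, then every world satisfying Z satisfies Y, so an ideal reasoner must assign P(Z) ≤ P(Y) and P(Y|Z) = 1. The bounded Bayesian's P(Y|Z) < 1 is a departure from this.

```python
from itertools import product

# Toy propositional model: a "world" is a truth assignment to n atoms.
# Z asserts all the atoms; Y asserts a chosen subset of them, so Z
# entails Y. The atom counts here are arbitrary illustration.

n = 10
Z_atoms = set(range(n))   # Z: conjunction of all ten trivial statements
Y_atoms = {0, 2, 5}       # Y: conjunction of a subset of them

def satisfies(world, atoms):
    """True iff the world makes every atom in `atoms` true."""
    return all(world[i] for i in atoms)

worlds = list(product([False, True], repeat=n))
p = 1 / len(worlds)       # uniform prior over worlds

P_Y = sum(p for w in worlds if satisfies(w, Y_atoms))
P_Z = sum(p for w in worlds if satisfies(w, Z_atoms))
P_Y_and_Z = sum(p for w in worlds
                if satisfies(w, Y_atoms) and satisfies(w, Z_atoms))

# A coherent prior cannot commit the conjunction fallacy ...
assert P_Z <= P_Y
# ... and since Z entails Y, conditioning on Z makes Y certain.
assert P_Y_and_Z / P_Z == 1.0   # P(Y|Z) = 1 for the ideal reasoner
```

The bounded reasoner in the argument above cannot perform this enumeration (there are too many worlds, and he cannot even read all of Z), which is exactly why his conditional probability falls short of 1.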
Imagine he has read and memorized all of Z.
Oh I see. Chopped and spliced. That makes more sense. I missed that in your previous comment.
The Bayesian still does not know that Z implies Y, because he has not found Y in Z, so there still isn’t a problem.