I’m not sure that I agree with this.
“I don’t know” is an effective way of communicating “I probably don’t have any more information on the subject than you do”. That is itself both a useful and a meaningful thing to communicate.
Or, to put it another way: if EY said to me “there are between 10 and 1000 apples on that tree”, then I would use that assessment to update my Bayesian estimate of how many apples are likely to be on the tree. However, if EY does not have any more information on the subject than I do, then updating on his statement adds nothing but randomness to my answer.
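A rough sketch of that updating step (the uniform prior over 0–2000 apples and the interval likelihood are my own illustrative choices, not anything stated above): if his statement carries evidence I lack, it narrows my distribution; if it reflects nothing beyond what I already know, the likelihood is flat and my posterior is exactly my prior.

```python
import numpy as np

# Illustrative prior over the number of apples on the tree: uniform on 0..2000.
counts = np.arange(0, 2001)
prior = np.ones_like(counts, dtype=float)
prior /= prior.sum()

# Case 1: the statement "between 10 and 1000" carries real information,
# modeled as a likelihood of 1 inside the interval and 0 outside.
likelihood = ((counts >= 10) & (counts <= 1000)).astype(float)
posterior = prior * likelihood
posterior /= posterior.sum()

# Case 2: the statement reflects no information beyond my own,
# so the likelihood is flat and the posterior equals the prior.
flat_likelihood = np.ones_like(counts, dtype=float)
unchanged = prior * flat_likelihood
unchanged /= unchanged.sum()

print(np.allclose(unchanged, prior))  # True: there is nothing to update on
```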
His answer isn’t random. It’s based on his knowledge of apple trees in bloom (he states later that he assumed the tree was an apple tree in bloom). If you knew nothing about apple trees, or knew less than he did, or had different but no more reliable information than he did, or were less able to correctly interpret what information you did have, then you would have learned something from him. If you had all the information he did, believed that he was a rationalist and at least no worse than you at coming to the right answer, and you had a different estimate than he did, then you still ought to update towards his estimate (Aumann’s Agreement Theorem).
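This is not Aumann’s theorem itself, but a minimal sketch of why you should still move toward his estimate, under the (assumed, for illustration only) simplification that both estimates are independent Gaussian reads on the true count:

```python
def combine_estimates(mine, my_var, his, his_var):
    """Precision-weighted pooling of two independent Gaussian estimates.

    The pooled mean always lies between the two estimates, and the more
    reliable estimate (smaller variance) pulls harder.
    """
    w_mine = 1.0 / my_var
    w_his = 1.0 / his_var
    mean = (w_mine * mine + w_his * his) / (w_mine + w_his)
    var = 1.0 / (w_mine + w_his)
    return mean, var

# If he is at least as reliable as I am, my pooled estimate moves a
# substantial part of the way toward his.
print(combine_estimates(mine=300, my_var=100**2, his=150, his_var=100**2))
# -> (225.0, 5000.0)
```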
This does illustrate the point that simply stating your final probability distribution isn’t sufficient to convey everything you know. Not surprisingly, you can’t compress much beyond the original evidence without at least some information loss; how important that loss is depends on the domain. It is difficult to come up with a general algorithm for useful information transfer even just between rationalists, and you cannot really do it at all with someone who doesn’t know probability theory.
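One way to see that compression loss concretely (an illustrative example of my own, using Beta posteriors): two people can report the same probability while holding very different weights of evidence, and the difference only becomes visible once further evidence has to be folded in.

```python
# Two observers both report a ~70% estimate: one has seen 7 successes in 10
# trials, the other 700 in 1000.  Represent each by a Beta(successes, failures)
# posterior; the stated probabilities match, but the evidence behind them
# does not.
def beta_mean(successes, failures):
    return successes / (successes + failures)

print(beta_mean(7, 3), beta_mean(700, 300))            # 0.7 and 0.7

# After 20 new failures, the lightly-evidenced estimate moves a lot,
# while the heavily-evidenced one barely shifts.
print(beta_mean(7, 3 + 20), beta_mean(700, 300 + 20))  # ~0.23 vs ~0.69
```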