Say Tim states, “There is a 20% probability that X will occur”. It’s not obvious to me what that means for Bayesians.
It could mean:
1. Tim’s prior is that there’s a 20% chance. (Or his posterior, given the evidence.)
2. Tim believes that when the listeners update on him saying there’s a 20% chance (perhaps with him providing insight into his thinking), their posteriors will converge to there being a 20% chance.
3. Tim believes that the posteriors of listeners may not immediately converge to 20%, but the posteriors of the enlightened versions of these listeners would. Perhaps the listeners are 6th graders who won’t know how much to update, but if they learned enough, they would converge to 20%.
I’ve heard some more formalized proposals like, “I estimate that if I and several other well-respected people thought about this for 100 years, we would wind up estimating that there was a 20% chance”, but even this assumes that listeners would converge on this same belief. This seems like possibly a significant assumption! It’s quite similar to Coherent Extrapolated Volition, and similarly questionable.
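To make the second and third readings a bit more concrete, here’s a toy sketch of a listener “updating on Tim saying 20%”. Everything about this model is an illustrative assumption (the function name, the weighting scheme, the numbers), not a claim about how people actually update: the listener pools their own log-odds with Tim’s stated log-odds, weighted by how much evidential weight they give Tim, and as that weight grows their posterior approaches 20%.

```python
import math

def pool_with_speaker(listener_p, speaker_p, weight_on_speaker):
    """Toy model (an assumption, not a description of real listeners):
    average the listener's and speaker's probabilities in log-odds space,
    with weight_on_speaker in [0, 1] controlling deference to the speaker."""
    def logit(p):
        return math.log(p / (1 - p))
    def sigmoid(x):
        return 1 / (1 + math.exp(-x))
    pooled = (1 - weight_on_speaker) * logit(listener_p) + weight_on_speaker * logit(speaker_p)
    return sigmoid(pooled)

# A listener who starts at 50% and heavily defers to Tim's "20%" lands near 20%;
# a listener who barely weights Tim's statement barely moves.
print(round(pool_with_speaker(0.5, 0.2, 0.9), 2))  # 0.22
print(round(pool_with_speaker(0.5, 0.2, 0.1), 2))  # 0.47
```

On this toy reading, the second interpretation is the claim that the listeners’ weights on Tim are high enough that their pooled numbers land at (or near) 20%.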
It’s definitely the first. The second is bizarre. The third can be steelmanned as “Given my evidence, an ideal thinker would estimate the probability to be 20%, and we all here have approximately the same evidence, so we all should have 20% probabilities”, which is almost the same as the first.
I don’t think it’s only the first. It seems weird to me to imagine telling a group that “There’s a 20% probability that X will occur” if I really have little idea and would guess many of them have a better sense than me. I would only personally feel comfortable doing this if I was quite sure my information was quite a bit better than theirs. Otherwise, I’d say something like, “I personally think there’s a 20% chance, but I really don’t have much information.”
I think my current best guess at this is something like:
When humans say a thing X, they don’t mean the literal translation of X, but rather are pointing to X’, which is a specific symbol that other humans generally understand. For instance, “How are you” is a greeting, not typically a literal question. [How Are You] can be thought of as a symbol that’s very different from the sum of its parts.
That said, I find it quite interesting that the basics of human use of language seem to be relatively poorly understood, in the sense that I’d expect many people to disagree on what they think “There is a 20% probability that X will occur” means, even after using it with each other in settings that assume some amount of shared understanding.
I take it to mean that if Tim is acting optimally and has to take a bet on the outcome, 1:4 would be the point where both sides of the bet are equally profitable to him, while if the odds deviate from 1:4, one side of the bet would be preferable to him.
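To spell out the arithmetic behind this reading (a sketch of the standard expected-value calculation; the function and numbers are mine, chosen only for illustration): at odds of 1:4, one side risks 1 unit to win 4, and a probability of exactly 20% is what makes both sides of the bet break even in expectation.

```python
def expected_profit(p_occurs, stake, payout_if_win):
    """Expected profit of a bet that wins `payout_if_win` if X occurs
    and loses `stake` if it doesn't."""
    return p_occurs * payout_if_win - (1 - p_occurs) * stake

# At Tim's stated 20%, risking 1 to win 4 on "X occurs" breaks even,
# and so does the other side (risking 4 to win 1 on "X does not occur").
print(round(expected_profit(0.2, stake=1, payout_if_win=4), 9))  # 0.0
print(round(expected_profit(0.8, stake=4, payout_if_win=1), 9))  # 0.0

# If the offered odds drift away from 1:4 while Tim still believes 20%,
# one side becomes profitable in expectation.
print(round(expected_profit(0.2, stake=1, payout_if_win=5), 9))  # 0.2
```

This is just the “fair betting odds” reading of a stated probability; it pins down Tim’s own betting price, but says nothing about how much weight listeners should put on his 20%, which is the concern raised below.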
One thing this wouldn’t take into account is strength or weight of evidence. If Tim knew that all of the listeners had far more information than him, and thus probably could produce better estimates of X, then it seems strange for Tim to tell them that the chances are 20%.
I guess my claim is that saying “There is a 20% probability that X will occur” is more similar to “I’m quite confident that the chances are 20%, and you should generally be too” than it is to “I personally believe that the chances are 20%, but have no idea how much that should update the rest of you.”
Other things that Tim might mean when he says 20%:
- Tim is being dishonest, and believes that the listeners will update away from the radical and low-status figure of 20% to avoid being associated with the lowly Tim.
- Tim believes that other listeners will be encouraged to make their own probability estimates with explicit reasoning in response, which will make their expertise more legible to Tim and other listeners.
- Tim wants to show cultural allegiance with the Superforecasting tribe.