There’s too much incentive to overestimate your own p(doom) when talking to people who have a lower p(doom) than you, with no way to check whether the probabilities are accurate. (This could also apply to underestimating, although I think those incentives are weaker.)
This problem seems pretty tough to solve; does anyone have ideas about what possible solutions could look like?
Huh, in my circles at least it seems like the incentive goes in the opposite direction. I get judged for having a high p(doom).
Oh, I probably should have specified that the incentive I’m referring to is getting people to adjust their own beliefs, not stuff like judgement.
Lemme give an example:
Your friend has p(doom) = 1%; you have p(doom) = 10%. You overstate your belief as 20% to pull your friend closer to your truly held estimate, on the theory that the larger the reported gap, the more your friend will revise upward based on your judgement.
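To make the mechanism concrete, here’s a toy sketch. The updating rule (the friend splits the difference between their own estimate and whatever you report) and the 50/50 weight are assumptions made purely for illustration, not anything established in this thread:

```python
# Toy model: the listener updates by averaging their current estimate
# with whatever p(doom) the speaker reports. The 50/50 weight is an
# illustrative assumption, not a claim about how people actually update.

def friend_update(friend_p: float, reported_p: float, w: float = 0.5) -> float:
    """Friend's new estimate after hearing your reported p(doom)."""
    return (1 - w) * friend_p + w * reported_p

friend_p = 0.01  # friend's p(doom) = 1%
true_p = 0.10    # your actual p(doom) = 10%

honest = friend_update(friend_p, true_p)   # you report your true 10%
inflated = friend_update(friend_p, 0.20)   # you report 20%

print(f"honest report:   friend moves to {honest:.2%}")    # 5.50%
print(f"inflated report: friend moves to {inflated:.2%}")  # 10.50%
# Doubling the report parks the friend almost exactly on your true 10%,
# which is the incentive described above.
```

Under this (admittedly crude) rule, overstating by roughly the size of the gap lands the friend right on your true estimate, so the incentive to exaggerate falls straight out of the arithmetic.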
OK, but isn’t this symmetric? Doesn’t your friend have an incentive to report p(doom) = 0.1% in the hope that you’ll revise down based on their judgment?
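For what it’s worth, plugging the symmetric case into the same toy averaging model (same illustrative assumptions as above):

```python
# Same averaging rule as the sketch above, now with the friend
# under-reporting in the hope of pulling you down.

def friend_update(p: float, reported_p: float, w: float = 0.5) -> float:
    return (1 - w) * p + w * reported_p

you_p = 0.10  # your p(doom) = 10%

honest = friend_update(you_p, 0.01)     # friend reports their true 1%
lowball = friend_update(you_p, 0.001)   # friend reports 0.1%

print(f"honest report:  you move to {honest:.2%}")   # 5.50%
print(f"lowball report: you move to {lowball:.2%}")  # 5.05%
# Reports are floored at 0%, so under this toy rule the lowball buys
# far less movement than the upward exaggeration did.
```

So the direction of the incentive is symmetric, although under this particular rule the low-p(doom) side has much less leverage; that could easily be an artifact of the linear-averaging assumption.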
At any rate, I think the overall balance of incentives differs from person to person and from friend group to friend group. At my workplace, at least, it sure feels like the incentives push towards a lower p(doom) than mine.
I think there might be personal or professional incentives towards underestimating or overestimating depending on the situation, but the moral incentive will always be towards exaggerating your belief.
I feel like there would be a stronger moral incentive for the high-p(doom) people to exaggerate, but I’m having a hard time putting this belief into words.