Anyone who hasn’t already, check out Anna’s OB post, Share likelihood ratios, not posterior beliefs.
(Anna: you write lots of great stuff; link it up!)
It was written about a year ago, but it’s actually a good follow-up to this post. The point is that, ideally, people would share raw observations. But sometimes that’s too slow, so instead we should share some form of summarized evidence. Sharing opinions is a noisy way to do that, because other people’s prior beliefs get needlessly mixed in with the observations, and then with your own opinion, much like the sort of agreement ritual Anna describes here.
It’s much better if rational people share Bayes factors from independent tests. That is, you ask your friends, “By what multiplicative factor did your prior odds increase?” Anna gives a rough example of two rational friends with cynical and optimistic priors, who know a third party, John, in different contexts (i.e. substantially independent tests). If the optimist says “John is a terrible person”, the cynic, knowing the optimist’s priors, can tell there must have been a significant update, i.e. a large Bayes factor and hence significant evidence, that John really is a terrible person. But if the cynic said the same thing, the optimist wouldn’t learn much.
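A rough sketch in Python, with made-up priors and a made-up Bayes factor (not Anna’s actual numbers), of why the factor travels better than the opinion:

    # Odds form of Bayes' rule: posterior odds = prior odds * Bayes factor.
    def update(prior_odds, bayes_factor):
        return prior_odds * bayes_factor

    def prob(odds):
        return odds / (1 + odds)

    optimist_prior_odds = 1 / 9   # optimist: P(John is terrible) = 0.1
    cynic_prior_odds = 3 / 2      # cynic:    P(John is terrible) = 0.6

    # Suppose the optimist's own (independent) observations favour
    # "terrible" by a factor of 20.
    optimist_bf = 20.0

    # Sharing only the opinion: the optimist's posterior is ~0.69, which the
    # cynic can interpret only by backing out the optimist's known prior.
    print(prob(update(optimist_prior_odds, optimist_bf)))   # ~0.69

    # Sharing the Bayes factor: since the tests are independent, the cynic
    # just multiplies it into their own prior odds and lands at ~0.97.
    print(prob(update(cynic_prior_odds, optimist_bf)))      # ~0.97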
This doesn’t work as easily if you share common observations with your friends (rendering your tests non-independent) and consider your character judgement updates relatively unsusceptible to computational errors. In that case, you have to separate out which parts of your current opinion come from the unshared observations, estimate the Bayes factors (the significance of that evidence) for those observations only, and share those factors instead. Or just resort to describing the unshared observations explicitly.
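A similarly made-up sketch of that last case: estimate how much of your overall factor came from the observations you both saw, and report only the remainder (shared_bf here is an assumed, illustrative estimate):

    # Made-up numbers: part of my evidence overlaps with my friend's.
    my_prior_odds = 1.0        # P(terrible) = 0.5 before any observations
    my_posterior_odds = 12.0   # where my observations actually left me

    shared_bf = 4.0            # estimated factor from the observations we both made

    # Assumes the shared and unshared observations are independent given the hypothesis.
    overall_bf = my_posterior_odds / my_prior_odds   # 12.0, covers everything I saw
    unshared_bf = overall_bf / shared_bf             # 3.0, the part worth reporting

    print(unshared_bf)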