It can be done, but it’s a lot easier if you use odds ratios, as shown in Share likelihood ratios, not posterior beliefs. That said, experts tend to know a lot of the same information, so combining their odds naively involves major double counting. They also tend to know each other’s opinions, which means that either you accept the opinion they’re guaranteed to share via Aumann’s agreement theorem, or, more likely, you accept that they’re not acting fully rationally and take their beliefs with a grain of salt.
In your example:
A = 7:3, B = 8:2, P(Q) = 4:6
First, calculate the likelihood ratio implicit in expert B’s opinion (his posterior odds divided by the prior odds):
(8:2)/(4:6) = 48:8
= 6:1
Then multiply that by expert A’s posterior odds, which amounts to having A update on B’s evidence:
(7:3)(6:1) = 42:3
= 14:1
Thus, there’s a 14/15 = 93.3% chance of Q.
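For concreteness, here’s a minimal Python sketch of the rule above (the function name `posterior_odds` is my own, not from the post): factor the shared prior out of each expert’s posterior odds to get a likelihood ratio, multiply the ratios together, and multiply the prior back in.

```python
from fractions import Fraction

def posterior_odds(prior, experts):
    """Combine experts' posterior odds with a shared prior, assuming
    their evidence is independent given Q. Each expert's likelihood
    ratio is their posterior odds divided by the prior odds; the
    combined posterior odds is the prior times the product of those
    likelihood ratios."""
    combined = prior
    for expert in experts:
        combined *= expert / prior  # factor the shared prior out of each opinion
    return combined

# The example above: prior 4:6, expert A at 7:3, expert B at 8:2.
prior = Fraction(4, 6)
odds = posterior_odds(prior, [Fraction(7, 3), Fraction(8, 2)])
prob = odds / (odds + 1)
print(odds, prob)  # 14 and 14/15, i.e. a 93.3% chance of Q
```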
It is worth noting that this can be summarized by Phil’s own suggestion:

One approach is to add up the bits of information each expert gives, with positive bits for indications that Q(O) and negative bits that not(Q(O)).
That is, you can interpret the log of each expert’s likelihood ratio as the evidence/information that expert gives you beyond the prior on Q. Adding the evidence from A and B gives your aggregate evidence, which you add to the log odds of the prior on Q to get your log-odds posterior.
Wait. This doesn’t work. If the prior is 1:2, and you have n experts also giving estimates of 1:2, you should end up with the answer 1:2. None of the experts are providing information; they’re just throwing their hands up and spitting back the prior. Yet this approach multiplies all their odds ratios together, so that the answer changes with n.
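A quick sketch of that failure mode, assuming the naive reading where each expert’s reported log odds is added directly on top of the prior’s:

```python
import math

# Prior odds 1:2; each of n experts also reports posterior odds of 1:2.
prior_log_odds = math.log(1 / 2)
for n in (1, 2, 5):
    # Naively summing the reported log odds on top of the prior's:
    naive = prior_log_odds + n * math.log(1 / 2)
    print(n, math.exp(naive))  # drifts toward 0 as n grows, instead of staying 1/2
```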
ADDED: Oh, wait, sorry. You’re saying take the odds ratio they output, factor out the prior for each expert, multiply, and then factor the prior back in. Expanded,
(8/2)/(4/6) × (7/3)/(4/6) × (4/6) = 8064/576 = 14, and 1/(1 + 1/(8064/576)) = 14/15 ≈ 0.933
Great!
You don’t have to factor out the prior both times, since putting it back in cancels one of the factors. This is equivalent to having one expert update on the information the other has.
(8/2)/(4/6) × (7/3) = 336/24 = 14, and 336/(336 + 24) ≈ 0.933
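The cancellation is easy to verify numerically (a sketch reusing the numbers from the example above):

```python
from fractions import Fraction

prior = Fraction(4, 6)
a, b = Fraction(7, 3), Fraction(8, 2)

# Long form: factor the prior out of both experts' odds, then multiply it back in.
full = (a / prior) * (b / prior) * prior
# Shortcut: factor the prior out of only one opinion; restoring it cancels the other factor.
shortcut = (b / prior) * a

print(full, shortcut)  # both equal 14, i.e. odds of 14:1
```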
This is correct in the special case where all the information that your experts are basing their conclusions off of is independent. Slightly modifying your example, if the prior is 1:2 and you have n experts giving odds of 1:1, they each have a likelihood ratio of 2:1, so you get 1:2 * (2:1)^n = 2^(n-1):1. However, if they’ve all updated based on looking at the results from the same experiment, you’re double-counting the evidence; intuitively, you actually want to assign an odds ratio of 1:1.
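The modified example, in the same sketch form (independent evidence versus fully shared evidence):

```python
from fractions import Fraction

prior = Fraction(1, 2)     # prior odds 1:2
reported = Fraction(1, 1)  # each of n experts reports posterior odds 1:1
n = 4
lr = reported / prior      # likelihood ratio of 2:1 per expert

independent = prior * lr ** n  # if the experts' evidence is independent: 2**(n-1):1
shared = prior * lr            # if they all saw the same experiment, count it once: 1:1
print(independent, shared)     # 8 and 1
```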
The right thing to do here is to work out, for each subset of the experts, what information they share; but you probably don’t have that information, so you’d have to estimate it, and doing that well is something I’d have to think hard about. Hopefully the assumption of independence is approximately true in your data and you can just go with the naive method.
THANKS!