If the correlation is small, your detectors suck. I doubt that’s really what’s happening. The usual situation is that both detectors actually have some correlation to Q, and thereby have some correlation to each other.
The way I interpreted the claim of independence is that the verdicts of the experts are not correlated once you conditionalize on Q. If that is the case, then DanielLC’s procedure gives the correct answer.
To see this more explicitly, suppose that expert A’s verdict is based on evidence Ea and expert B’s verdict is based on evidence Eb. The independence assumption is that P(Ea & Eb|Q) = P(Ea|Q) * P(Eb|Q).
Since we know the posteriors P(Q|Ea) and P(Q|Eb), and we know the prior of Q, we can calculate the likelihood ratios for Ea and Eb. The independence assumption allows us to multiply these likelihood ratios together to obtain a likelihood ratio for the combined evidence Ea & Eb. We then multiply this likelihood ratio with the prior odds to obtain the correct posterior odds.
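The odds-multiplication procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's production code; the function and variable names are my own, and it assumes the evidence is independent conditional on Q and also conditional on not-Q (both are needed for the likelihood ratios to multiply):

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def combine_experts(prior, post_a, post_b):
    """Combine two experts' posteriors P(Q|Ea) and P(Q|Eb) into
    P(Q|Ea & Eb), assuming Ea and Eb are independent conditional
    on Q (and conditional on not-Q)."""
    # Back out the likelihood ratio each expert's evidence contributes:
    lr_a = odds(post_a) / odds(prior)
    lr_b = odds(post_b) / odds(prior)
    # Conditional independence lets us multiply the ratios,
    # then apply them to the prior odds:
    combined_odds = odds(prior) * lr_a * lr_b
    return combined_odds / (1 + combined_odds)

# Example: prior 0.5, each expert reports 0.8.
# Each likelihood ratio is 4, so the combined odds are 16:1,
# giving a posterior of 16/17 -- stronger than either expert alone.
print(combine_experts(0.5, 0.8, 0.8))
```

Note that the combined posterior exceeds either individual estimate: two conditionally independent pieces of evidence pointing the same way should reinforce each other, which is exactly what naive averaging of the experts' numbers would miss.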
You can write that, and it’s likely true in some cases, but step back and think: does this really make sense to say in the general case?
I just don’t think so. The whole problem with mixture of experts, or combining multiple data sources, is that the marginals are not in general independent.
Sure, it’s not generically true, but PhilGoetz is thinking about a specific application in which he claims that it is justified to regard the expert estimates as independent (conditional on Q, of course). I don’t know enough about the relevant domain to assess his claim, but I’m willing to take him at his word.
I was just responding to your claim that the detectors must suck if the correlation is small. That would be true if the unconditional correlation were small, but it’s not true if the correlation is small conditional on Q.