The usual situation is that both detectors actually have some correlation with Q, and thereby have some correlation with each other.
This need not be the case. Consider a random variable Z that is the sum of two independent random variables X and Y. Expert A knows X, and is thus correlated with Z. Expert B knows Y, and is thus correlated with Z. Experts A and B can still be uncorrelated. In fact, you can make X and Y slightly anticorrelated and still have them both be positively correlated with Z.
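A quick numerical sketch of that claim (the specific -0.1 anticorrelation and the variable names are just illustrative choices on my part):

```python
# X and Y are slightly anticorrelated, yet both correlate positively
# with their sum Z = X + Y.
import numpy as np

rng = np.random.default_rng(0)
cov = [[1.0, -0.1],
       [-0.1, 1.0]]                  # slight anticorrelation between X and Y
X, Y = rng.multivariate_normal([0, 0], cov, size=100_000).T
Z = X + Y

print("corr(X, Y) =", np.corrcoef(X, Y)[0, 1])   # ~ -0.1
print("corr(X, Z) =", np.corrcoef(X, Z)[0, 1])   # ~ +0.67
print("corr(Y, Z) =", np.corrcoef(Y, Z)[0, 1])   # ~ +0.67
```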
Just consider the limiting case: both are perfect predictors of Q, with value 1 for Q and value 0 for not-Q, and therefore perfectly correlated with each other.
Now consider small deviations from those perfect predictors. The correlation would still be large. Sometimes more, sometimes less, depending on the details of both predictors. Sometimes they will be more correlated with each other than with Q, sometimes more correlated with Q than with each other. The degree of correlation of A and B with Q will impose limits on the degree of correlation between A and B.
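To make those limits concrete, here is a small sketch assuming plain Pearson correlations: the 3x3 correlation matrix of (A, B, Q) must be positive semidefinite, which forces corr(A, B) into an interval determined by corr(A, Q) and corr(B, Q). The function name and the example numbers are mine, not anything from the above.

```python
import numpy as np

def corr_ab_bounds(r_aq: float, r_bq: float) -> tuple[float, float]:
    """Feasible range of corr(A, B) given corr(A, Q) and corr(B, Q)."""
    slack = np.sqrt((1 - r_aq**2) * (1 - r_bq**2))
    return r_aq * r_bq - slack, r_aq * r_bq + slack

print(corr_ab_bounds(1.0, 1.0))   # (1.0, 1.0): perfect predictors must agree
print(corr_ab_bounds(0.9, 0.9))   # (0.62, 1.0): still forced to correlate
print(corr_ab_bounds(0.6, 0.6))   # (-0.28, 1.0): may even be anticorrelated
```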
And of course, correlation isn’t really the issue here anyway; mutual information is much closer to the right quantity, with the same sort of triangle-inequality-style limits on the mutual information.
If someone is feeling energetic and really wants to work this out, I’d recommend looking into triangle inequalities for mutual information measures, and the previously mentioned work by Jaynes on the maximum entropy estimate of a variable from its known correlations with two other variables, and how that constrains the maximum entropy estimate of the correlation between the other two.
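As a toy illustration of the mutual-information framing (the error rates, and the assumption that the two detectors are conditionally independent given Q, are my choices rather than anything from the above): two noisy detectors of a fair binary Q end up sharing information with each other simply because each tracks Q.

```python
import numpy as np

def mutual_info(joint: np.ndarray) -> float:
    """Mutual information (in bits) of a 2x2 joint distribution."""
    px = joint.sum(axis=1, keepdims=True)   # marginal over rows
    py = joint.sum(axis=0, keepdims=True)   # marginal over columns
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

eps_a, eps_b = 0.1, 0.2                     # detector error rates (illustrative)
p_q = np.array([0.5, 0.5])                  # fair binary Q
p_a_given_q = np.array([[1 - eps_a, eps_a],  # rows: Q, cols: A
                        [eps_a, 1 - eps_a]])
p_b_given_q = np.array([[1 - eps_b, eps_b],  # rows: Q, cols: B
                        [eps_b, 1 - eps_b]])

# joint[q, a, b] = p(q) p(a|q) p(b|q): A and B conditionally independent given Q
joint = p_q[:, None, None] * p_a_given_q[:, :, None] * p_b_given_q[:, None, :]

print("I(A;Q) =", mutual_info(joint.sum(axis=2)))   # ~0.53 bits
print("I(B;Q) =", mutual_info(joint.sum(axis=1)))   # ~0.28 bits
print("I(A;B) =", mutual_info(joint.sum(axis=0)))   # ~0.17 bits, positive
```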