There are unsupervised methods you can use if you have unlabeled data, which I suspect you do. I don't know of standard methods, but here are a few simple ideas off the top of my head:
First, you can check whether A is consistent with the prior by seeing whether the average probability it predicts over your data matches your prior for Q. If it does not, there are many possible failure modes, such as your new data being different from the data used to set your prior, or A being wrong or miscalibrated. If I trusted the prior a lot and wanted to fix the problem, I would scale the evidence (the log odds ratio that A contributes on top of the prior) by a constant.
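For concreteness, here is a rough sketch of that check and fix in Python. It assumes A gives you a probability for Q on each unlabeled example, and it picks the scaling constant with a scalar optimizer; both of those choices are mine, not anything standard.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def scaled_probs(probs, prior, k):
    """Shrink (or stretch) A's evidence, i.e. its log odds shift from the prior, by k."""
    evidence = logit(probs) - logit(prior)
    return sigmoid(logit(prior) + k * evidence)

def fit_scale(probs, prior):
    """The consistency check and the fix in one step: find the k that makes the
    average scaled prediction match the prior.  k close to 1 means A was already
    consistent; k far from 1 is the warning sign."""
    gap = lambda k: (scaled_probs(probs, prior, k).mean() - prior) ** 2
    return minimize_scalar(gap, bounds=(0.0, 5.0), method="bounded").x
```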
You can apply the same test to the joint prediction. If A and B each produce the right frequency on their own, but the joint prediction (combining their evidence as if they were independent) does not, then they are correlated. It is probably worth doing this as a check on your assumption of independence. You might try to correct for the correlation by scaling the joint evidence, the same way I suggested scaling a single test. (Note that if A = B, the joint prediction double-counts the evidence, and scaling it by 1/2 is exactly the correct answer.)
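A sketch of the pairwise version, reusing logit and sigmoid from above. The joint prediction is the naive independence combination (add both tests' log odds evidence on top of the prior), and k is a scaling constant you would fit the same way as before.

```python
def joint_probs(probs_a, probs_b, prior, k=1.0):
    """Combine two tests as if they were independent, scaling the joint evidence by k."""
    evidence = (logit(probs_a) - logit(prior)) + (logit(probs_b) - logit(prior))
    return sigmoid(logit(prior) + k * evidence)

# Extreme case from the text: if B is an exact copy of A, the evidence is
# counted twice, and k = 0.5 reproduces A's single-test prediction exactly.
```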
But if you have many tests and you correct each pair, it is no longer clear how to combine them all. One simple answer is to drop tests from highly correlated pairs and assume everything else is independent. To salvage some information rather than dropping tests, you might cluster the tests into correlated groups, use scaling to correct within each cluster, and assume the clusters are independent.
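One unvetted way to do that clustering: measure how correlated the tests' evidence is on the unlabeled data, cut the resulting tree into groups, and down-weight each group. The threshold and the crude 1/(group size) weighting below are placeholders you would want to tune.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_tests(evidence, threshold=0.5):
    """Group tests whose evidence is highly correlated on the unlabeled data.

    evidence: array of shape (n_examples, n_tests), each column a test's
    log odds shift from the prior.  threshold is a hypothetical cut on
    1 - |correlation|.
    """
    corr = np.corrcoef(evidence, rowvar=False)
    dist = 1.0 - np.abs(corr)                      # correlated tests are "close"
    condensed = dist[np.triu_indices_from(dist, k=1)]
    return fcluster(linkage(condensed, method="average"),
                    t=threshold, criterion="distance")

def combine(prior, evidence, labels):
    """Scale evidence within each cluster, then add clusters as if independent.

    The within-cluster scale of 1/(cluster size) treats a cluster's tests as
    near-duplicates of one another, which is the crudest possible correction.
    """
    total = np.zeros(evidence.shape[0])
    for c in np.unique(labels):
        members = labels == c
        total += evidence[:, members].sum(axis=1) / members.sum()
    return 1 / (1 + np.exp(-(np.log(prior / (1 - prior)) + total)))
```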