Trying to understand your second picture, when comparing P(A|B) and L(B|A), there should be at least one value at which they agree, right?
Ahh, I knew there was a bit of a risk there… I didn’t really start from a conditional distribution and pull out rows/columns; I just made up numbers to illustrate the sums-to-1 / doesn’t-sum-to-1 distinction. Fortunately, as you noticed, I happened to place one matching number in the two tables, which lets you deduce that the first shows the probabilities of different A given B=b2, and the second shows the likelihoods of different B given A=a1, as you demonstrated.
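The row/column distinction can be sketched in a few lines of Python. The joint table below is made up purely for illustration (it is not the numbers from the picture): conditioning on B=b2 normalizes a column into a proper distribution over A, while reading the likelihood of B along the A=a1 row collects one conditional probability per B, with no reason to sum to 1.

```python
# Hypothetical joint distribution P(A, B), A in {a1,a2,a3}, B in {b1,b2}.
# Numbers are made up for illustration only.
joint = {
    ("a1", "b1"): 0.10, ("a1", "b2"): 0.15,
    ("a2", "b1"): 0.20, ("a2", "b2"): 0.25,
    ("a3", "b1"): 0.10, ("a3", "b2"): 0.20,
}
A_vals = ["a1", "a2", "a3"]
B_vals = ["b1", "b2"]

# Conditional distribution P(A | B=b2): normalize the B=b2 column.
p_b2 = sum(joint[(a, "b2")] for a in A_vals)
cond = {a: joint[(a, "b2")] / p_b2 for a in A_vals}
print(sum(cond.values()))  # a proper distribution over A: sums to 1

# Likelihood of B given the observation A=a1:
# L(B=b | A=a1) = P(A=a1 | B=b), one value per b, read across the a1 row.
p_b = {b: sum(joint[(a, b)] for a in A_vals) for b in B_vals}
lik = {b: joint[("a1", b)] / p_b[b] for b in B_vals}
print(sum(lik.values()))  # in general does NOT sum to 1
```

With these particular made-up numbers the likelihoods happen to sum to 0.5, but any other total is possible; only the normalization of the conditional distribution is guaranteed.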
This looks like a very interesting theory, and the two examples given are pretty convincing. So I’m curious whether you know of cases where it fails to explain the bias you mentioned?
No, but I must admit that I haven’t read deeply into the literature.
Thanks!