Here’s how I look at it. Suppose you want to prove A, so you look for evidence until either you can prove it at p = 0.05, or it’s definitely false. Let E be the outcome where the experiment proves A, and !E the outcome where it disproves A. Then P(A|E) = 0.95 and P(A|!E) = 0. Let’s assume the prior for A is P(A) = 0.5.
P(A|E) = 0.95
P(A|!E) = 0
P(A) = 0.5
By conservation of expected evidence, P(A|E)P(E) + P(A|!E)P(!E) = P(A) = 0.5
0.95 P(E) = 0.5 (the second term vanishes because P(A|!E) = 0)
P(E) = 0.526
So the experiment is more likely to succeed than fail. Even though A has even odds of being true, you can prove it more than half the time. It sounds like you’re cheating somehow, but the thing to remember is that there are false positives but no false negatives. All you’re establishing is “probably A” as opposed to “definitely not A”, and “probably A” is the more likely outcome.
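If it helps to see the arithmetic, here’s a small Python sketch of the calculation above; the numbers are just the assumptions stated in the text, not measured values.

p_A = 0.5              # prior P(A)
p_A_given_E = 0.95     # P(A|E): the experiment "proves" A at p = 0.05
p_A_given_not_E = 0.0  # P(A|!E): a negative result rules A out entirely

# Conservation of expected evidence:
#   P(A|E) P(E) + P(A|!E) (1 - P(E)) = P(A)
# Solving for P(E):
p_E = (p_A - p_A_given_not_E) / (p_A_given_E - p_A_given_not_E)
print(p_E)  # 0.526..., the chance the experiment "succeeds"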
But P(A|E) = 0.95 was an assumption here. Had that probability been different, P(E) would have been different.
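For example, with P(A|!E) still 0 the identity reduces to P(E) = P(A) / P(A|E), so changing the assumed P(A|E) shifts P(E) accordingly (illustrative values only):

p_A = 0.5
for p_A_given_E in (0.99, 0.95, 0.80, 0.51):
    # P(E) = P(A) / P(A|E) when P(A|!E) = 0
    print(p_A_given_E, round(p_A / p_A_given_E, 3))
# prints roughly 0.505, 0.526, 0.625, 0.98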