You can only multiply out P(next result is heads) * (number of tosses) to get the expected number of heads if you believe those tosses are independent trials. The case of a biased coin toss explicitly violates this assumption.
But the tosses are independent trials, even for the biased coin. I think you mean that P(heads) is not 0.6; it’s either 0.5 or 1, you just don’t know which one it is.
Which means that P(heads on toss after next|heads on next toss) != P(heads on toss after next|tails on next toss). Independence of A and B means that P(A|B) = P(A).
As long as you’re using the same coin, P(heads on toss after next|heads on next toss) == P(heads on toss after next|tails on next toss).
You’re confusing the probability of a coin-toss outcome with your knowledge about it.
Consider an RNG which generates independent samples from a normal distribution centered on some—unknown to you—value mu. As you see more samples you get a better idea of what mu is, and your expectations about what numbers you are going to see next change. But these samples do not become dependent just because your knowledge of mu changes.
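A minimal sketch of this in Python (the true mu, the prior, and the noise scale are all invented for illustration): the observer’s posterior over mu tightens with each observation, yet the samples themselves are generated i.i.d. the whole time.

```python
import numpy as np

# Samples from N(mu, sigma^2) are i.i.d. given mu; only the observer's
# posterior over mu (and hence their predictions) changes as data arrives.
rng = np.random.default_rng(0)
true_mu = 2.7            # fixed, but unknown to the observer (made up)
sigma = 1.0              # known noise scale, assumed for simplicity

# Conjugate normal prior over mu: N(prior_mean, prior_var)
prior_mean, prior_var = 0.0, 10.0

for n, x in enumerate(rng.normal(true_mu, sigma, size=5), start=1):
    # Standard normal-normal conjugate update for one observation
    post_var = 1.0 / (1.0 / prior_var + 1.0 / sigma**2)
    post_mean = post_var * (prior_mean / prior_var + x / sigma**2)
    prior_mean, prior_var = post_mean, post_var
    print(f"after sample {n} (x = {x:.2f}): mu ~ N({post_mean:.2f}, {post_var:.3f})")
```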
Please actually do your math here.
We have a coin that is heads-only with probability 20%, and fair with probability 80%. We’ve already conducted exactly one flip of this coin, which came out heads (updating us from the prior of 10/80/10 over heads-only/fair/tails-only to 20/80/0), but no further flips yet.
For simplicity, event A will be “heads on next toss” (toss number 2), and B will be “heads on toss after next” (toss number 3).
P(A) = 0.2 * 1 + 0.8 * 0.5 = 0.6
P(B) = 0.2 * 1 + 0.8 * 0.5 = 0.6
P(A & B) = 0.2 * 1 * 1 + 0.8 * 0.5 * 0.5 = 0.4
Note that this is not the same as P(A) * P(B), which is 0.6 * 0.6 = 0.36.
The definition of independence is that A and B are independent iff P(A & B) = P(A) * P(B). These events are not independent.
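For what it’s worth, a quick Monte Carlo sketch of exactly this setup (using the 20/80 posterior and the event names above) reproduces these numbers, including the conditional probabilities disputed upthread:

```python
import random

# The coin is heads-only with probability 0.2, fair with probability 0.8.
# A = heads on toss 2, B = heads on toss 3 (same coin for both tosses).
random.seed(0)
N = 1_000_000
a_count = b_count = ab_count = 0

for _ in range(N):
    p_heads = 1.0 if random.random() < 0.2 else 0.5  # draw the coin once
    a = random.random() < p_heads                     # toss 2
    b = random.random() < p_heads                     # toss 3
    a_count += a
    b_count += b
    ab_count += a and b

p_a, p_b, p_ab = a_count / N, b_count / N, ab_count / N
print(f"P(A)       ~ {p_a:.3f}  (exact 0.6)")
print(f"P(B)       ~ {p_b:.3f}  (exact 0.6)")
print(f"P(A & B)   ~ {p_ab:.3f}  (exact 0.4, vs P(A)*P(B) = 0.36)")
print(f"P(B | A)   ~ {p_ab / p_a:.3f}  (exact 2/3)")
print(f"P(B | ~A)  ~ {(p_b - p_ab) / (1 - p_a):.3f}  (exact 0.5)")
```

In particular P(B|A) = 2/3 while P(B|~A) = 0.5, which is exactly the inequality asserted several comments up.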
Turning the math crank without understanding what you are doing is worse than useless.
Our issue is about how to understand probability, not which numbers come out of the chute.
It’s awful that you were downvoted in this thread when you were mostly right and the others were mostly wrong. I’m updating my estimate of LW’s average intelligence downward.
No it doesn’t! A coin biased towards heads can have p(H) = 0.6, p(T) = 0.4, and each flip can be an independent trial. The total number of heads from many flips will then be binomially distributed, not Poisson.
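A quick sketch of the corrected claim, with n = 10 flips picked arbitrarily: the head count follows Binomial(n, 0.6), whose variance n*p*(1-p) = 2.4 differs from its mean n*p = 6; a Poisson distribution would force the two to be equal.

```python
from math import comb

# Number of heads in n independent flips with p(H) = 0.6 is Binomial(n, p).
n, p = 10, 0.6
for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"P({k:2d} heads in {n} flips) = {prob:.4f}")

print(f"mean = {n * p}, variance = {n * p * (1 - p)}")  # 6.0 vs 2.4
```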