This is what I meant by something being a proven truth: within the set of rules, one can find outcomes which are axiomatically impossible or necessary. The process itself may be random, but calling it random when something impossible didn’t happen seems odd to me. The very idea that 1 may be not-quite-certain is more than a little baffling, and I suspect that is the heart of the issue.
If 1 isn’t quite certain then neither is 0 (if something happens with probability 1, then the probability of it not happening is 0). It’s one of those things that pops up when dealing with infinity.
It’s best illustrated with an example. Let’s say we play a game where we flip a coin and I pay you $1 if it’s heads and you pay me $1 if it’s tails. With probability 1, one of us will eventually go broke (see Gambler’s ruin). It’s easy to think of a sequence of coin flips where this never happens; for example, if heads and tails alternated forever. The theory holds that such a sequence occurs with probability 0. Yet this does not make it impossible.
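To make this concrete, here is a minimal simulation sketch in Python (the $10 bankrolls, the safety cap on flips, and the function name are my own illustrative choices, not part of the game as stated):

```python
import random

def play_until_ruin(bankroll=10, max_flips=10_000_000):
    # Both players start with `bankroll` dollars. Heads moves a dollar
    # one way, tails the other; the game ends when either player hits 0.
    a = bankroll  # player A's money; player B holds 2*bankroll - a
    flips = 0
    while 0 < a < 2 * bankroll and flips < max_flips:
        a += 1 if random.random() < 0.5 else -1
        flips += 1
    # None signals that neither player went broke within the cap.
    return flips if a in (0, 2 * bankroll) else None

# In practice every simulated game ends in ruin, even though
# never-ending sequences (e.g. strict alternation) exist.
games = [play_until_ruin() for _ in range(1000)]
print(sum(g is not None for g in games), "of 1000 games ended in ruin")
```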
It can be thought of as the result of a limiting process. If I looked at sequences of N coin flips, counted the ones where no one went broke, and divided that count by the total number of possible sequences, then as I let N go to infinity this ratio would go to zero. The event occupies a region of measure 0 in the sample space.
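For small N this ratio can be computed by brute force. Here is a sketch of that count, assuming each player starts with just $3 so that enumerating all 2^N sequences stays tractable (the bankroll and the function name are illustrative, not from the discussion above):

```python
from itertools import product

def survival_ratio(n, bankroll=3):
    # Fraction of length-n flip sequences in which neither player goes
    # broke, i.e. the running sum never reaches +bankroll or -bankroll.
    survivors = 0
    for seq in product((1, -1), repeat=n):
        total = 0
        for step in seq:
            total += step
            if abs(total) == bankroll:
                break  # someone went broke partway through
        else:
            survivors += 1  # no break: the sequence survived all n flips
    return survivors / 2 ** n

# The fraction of "no one broke yet" sequences shrinks toward 0 as n grows.
for n in (4, 8, 12, 16, 20):
    print(n, survival_ratio(n))
```

The printed ratios shrink as n grows, which is exactly the limiting process described above.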
If the limit converges, then it can hit 0 or 1. Got it. Thank you.