Suppose you want to make a binary decision with a specified bias $p$. If, say, $p = 1/8$, you can throw a fair coin 3 times and take the decision as positive if you got $HHH$, and negative otherwise.
But if $p$ has a large denominator (say $p = 1/1000$), or is a weird number, say $1/\pi$, this method breaks down. There is another really beautiful method I learned some time ago, which gets you any biased coin in a constant expected number of throws, namely 2! (I lost the source, unfortunately.)
It works as follows: you throw the coin until the first time you get a head; say this happened on your $n$-th throw. Then you accept if and only if the $n$-th digit in the binary expansion of $p$ is 1. It is easy to show that the acceptance probability is exactly $p$, and that the expected number of coin throws is 2.
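Here is a minimal sketch in Python (my own code, not from the lost source). It extracts the binary digits of $p$ on the fly by repeated doubling, so even $p = 1/\pi$ works directly. Floating-point doubling loses precision after roughly 50 digits, but the game only reaches that depth with probability about $2^{-50}$, so in practice it doesn't matter.

```python
import math
import random

def biased_coin(p):
    """Return True with probability p, using only fair coin flips.

    Each round extracts the next binary digit of p (by doubling) and
    flips a fair coin; the first heads ends the game, and we accept
    iff the digit at that position is 1. Expected flips: 2.
    """
    while True:
        p *= 2                       # shift the binary expansion left by one
        digit = 1 if p >= 1 else 0   # the next binary digit of p
        p -= digit
        if random.getrandbits(1):    # fair flip; treat 1 as heads
            return digit == 1

# e.g. a decision with bias 1/pi:
print(biased_coin(1 / math.pi))
```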
Also, here are three quick examples for anyone still wondering exactly how this works (a quick simulation check follows the examples). Remember that the chance of flipping tails up to a given digit position in the binary expansion and then flipping heads is $(1/2)^n$, where $n$ is the digit number ($1/2$ for the first digit after the binary point, $1/4$ for the second, etc.).
$p = 1/4$
In binary: $p = 0.01$
My chance to land on the 1 with a heads is exactly $1/4$.
$p = 2/3$
In binary: $p = 0.101010\ldots$
My chance to land on a 1 with my first heads is $1/2 + 1/8 + 1/32 + \cdots = 2/3$.
$p = 1/7$
In binary: $p = 0.001001\ldots$
My chance to land on a 1 with my first heads is $1/8 + 1/64 + \cdots = (1/8)/(1 - 1/8) = 1/7$.
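If you'd rather check this empirically than sum the series, here is a quick Monte Carlo run (names and trial count are mine) that verifies both the $2/3$ example and the expected flip count of 2, using the same digit-by-doubling trick as the sketch above:

```python
import random

def biased_coin_counting(p):
    """Same scheme as above, but also report how many flips were used."""
    flips = 0
    while True:
        p *= 2
        digit = 1 if p >= 1 else 0
        p -= digit
        flips += 1
        if random.getrandbits(1):    # first heads ends the game
            return digit == 1, flips

trials = 10**5
results = [biased_coin_counting(2/3) for _ in range(trials)]
print(sum(accepted for accepted, _ in results) / trials)  # ~0.6667
print(sum(flips for _, flips in results) / trials)        # ~2.0
```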
The only semi-tough part is doing the base-2 long division to get from your fraction to its binary expansion, but you can just use an online calculator for that (or the short routine sketched below). The coolest part is that your expected number of flips is 2: you stop at the first heads, and a fair coin takes 2 flips on average to produce one.
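If you'd rather skip the online calculator, the long division itself is a short loop (a sketch, with exact integer arithmetic; the function name is mine): double the remainder at each step, and emit a 1 (subtracting the denominator) whenever it reaches the denominator.

```python
def binary_digits(num, den, k):
    """First k binary digits of num/den (assuming 0 < num < den),
    by base-2 long division on the remainder."""
    digits, r = [], num
    for _ in range(k):
        r *= 2
        if r >= den:
            digits.append(1)
            r -= den
        else:
            digits.append(0)
    return digits

print(binary_digits(1, 7, 9))   # [0, 0, 1, 0, 0, 1, 0, 0, 1]
print(binary_digits(2, 3, 6))   # [1, 0, 1, 0, 1, 0]
```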