In case anyone shared my confusion:
The while loop where we ensure that eps is small enough so that
bound > bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))
is technically necessary to ensure that bad1() doesn’t surpass bound, but it is immaterial in the limit. Solving
bound = bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))
gives
eps = (1/3) (1 - e^{ -(bound - bad1()) / (next - this) })
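To see the loop and the closed form agree, here is a minimal sketch with made-up numbers; p1, bad1, bound, next, and this are placeholders for the algorithm's actual state (with p1 = 2/3, so that 1 - p1 gives the 1/3 factor above):

```python
import math

# Hypothetical values standing in for the algorithm's state; the names
# p1, bad1, bound, next_, this come from the surrounding discussion.
p1 = 2.0 / 3.0           # assumed: the (1/3) factor suggests 1 - p1 = 1/3
bad1 = 10.0              # bad1() so far
bound = 10.5             # the budget bad1() must not surpass
next_, this = 1000, 900  # so (next_ - this) = 100

# The while loop from the text: shrink eps until the bound condition holds.
eps = (1.0 - p1) / 2.0
while not bound > bad1 + (next_ - this) * math.log((1 - p1) / (1 - p1 - eps)):
    eps /= 2.0

# Closed-form solution of the equality, solved for eps:
eps_star = (1 - p1) * (1 - math.exp(-(bound - bad1) / (next_ - this)))

# The loop stops at the first halved eps strictly below the exact threshold.
assert 0 < eps < eps_star
```

Since the log term is strictly increasing in eps, any eps below eps_star satisfies the strict inequality, which is why the loop terminates.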
which, using the log(1+x) ≈ x approximation, is about
(1/3) ((bound - bad1()) / (next - this)).
Then Scott’s comment gives the rest. I was worried that we seem to be taking the exponential of the error in our approximation. But Scott points out that this is not an issue: we can make (next - this) as big as we want, if necessary, without increasing bad1() at all, by guessing p1 for a very long time, until (bound - bad1()) / (next - this) is close enough to zero that the error is too small to matter.
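The shrinking error can be checked numerically. A small sketch, with made-up values for bound - bad1() and next - this, comparing the exact expression (1/3)(1 - e^{-y}) against the first-order approximation (1/3) y as the ratio y goes to zero:

```python
import math

# gap is a stand-in for bound - bad1(), held fixed while the horizon
# (a stand-in for next - this) grows, driving y = gap / horizon to zero.
gap = 0.5
for horizon in (10, 100, 1000, 10000):
    y = gap / horizon
    exact = (1.0 / 3.0) * (1.0 - math.exp(-y))
    approx = (1.0 / 3.0) * y
    # The relative error behaves like y/2, so it vanishes as horizon grows.
    print(horizon, abs(exact - approx) / exact)
```

This matches the point in the comment: the approximation error is multiplicative of order y/2, so by making the horizon long enough it can be driven below any tolerance.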