As the minimum error approaches zero, the probability that the next guess will reduce it becomes proportional to the minimum error itself. It's like trying to hit a target: the probability of a hit is proportional to the size of the target. This only applies when the error is close to zero, because then we can treat the probability distribution as essentially flat in that neighborhood, so we don't have to worry about the shape of the curve.
If the next guess does reduce the minimum error, then, on average, it will reduce the minimum error by half. As above, we’re treating the probability distribution as essentially flat.
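This conditional halving is easy to check empirically. Here's a minimal Monte Carlo sketch, under my own simplifying assumptions (guesses drawn uniformly on [0, 1], target at 0, so a guess's error is just its value); the original argument doesn't specify a distribution, this is just one concrete instance:

```python
import random

# Assumption for illustration: guesses are uniform on [0, 1] and the
# target is 0, so each guess's error equals its value.
random.seed(0)

ratios = []  # new_min / old_min, recorded whenever a guess improves the minimum
for _ in range(2000):
    minimum = 1.0
    for _ in range(500):
        guess = random.random()
        if guess < minimum:          # this guess reduces the minimum error
            ratios.append(guess / minimum)
            minimum = guess

mean_ratio = sum(ratios) / len(ratios)
print(mean_ratio)  # close to 0.5: an improving guess halves the error on average
```

The mean ratio comes out near 0.5, matching the flat-distribution argument: conditional on beating the current minimum, the new error is uniform between 0 and the old minimum.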
So, we expect that after some number of guesses, n, the minimum error is reduced by half. Since the error is now half as large, each guess is only half as likely to improve it, so we expect the next halving to take 2n more guesses. Assuming this is what happens, we then expect that after 4n more guesses, the minimum error is reduced by half again.
The error in this imagined playing out is approximately inversely proportional to the total number of guesses. That total goes from n to n+2n=3n, to n+2n+4n=7n, then 15n, 31n, etc. Each step approaches a doubling of the total number of guesses. And after each approximate doubling, the error is half what it was before.
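The 1/n scaling above can also be seen directly, under the same illustrative assumption (uniform guesses on [0, 1], target at 0): the expected minimum of n uniform draws is 1/(n+1), so multiplying the average minimum error by n should give a roughly constant value:

```python
import random

# Assumption for illustration: uniform guesses on [0, 1], target at 0.
# The expected minimum of n uniform draws is 1/(n+1), so the minimum
# error should fall roughly in proportion to 1/n.
random.seed(0)

def avg_min_error(n_guesses, trials=2000):
    """Average the minimum error over many independent runs of n guesses."""
    total = 0.0
    for _ in range(trials):
        total += min(random.random() for _ in range(n_guesses))
    return total / trials

for n in (100, 1000, 10000):
    print(n, n * avg_min_error(n))  # n * avg stays near 1: error ~ 1/n
```

Ten times as many guesses yields roughly one tenth the error, which is the same conclusion the doubling argument reaches, just read off directly.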
This is far from a proof. This is crude, fallible reasoning. It’s my best estimate, that’s all.