I figured that the standard deviation of the average of N guesses is proportional to 1/sqrt(N) in this comment.
I don’t think you should ‘guess’ that the minimum error is proportional to 1/N, since its relationship with N is exactly what we need to know. Let’s wait to see if someone knows.
… But do you realize that 1/N decreases faster than 1/sqrt(N), so that your guess would indicate that the average strategy would rarely win?
I don’t think you should ‘guess’ that the minimum error is proportional to 1/N, since its relationship with N is exactly what we need to know.
I mistakenly thought I had a qualitative argument that was too simple to bother spelling out. Unfortunately, it was also too simple to be correct. :-) However, see constant’s comment; my intuition appears to have been right after all.
… But do you realize that 1/N decreases faster than 1/sqrt(N), so that your guess would indicate that the average strategy would rarely win?
Yes. If my analysis is correct, the method would be very unlikely to win, but it would be much better than using the same method as the other competitors, precisely because 1/N decreases faster than 1/sqrt(N).
As the minimum error approaches zero, the probability that the next guess will reduce the minimum error becomes proportional to the minimum error itself—that’s because it’s like trying to hit a target, and the probability of hitting the target is proportional to the size of the target. This only applies to the error close to zero, because that allows us to treat the probability distribution as essentially flat in that neighborhood, so we don’t have to worry about the shape of the curve.
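This ‘hitting a smaller target’ claim is easy to sanity-check with a quick Monte Carlo sketch (my own illustration, not part of the argument: guesses uniform on [0, 1], target arbitrarily fixed at 0.5):

```python
import random

random.seed(0)

def p_improve(err, trials=200_000):
    """Estimate the probability that a fresh uniform guess on [0, 1]
    lands within `err` of a fixed interior target (here 0.5)."""
    target = 0.5
    hits = sum(abs(random.random() - target) < err for _ in range(trials))
    return hits / trials

# Halving the error window should roughly halve the hit probability,
# i.e. the probability of improving is proportional to the current error.
p1 = p_improve(0.04)
p2 = p_improve(0.02)
print(p1 / p2)  # close to 2
```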
If the next guess does reduce the minimum error, then, on average, it will reduce the minimum error by half. As above, we’re treating the probability distribution as essentially flat.
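The halving claim can likewise be checked under the same flat-distribution assumption; a sketch of my own (arbitrary target 0.5, arbitrary current minimum error 0.05):

```python
import random

random.seed(1)

err = 0.05     # current minimum error (arbitrary)
target = 0.5   # arbitrary interior target
improvements = []
while len(improvements) < 100_000:
    g = random.random()
    d = abs(g - target)
    if d < err:                 # this guess improves on the current best
        improvements.append(d)

# Conditional on improving, the new error is uniform on [0, err),
# so on average it is half the old error.
mean_new_err = sum(improvements) / len(improvements)
print(mean_new_err / err)  # close to 0.5
```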
So, we expect that after some number n of guesses, the minimum error is reduced by half. We expect that after 2n more guesses, the minimum error is halved again, and after 4n more guesses, halved yet again.
The error in this imagined playing out is approximately inversely proportional to the total number of guesses. The total goes from n to n+2n=3n, to n+2n+4n=7n, then 15n, 31n, etc. The ratio of successive totals approaches 2, so each stage approximately doubles the total number of guesses. And the error after each approximate doubling is half what it was before.
This is far from a proof. This is crude, fallible reasoning. It’s my best estimate, that’s all.
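For what it’s worth, the 1/N scaling this crude argument predicts does show up in a quick simulation (my own sketch, assuming guesses uniform on [0, 1] and a target of 0.5): doubling the number of guesses roughly halves the expected minimum error.

```python
import random

random.seed(2)
target = 0.5  # arbitrary interior target

def mean_min_error(n, trials=5000):
    """Average, over many trials, of the minimum error among n uniform guesses."""
    total = 0.0
    for _ in range(trials):
        total += min(abs(random.random() - target) for _ in range(n))
    return total / trials

e100 = mean_min_error(100)
e200 = mean_min_error(200)
print(e100 / e200)  # close to 2: doubling the guesses halves the minimum error
```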