Actually, with the assumptions you gave, as far as I can tell from memory and a quick look at Wikipedia, the standard deviation of the average of N guesses is proportional to 1/sqrt(N), so, whatever the average error of one guess is, the average error of the average of 16 guesses is 4 times smaller, making it a significantly better guess. This quantity is known as the standard error of the mean.
Great, I feel like we’re making good progress. (wisdom of the crowd..)
the standard deviation of the average of N guesses is proportional to 1/sqrt(N)
Yes, for example from here, if the standard deviation of the individual guesses is s, the standard deviation of the average of N guesses will be s / sqrt(N).
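The s/sqrt(N) claim is easy to sanity-check with a quick simulation (just a sketch, assuming i.i.d. roughly normal guesses; s = 10 and N = 16 are made-up numbers):

```python
# Sketch: check that the std of the mean of N guesses is about s / sqrt(N).
# Assumes guesses are i.i.d. normal with std s = 10 (hypothetical numbers).
import random
import statistics

random.seed(0)
s, N, trials = 10.0, 16, 20000

# Simulate many rounds, each producing the average of N guesses.
means = [statistics.fmean(random.gauss(0, s) for _ in range(N))
         for _ in range(trials)]
sd = statistics.stdev(means)
print(round(sd, 2))  # close to s / sqrt(16) = 2.5
```

With N = 16 the simulated spread of the averages comes out near 2.5, i.e. a quarter of the individual-guess spread, matching the standard-error formula.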
… And this represents the typical error of seven_and_sixes's strategy, within one standard deviation.
Now—to see if the strategy typically wins—we just need one more number: given N guesses, what is the expected minimum error of the N guesses? (That is, the average of the minimum of the set of differences between each guess and the mean.)
I would guess that this is proportional to 1/N, whereas the average method gives 1/sqrt(N), so the probability that you win is O(1/sqrt(N)). In reality it would be worse, since, if you were not the last to go, you would only have the average of M guesses, with M < N. However, the average person has a probability of winning of 1/(N+1), so your probability of winning is sqrt(N) times better than the average person’s, unless you can do even better with some other skill that you have. This analysis is complicated by exceptionally bad guessers and other average-takers, but not significantly. I accept your downvote because sixes was still lucky to win and did not acknowledge this.
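A rough way to test this kind of reasoning is a toy Monte Carlo (all numbers hypothetical): N ordinary players guess i.i.d. normally around the true value, and one extra player submits the average of their guesses; the winner is whoever lands closest.

```python
# Toy check: how often does the "submit the average" player beat N
# independent guessers? Assumes i.i.d. standard-normal guesses around
# a true value of 0 (a simplification, not the actual game).
import random
import statistics

random.seed(1)
N, trials = 15, 20000
avg_wins = 0
for _ in range(trials):
    guesses = [random.gauss(0, 1) for _ in range(N)]
    avg_guess = statistics.fmean(guesses)
    # The averager wins if their error beats every individual error.
    if abs(avg_guess) < min(abs(g) for g in guesses):
        avg_wins += 1
print(avg_wins / trials)  # well above the baseline 1/(N+1), but far from 1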
Oh wait. I think we’ve got some errors to fix. I won’t have time immediately, but I’ll edit my comment to reflect any changes you make. I knew what you meant, or you knew what I meant, but now it’s confusing..
I’m not sure what you mean. Looking back I did make some errors in my analysis, but I’m not particularly motivated to correct them, since I doubt my conclusion will qualitatively change, though if I got the right probability of winning, it would be a coincidence. Maybe I’ll feel more curious about the right answer another time.
I fixed that the standard deviation of the average of N guesses is proportional to 1/sqrt(N) in this comment.
I don’t think you should ‘guess’ that the minimum error is proportional to 1/N, since its relationship with N is exactly what we need to know. Let’s wait to see if someone knows.
… But do you realize that 1/N decreases faster than 1/sqrt(N), so that your guess would indicate that the average strategy would rarely win?
I don’t think you should ‘guess’ that the minimum error is proportional to 1/N, since its relationship with N is exactly what we need to know.
I mistakenly thought I had a qualitative argument that was too simple to bother spelling out. Unfortunately, it was also too simple to be correct. :-) However, see constant’s comment; my intuition appeared to be probably right after all.
… But do you realize that 1/N decreases faster than 1/sqrt(N), so that your guess would indicate that the average strategy would rarely win?
Yes. If my analysis is correct, the method would be very unlikely to win, but it would be much better than using the same method as the other competitors, precisely because 1/N decreases faster than 1/sqrt(N).
As the minimum error approaches zero, the probability that the next guess will reduce the minimum error becomes proportional to the minimum error itself—that’s because it’s like trying to hit a target, and the probability of hitting the target is proportional to the size of the target. This only applies to the error close to zero, because that allows us to treat the probability distribution as essentially flat in that neighborhood, so we don’t have to worry about the shape of the curve.
If the next guess does reduce the minimum error, then, on average, it will reduce the minimum error by half. As above, we’re treating the probability distribution as essentially flat.
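That conditional halving is easy to check in a toy model (a sketch, with arbitrary made-up numbers): with guesses uniform on [-1, 1] and a target of 0, a guess that beats the current record error lands roughly uniformly within it.

```python
# Toy check of the "halving" step: conditioned on beating the current
# record error, by what fraction does a new guess reduce it on average?
# Assumes a flat (uniform) distribution near the target, as in the text.
import random

random.seed(2)
record, ratios = 0.1, []
for _ in range(200000):
    g = abs(random.uniform(-1, 1))  # error of a fresh guess
    if g < record:
        ratios.append(g / record)   # fraction of the record that remains
print(round(sum(ratios) / len(ratios), 2))  # about 0.5
```

The average surviving fraction comes out near 0.5, i.e. a record-beating guess cuts the minimum error roughly in half, as claimed.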
So, we expect that after some number n of guesses, the minimum error is reduced by half. We expect that after 2n more guesses, the minimum error is reduced again by half. Assuming this is what happens, then we expect that after 4n more guesses, the minimum error is reduced by half again.
The reduction in error that we’re seeing in this imagined playing out is approximately inversely proportional to the number of guesses. The total number of guesses goes from n to n+2n=3n, to n+2n+4n=7n, etc. If we keep going, the total number of guesses becomes 15n, 31n, etc. This approaches a doubling of total guesses. And the error after each approximate doubling is half what it was before.
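The resulting 1/N scaling can also be sanity-checked directly in a toy model (again with made-up numbers: guesses uniform on [-1, 1], target 0). If the expected minimum error really goes like 1/N, then N times that expectation should settle near a constant.

```python
# Toy check: does the expected minimum error scale like 1/N?
# For uniform guesses on [-1, 1] and target 0, the minimum absolute
# error is the min of N uniforms on [0, 1], whose mean is 1/(N+1),
# so N * E[min error] should approach 1.
import random
import statistics

random.seed(3)
for N in (10, 40, 160):
    mins = [min(abs(random.uniform(-1, 1)) for _ in range(N))
            for _ in range(5000)]
    print(N, round(N * statistics.fmean(mins), 2))  # approaches 1 as N grows
```

The product is roughly constant across N, consistent with the inverse-proportionality argument above, at least for this flat toy distribution.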
This is far from a proof. This is crude, fallible reasoning. It’s my best estimate, that’s all.