By the way, one way of doing this is:
1. Make your best guess for N and check it.
2. If the test comes out “fake”, perform a binary search from 1 (or the highest known “true” value) to N.
3. If the test comes out “true”, double N and goto 1.
I don’t think there’s an optimal search algorithm for an infinite list of unknown distribution. Anyone know better?
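For concreteness, here is a rough Python sketch of that procedure. The setup it assumes (not spelled out above) is that the test answers “true” for every value below a hidden boundary and “fake” from the boundary onward, queried through a hypothetical is_true(n) predicate; the growth factor is left as a parameter so the “double N” step can be swapped for tripling or anything else.

```python
# Rough sketch, not from the original comment: assume the test answers
# "true" for every value strictly below a hidden boundary and "fake" from
# the boundary on, exposed as a hypothetical predicate is_true(n).
def find_boundary(is_true, first_guess=1, growth=2):
    """Return the smallest n for which is_true(n) is False."""
    lo = 1              # everything below lo is known to test "true"
    n = first_guess
    # Steps 1 and 3: grow the guess until the test comes out "fake".
    while is_true(n):
        lo = n + 1                      # n tested "true"; the boundary is above it
        n = max(n * growth, n + 1)      # "double N" when growth=2; 3 also works
    hi = n                              # first value known to test "fake"
    # Step 2: binary search between the highest known "true" value and hi.
    while lo < hi:
        mid = (lo + hi) // 2
        if is_true(mid):
            lo = mid + 1
        else:
            hi = mid
    return lo

# Example: hidden boundary at 37, so values 1..36 test "true".
print(find_boundary(lambda n: n < 37))   # -> 37
```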
That’s a good one, for some priors about N. Any definition of “better” or “optimal” will be with respect to a prior about N. That prior can have unbounded support and still produce good algorithms.
The obvious thing to tweak about your program is the “double N” constant; you could instead go to 3*N or something.
If you know your prior in detail, you’ll want to make your next guess to split the remaining probability weight evenly, not just the numeric search space. (Similar in concept to arithmetic coding.) With a poor understanding of the prior, though, this is a very minor detail.
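To make the mass-splitting idea concrete, here is a small sketch: instead of probing the numeric midpoint of the remaining range, probe the point that splits the remaining prior mass in half. The geometric prior and the helper name next_probe are assumptions chosen purely for illustration, not anything specified above.

```python
# Illustrative only: the geometric prior and the name next_probe are
# assumptions, not something specified in the comments above.
def next_probe(prior, lo, hi):
    """Probe that splits the prior mass on [lo, hi] as evenly as possible.

    prior(n) gives the prior probability that the hidden value is n.
    Returns the weighted median of [lo, hi] -- the probability-mass
    analogue of picking the numeric midpoint.
    """
    total = sum(prior(n) for n in range(lo, hi + 1))
    running = 0.0
    for m in range(lo, hi + 1):
        running += prior(m)
        if running >= total / 2:
            return m
    return hi

# Example prior: geometric, P(N = n) = (1 - p)**(n - 1) * p with p = 0.1.
p = 0.1
prior = lambda n: (1 - p) ** (n - 1) * p

# The plain numeric midpoint of [1, 100] is 50; the mass-splitting probe
# lands much lower because most of this prior's weight is on small N.
print(next_probe(prior, 1, 100))   # -> 7
```

With a nearly flat prior the two choices roughly coincide, which is why the distinction is a minor detail when the prior is poorly understood.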