Silas: Look, as soon as you find a 1 bit, you’ve solved the problem with certainty. You have no remaining doubt about the answer. And the expected number of queries until you find a 1 bit is O(1) (2, to be exact). Why? Because with each query, your probability of finding a 1 bit is 1/2. Therefore, the probability that you need t or more queries is (1/2)^(t-1). Since E[T] equals the sum over t ≥ 1 of Pr[T ≥ t], you get the geometric series 1 + 1/2 + 1/4 + ..., which sums to 2.
(Note: I was careful to say you succeed with certainty after an expected number of queries independent of n—not that there’s a constant c such that you succeed with certainty after c queries, which would be a different claim!)
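Here’s a quick Python sanity check of that constant (a toy sketch of my own; the function names are made up for illustration). It queries uniformly random positions of a string with exactly n/2 ones and averages how many queries it takes to hit a 1:

```python
import random

def queries_until_one(bits):
    """Query uniformly random positions (with replacement) until a 1 appears;
    return how many queries that took."""
    n = len(bits)
    t = 0
    while True:
        t += 1
        if bits[random.randrange(n)] == 1:
            return t

def average_queries(n=1000, trials=100_000):
    # A string with exactly n/2 ones: each query hits a 1 with probability 1/2,
    # so the query count is geometric and the average should come out near 2.
    bits = [1] * (n // 2) + [0] * (n - n // 2)
    random.shuffle(bits)
    return sum(queries_until_one(bits) for _ in range(trials)) / trials

print(average_queries())  # ~2.0, independent of n
```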
And yes, I absolutely assume that the adversary knows the algorithm, because your algorithm is a fixed piece of information! Once again, the point is not that the universe is out to get us; it’s that we’d like to succeed even if it is out to get us. In particular, we don’t want to assume that we know the probability distribution over inputs, or whether it might happen to be especially bad for our algorithm. Randomness is useful precisely for “smearing out” over the set of possible environments, when you’re completely ignorant about the environment.
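To make that adversary concrete, here’s a toy sketch (again, names and setup are mine, assuming the promise that half the bits are 1): a deterministic left-to-right scan versus uniform random queries, on an input that keeps the promise but stacks every 1 bit where the known deterministic order looks last.

```python
import random

def first_one_deterministic(bits):
    # Scan positions 0, 1, 2, ... in a fixed order the adversary knows.
    for t, b in enumerate(bits, start=1):
        if b == 1:
            return t
    return None  # no 1 bit at all

def first_one_randomized(bits):
    # Query uniformly random positions until a 1 appears.
    n, t = len(bits), 0
    while True:
        t += 1
        if bits[random.randrange(n)] == 1:
            return t

n = 1000
# Adversarial input: half the bits are still 1 (the promise holds), but they
# all sit in the positions the deterministic scan examines last.
worst_case = [0] * (n // 2) + [1] * (n // 2)

print(first_one_deterministic(worst_case))  # n/2 + 1 = 501 queries
trials = 100_000
avg = sum(first_one_randomized(worst_case) for _ in range(trials)) / trials
print(avg)  # still ~2.0: randomness "smears out" over the adversary's choices
```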
If you refuse to think this way, the price you pay for Bayesian purity is to give up the whole theory of randomized algorithms, with all the advances it’s led to even in deterministic algorithms, and to lack the conceptual tools even to ask basic questions like P versus BPP.
It feels strange to be explaining CS101 as if I were defending some embattled ideological claim—but it’s good to be reminded why it took people a long time to get these things right.
Will: Yeah, it’s 1⁄4, thanks. I somehow have a blind spot when it comes to constants. ;-)