Silas: Look, as soon as you find a 1 bit, you’ve solved the problem with certainty. You have no remaining doubt about the answer. And the expected number of queries until you find a 1 bit is O(1) (or 2, to be exact). Why? Because with each query, your probability of finding a 1 bit is 1/2. Therefore, the probability that you need t or more queries is (1/2)^(t-1). So you get a geometric series that sums to 2.
(Note: I was careful to say you succeed with certainty after an expected number of queries independent of n—not that there’s a constant c such that you succeed with certainty after c queries, which would be a different claim!)
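If anyone wants to check the arithmetic empirically, here’s a quick Python sketch (the function name, trial count, and instance layout are all my own choices, not part of the problem statement): it estimates the expected number of uniformly random queries before hitting a 1, on a string with exactly n/2 one bits.

```python
import random

def queries_until_one(n, trials=100_000):
    """Estimate E[#queries] for the randomized strategy:
    query a uniformly random index until a 1 bit is found."""
    # Fix an instance with exactly n/2 ones; against uniformly random
    # queries, where the ones sit makes no difference.
    bits = [0] * (n // 2) + [1] * (n // 2)
    total = 0
    for _ in range(trials):
        t = 1
        while bits[random.randrange(n)] == 0:
            t += 1
        total += t
    return total / trials

print(queries_until_one(1000))  # ~2.0, independent of n
```

Sampling indices without replacement would only do better, so the expectation of 2 is an upper bound either way.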
And yes, I absolutely assume that the adversary knows the algorithm, because your algorithm is a fixed piece of information! Once again, the point is not that the universe is out to get us—it’s that we’d like to succeed even if it is out to get us. In particular, we don’t want to assume that we know the probability distribution over inputs, and whether it might happen to be especially bad for our algorithm. Randomness is useful precisely for “smearing out” over the set of possible environments, when you’re completely ignorant about the environment.
If you refuse to think this way, the price you pay for Bayesian purity is to give up the whole theory of randomized algorithms, with all the advances it’s led to even in deterministic algorithms; and to lack the conceptual tools even to ask basic questions like P versus BPP.
It feels strange to be explaining CS101 as if I were defending some embattled ideological claim—but it’s good to be reminded why it took people a long time to get these things right.
Quick check: take an algorithm that tests bits 1, n/2+1, 3, n/2+3, 5, n/2+5, … and then continues alternating halves on the even indices. Of course, it stops at the first 1. And it’s obviously deterministic.
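For concreteness, here’s one way I’d write down that query order in Python (the function name and the 1-indexing are arbitrary choices on my part):

```python
def alternating_order(n):
    """The fixed query order described above: odd positions alternating
    between the two halves, then even positions the same way (1-indexed).
    For n = 8: [1, 5, 3, 7, 2, 6, 4, 8]."""
    order = []
    for start in (1, 2):                 # odd indices first, then even
        for i in range(start, n // 2 + 1, 2):
            order.append(i)              # index in the first half
            order.append(n // 2 + i)     # matching index in the second half
    return order
```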
But unless I’m mistaken, averaged over all possible instances of the problem, this algorithm has exactly the same average-case, best-case, and worst-case performance as the randomized one. (In fact, I think this holds for any fixed sequence of test indices, as long as it alternates between halves, such as 1, n, 2, n-1, 3, n-2, … Basically, my reasoning is that an arbitrary index sequence should have, on average, the same probability of finding the answer in k tests on a random problem instance that a random index sequence does.)
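Here’s a quick simulation of what I mean (the instance distribution, trial count, and names are all my own setup): over uniformly random instances with exactly n/2 ones, the fixed alternating order and a fresh random order seem to find the first 1 in the same expected number of tests.

```python
import random

def first_one(bits, order):
    """Number of tests until the first 1 under a given query order (1-indexed)."""
    for t, i in enumerate(order, start=1):
        if bits[i - 1] == 1:
            return t
    return len(order)

def compare(n=100, trials=50_000):
    # The fixed alternating-halves order from above, built inline.
    fixed = [j for start in (1, 2)
               for i in range(start, n // 2 + 1, 2)
               for j in (i, n // 2 + i)]
    fixed_total = rand_total = 0
    for _ in range(trials):
        bits = [1] * (n // 2) + [0] * (n // 2)
        random.shuffle(bits)                      # random instance, n/2 ones
        rand = random.sample(range(1, n + 1), n)  # fresh random query order
        fixed_total += first_one(bits, fixed)
        rand_total += first_one(bits, rand)
    print(fixed_total / trials, rand_total / trials)  # both ≈ 2

compare()
```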
I know you said it’s adversarial, and it’s clear that (assuming the adversary knows the sequence of tested indices) one can construct problem instances that take n/2+1 tests (the worst case), in which case the instance isn’t random. I just want to check whether I reasoned correctly.
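To make that construction concrete, a sketch (my own code; note the instance still has exactly n/2 ones, so the promise is respected):

```python
def adversarial_instance(n, order):
    """Zero out the first n/2 positions the algorithm will query,
    so its first 1 only shows up on query n/2 + 1."""
    bits = [1] * n                # exactly n/2 ones survive below
    for i in order[: n // 2]:
        bits[i - 1] = 0
    return bits

fixed = [1, 5, 3, 7, 2, 6, 4, 8]  # the alternating order for n = 8
bits = adversarial_instance(8, fixed)
tests = next(t for t, i in enumerate(fixed, 1) if bits[i - 1] == 1)
print(tests)  # 5, i.e. n/2 + 1
```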
If I’m right, then I think what you’re saying is exactly what Eliezer said: if you know the adversary can predict you, you’ve got no better option than to act randomly. (You just don’t do as badly in this case as in Eliezer’s example.) Conversely, if you can predict the adversary, then the random algorithm completely wastes that knowledge. (I.e., if you could guess more about the sequence, you could probably do better than an n/4+1 worst case, and if you can predict the adversary, then you don’t even need to test the bits, which is basically O(0).)