@ Will: Yes, you’re right. You can make a randomized algorithm that has the same worst-case performance as the deterministic one. (It may have slightly worse average-case performance than the algorithm you folks had been discussing previously, but that’s a tradeoff one can make.) My only point is that concluding the randomized one is necessarily better is far too strong a conclusion, given the evidence that has been presented in this thread so far.
But sure, you are correct that adding a random search is a cheap way to gain confidence that your algorithm isn’t accidentally negatively correlated with its inputs. So if you’re going to reuse the algorithm in many contexts, with many different input distributions, then randomization can help you achieve average performance more often than (some kinds of) determinism, which might occasionally have the bad luck to settle into worst-case performance (instead of average) for some of those distributions.
But that’s not the same as saying that it has better worst-case complexity. (What it’s actually saying is that the randomized one has better average-case complexity for the distributions you’re concerned about.)
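To make that concrete, here’s a minimal sketch. Quicksort’s pivot choice is just my stand-in example, since the thread hasn’t said which algorithm is actually at issue. The deterministic pivot hits its O(n^2) worst case on every already-sorted input, while the randomized pivot has the same worst case but no fixed input distribution can reliably provoke it:

```python
import random

def quicksort(xs, choose_pivot):
    """Out-of-place quicksort; the pivot strategy is injected."""
    if len(xs) <= 1:
        return xs
    p = choose_pivot(xs)
    less    = [x for x in xs if x < p]
    equal   = [x for x in xs if x == p]
    greater = [x for x in xs if x > p]
    return quicksort(less, choose_pivot) + equal + quicksort(greater, choose_pivot)

# Deterministic pivot: first element. An already-sorted input (a perfectly
# plausible real-world distribution) is negatively correlated with this
# choice and forces the O(n^2) worst case every single time.
def first_pivot(xs):
    return xs[0]

# Randomized pivot: the O(n^2) worst case still exists, but no fixed input
# distribution can reliably trigger it, so the expected cost is O(n log n)
# no matter how the inputs happen to be distributed.
def random_pivot(xs):
    return random.choice(xs)

data = list(range(500))  # sorted: adversarial for first_pivot, harmless for random_pivot
assert quicksort(data, random_pivot) == data
```

Both variants have the same worst-case complexity; the randomized one just makes that worst case (nearly) independent of the input distribution, which is exactly the average-case claim above.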