Of course the advice is good. In that particular case, Eliezer was exaggerating; his intention was to say that randomness wasn’t necessary in order to achieve the best result, not that it is always possible to achieve a result better than what randomness achieves. Why he didn’t respond, I don’t know. But it is certainly true that the very logical rudeness Eliezer is talking about is common on Less Wrong as well as elsewhere, and I do find it annoying that practically everyone talks about it as something that only other people do.
I suppose that must have happened sometime, but next time you find yourself postulating this as part of an explanation, please stop, notice, and feel a little confused.
Actually, that goes for everyone in this thread deconstructing my supposed mistake, based on (a) a misquotation, and (b) not realizing that every algorithm which can be “improved by randomizing” can in fact be improved further by derandomizing (except in cases where an intelligent opponent can predict a given set of bits if they are produced by a “deterministic” process, but not predict the same bits if they are produced by a “random” process). I sometimes make mistakes, but I also sometimes don’t, and if you can hold both possibilities in mind without it destroying your internal critic, it will probably help in the long run.
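For a concrete instance of the derandomization claim above (my own illustration; the thread names no specific algorithm), consider MAX-CUT: a uniformly random side-assignment cuts each edge with probability 1/2, so it achieves at least half the edges in expectation, and the standard method of conditional expectations turns that into a deterministic greedy procedure with the same guarantee. A minimal Python sketch:

```python
import random

def random_cut(vertices, edges):
    """Randomized baseline: assign each vertex a side by a fair coin flip."""
    side = {v: random.random() < 0.5 for v in vertices}
    return sum(1 for u, w in edges if side[u] != side[w])

def derandomized_cut(vertices, edges):
    """Deterministic version via the method of conditional expectations."""
    nbrs = {v: [] for v in vertices}
    for u, w in edges:
        nbrs[u].append(w)
        nbrs[w].append(u)
    side = {}
    for v in vertices:
        on_true = sum(1 for u in nbrs[v] if side.get(u) is True)
        on_false = sum(1 for u in nbrs[v] if side.get(u) is False)
        # Place v opposite the majority of its already-placed neighbors;
        # this keeps the conditional expected cut from ever dropping,
        # so the final deterministic cut is at least |edges| / 2.
        side[v] = on_false >= on_true
    return sum(1 for u, w in edges if side[u] != side[w])

vs = list(range(6))
es = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
print(random_cut(vs, es), derandomized_cut(vs, es))
```

The greedy choice never lets the conditional expected cut fall below half the edges, so the deterministic version matches the randomized bound on every input; the randomness only bought analytical convenience.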
Ok, but there are some random algorithms which cannot be improved by derandomizing, precisely because the random algorithm does just as well as any deterministic algorithm: for example, if there is some event that has an exact 50% chance of happening, all algorithms, random or not, do equally well at guessing whether it will happen or not.
In other words, such a case doesn’t satisfy the condition that the algorithm can be “improved by randomizing.”
I take it that such an algorithm couldn’t be improved in accuracy, but I expect any randomized algorithm would be more cycle-intensive than a constant rule of “guess that event X will happen”—which will perform just as well.
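A quick simulation makes both points at once (a toy sketch of mine; the event, trial count, and seed are arbitrary): on an exact-50% event, the coin-flipping guesser and the constant rule land at the same accuracy, and the constant rule never calls the random number generator.

```python
import random

random.seed(0)
trials = 100_000
outcomes = [random.random() < 0.5 for _ in range(trials)]  # the exact-50% event

# Randomized guesser: flips a coin for every prediction.
random_hits = sum((random.random() < 0.5) == o for o in outcomes)
# Constant rule: always guess the event happens; no RNG calls at all.
constant_hits = sum(outcomes)

print(random_hits / trials, constant_hits / trials)  # both come out near 0.5
```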
Eliezer:
the claim was that when you can improve an algorithm by randomizing it, you can improve it further by derandomizing it
Unknowns:
randomness wasn’t necessary in order to achieve the best result, not that it is always possible to achieve a result better than what randomness achieves
I think a claim that a randomized algorithm is never better than some deterministic one is interesting and probably true (possibly not, which is why it’s interesting). Is Eliezer really making an even stronger claim than this? Is any precise claim being made? The meaning of “improving an algorithm by adding (or removing) randomization” is pretty vague. Improving in what way? Isn’t any non-randomized algorithm technically a member of “randomized algorithms”? If not, can’t it be made technically randomized at no asymptotic cost?
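The last two questions can be made concrete with a trivial sketch (my framing, not from the thread): any deterministic algorithm is already a degenerate member of the class of randomized algorithms, one that reads its random input and ignores it, at no asymptotic cost.

```python
def deterministic_guess():
    """The constant rule: always predict the event happens."""
    return True

def technically_randomized_guess(random_bit):
    """Accepts a random bit and ignores it: same output and the same
    asymptotic cost, yet formally a 'randomized algorithm'."""
    return True
```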