From near the end of Eliezer's "Worse than Random" (linked below):

As a general principle, on any problem for which you know that a particular unrandomized algorithm is unusually stupid—so that a randomized algorithm seems wiser—you should be able to use the same knowledge to produce a superior derandomized algorithm. If nothing else seems obvious, just avoid outputs that look “unrandomized”! If you know something is stupid, deliberately avoid it! (There are exceptions to this rule, but they have to do with defeating cryptographic adversaries—that is, preventing someone else’s intelligence from working on you. Certainly entropy can act as an antidote to intelligence!)
In the case of Robert’s problem, since ROB is specified to have access to your algorithm, it effectively “moves second”, which does indeed place it in a position to “use its intelligence on you”. So it should be unsurprising that a mixed strategy can beat out a pure strategy. Indeed, this is why mixed strategies are sometimes optimal in game-theoretic situations: when adversaries are involved, being unpredictable has its advantages.
Of course, this does presume that you have access to a source of randomness that even ROB cannot predict. If you do not have access to such a source, then you are in fact screwed. But then the problem becomes inherently unfair, and thus of less interest.
(From a certain perspective, even this is compatible with Eliezer’s main message: for an adversary to be looking over your shoulder, “moving second” relative to you, and anti-optimizing your objective function, is indeed a special case of things being “worse than random”. The problem is that producing a “superior derandomized algorithm” in such a case requires inverting the “move order” of yourself and your adversary, which is not possible in many scenarios.)
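To make the mixed-strategy point concrete, here is a minimal simulation sketch in Python. The normal threshold distribution and the specific number pairs are my own illustrative choices, not part of the original problem statement; any threshold distribution with full support on the reals works the same way.

```python
import random

def randomized_guess(revealed):
    """Mixed strategy: draw a fresh threshold from a full-support
    distribution; guess "larger" iff the revealed number exceeds it."""
    return "larger" if revealed > random.gauss(0.0, 1.0) else "smaller"

def deterministic_guess(revealed):
    """Pure strategy with a fixed, publicly known threshold of 0."""
    return "larger" if revealed > 0.0 else "smaller"

def play_round(a, b, strategy):
    """ROB has committed to a < b; one of the two is revealed at random.
    Returns True iff the strategy labels the revealed number correctly."""
    revealed, truth = random.choice([(a, "smaller"), (b, "larger")])
    return strategy(revealed) == truth

n = 200_000

# ROB knows the algorithm (including the threshold distribution) but not
# the realized threshold. Even with a and b squeezed close together, the
# win rate is 1/2 + [F(b) - F(a)]/2, which stays strictly above 1/2.
wins = sum(play_round(0.4, 0.6, randomized_guess) for _ in range(n))
print(f"mixed strategy: {wins / n:.3f}")  # ~0.535

# Against the pure strategy, ROB "moves second": knowing the threshold, it
# picks both numbers on the same side of it, so the guess comes out the same
# either way and is right only when the larger number happens to be shown.
wins = sum(play_round(1.0, 2.0, deterministic_guess) for _ in range(n))
print(f"pure strategy:  {wins / n:.3f}")  # ~0.500
```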
In such problems, it is usually assumed that your solution has to work (in this case, “work” = better than 50% accuracy) always, even in the worst case, when all unknowns are against you.
I don’t dispute what you say. I just suggest that the confusing term “in the worst case” be replaced by the more accurate phrase “supposing that the environment is an adversarial superintelligence who can perfectly read all of your mind except bits designated ‘random’”.
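As one way to see what "better than 50% always" buys you here: for the normal-threshold strategy sketched above, the exact win probability against any committed pair (a, b) is 1/2 + [Φ(b) − Φ(a)]/2, so an adversary who moves second can squeeze the guarantee toward 1/2 but never to it. A quick check, again with my own illustrative numbers:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def win_probability(a, b):
    """Exact win rate of the normal-threshold mixed strategy against a
    committed pair a < b: 1/2 + (phi(b) - phi(a)) / 2."""
    return 0.5 + (phi(b) - phi(a)) / 2.0

# ROB's best response is to shrink the gap, anti-optimizing the guarantee;
# it decays toward 1/2 but remains strictly above it for any finite pair.
for gap in (1.0, 0.1, 0.01, 0.001):
    print(f"gap {gap:>6}: P(win) = {win_probability(0.0, gap):.6f}")
```

This is also why the worst-case guarantee has the slightly odd form "strictly above 1/2" rather than "1/2 + ε for some fixed ε": the infimum over ROB's choices is exactly 1/2, just never attained.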
Note that Eliezer believes the opposite: https://www.lesswrong.com/posts/GYuKqAL95eaWTDje5/worse-than-random.
The original problem doesn’t say that ROB has access to your algorithm, or that ROB wants you to lose.
https://www.lesswrong.com/posts/AAqTP6Q5aeWnoAYr4/?commentId=WJ5hegYjp98C4hcRt