I think that this is a better de-randomization: presumably there is some set S of numbers that has been observed to pass the Diehard tests reliably. The de-randomized algorithm is simply to read off S. Computationally, this is cheaper than generating your own random set and reading it off.
ETA: And, if I’m not mistaken, if S has already been observed to pass Diehard tests reliably, then it is even more likely to pass them in the future than is a newly generated random set.
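For concreteness, a minimal sketch of this "derandomization by replay" idea, assuming a stored, pre-vetted sequence S; the values below are stand-ins, not real test-passing data:

```python
# Minimal sketch: replay a stored, pre-vetted sequence S instead of calling a
# random number generator. The values here are hypothetical placeholders.
S = [0.5488, 0.7152, 0.6028, 0.5449, 0.4237]

def replayed_numbers(s):
    """Yield the stored sequence in place of freshly generated random numbers."""
    for x in s:
        yield x

gen = replayed_numbers(S)
print([next(gen) for _ in range(3)])  # -> [0.5488, 0.7152, 0.6028]
```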
The uncertainty principle would appear to pose a problem for the thesis. Secure hardware random number generators are “really difficult” to predict. That looks like a case where you can't, in practice, beat a random strategy, whereas a random strategy beats all kinds of stupid strategies.
The “any algorithm that can be improved by randomization” qualifier is redundant and does no useful conceptual work. Idiotic algorithms that can be improved by adding randomness are two-a-penny.
The “can always be improved further by derandomization” claim is not true in practice. Sometimes you just can't find a way to do any better than the random algorithm.
I think the correct principle in this area is that a deterministic algorithm can always do at least as well as a random one—provided its workings can be kept secret.
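One minimal sketch of that principle, assuming the secret can in fact be kept: derive the "random" choices deterministically from a secret key via HMAC in counter mode. Anyone holding the key can reproduce the stream exactly; without it, prediction should be about as hard as attacking the underlying hash. SECRET_KEY is my own placeholder, and this is an illustration rather than a production generator.

```python
import hashlib
import hmac

SECRET_KEY = b"keep-this-away-from-the-opponent"  # hypothetical secret

def secret_stream(key, n_blocks):
    """Yield n_blocks of 32 bytes, deterministically derived from the key."""
    for counter in range(n_blocks):
        yield hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()

# Deterministic: the same key reproduces the same "random-looking" bytes,
# so the algorithm's behaviour is fixed, yet opaque to anyone without the key.
assert list(secret_stream(SECRET_KEY, 2)) == list(secret_stream(SECRET_KEY, 2))
```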
The main problem case for this principle occurs when you face an opponent with a lot of resources and cryptographic skills—but with sufficient care you should be able to keep them busy until the universal heat death.
That’s an “improvement” that’s unlikely to happen before universal heat death.
Improving an algorithm by adding randomness is simple in many cases, while improving it further by derandomizing may be totally impractical and ineffectual.
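As one concrete illustration (my example, not the parent's): turning plain quicksort into randomized quicksort takes a single line, and the result runs in expected O(n log n) time on any input, whereas the known derandomization, deterministic median-of-medians pivot selection, is considerably more code and typically slower in practice.

```python
import random

def randomized_quicksort(xs):
    """Quicksort with a random pivot; the one random choice is the whole 'improvement'."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)  # the single line of added randomness
    less    = [x for x in xs if x < pivot]
    equal   = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([5, 3, 8, 1, 9, 2]))  # -> [1, 2, 3, 5, 8, 9]
```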
Certainly. That, however, is a mere matter of limited computing power, not of theory.