To generalize Peter’s example: a typical deterministic algorithm has low Kolmogorov complexity, and therefore so does its worst-case input (a short description of the algorithm plus “the length-n input on which it performs worst” is itself a short description of that input), which therefore has non-negligible probability under complexity-based priors. The only possible solutions to this problem I can see are (a concrete sketch follows the list):
1. add randomization
2. redesign the deterministic algorithm so that it has no worst-case input
3. do a cost-benefit analysis to show that the cost of doing either 1 or 2 is not justified by the expected utility of avoiding the worst-case performance of the original algorithm, then continue to use the original deterministic algorithm
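To make the low-complexity-worst-case point concrete, here is a minimal sketch using the textbook illustration of quicksort with a deterministic first-element pivot (chosen for familiarity; I’m not claiming this is Peter’s exact example). The worst-case input is just the already-sorted sequence, which a tiny program can print:

```python
import random

def quicksort_comparisons(xs, choose_pivot):
    """Count comparisons made by quicksort under a given pivot rule."""
    count = 0
    stack = [list(xs)]  # explicit stack, so the O(n)-deep worst case doesn't hit Python's recursion limit
    while stack:
        seg = stack.pop()
        if len(seg) <= 1:
            continue
        pivot = choose_pivot(seg)
        count += len(seg) - 1  # each non-pivot element is compared against the pivot
        stack.append([x for x in seg if x < pivot])
        stack.append([x for x in seg if x > pivot])
    return count

first_element_pivot = lambda seg: seg[0]

n = 2000
worst_case_input = list(range(n))    # for the first-element pivot the worst case is just "0, 1, ..., n-1":
                                     # a tiny program prints it, so its Kolmogorov complexity is low
typical_input = random.sample(range(n), n)

print(quicksort_comparisons(worst_case_input, first_element_pivot))  # ~ n^2/2, about 2 million
print(quicksort_comparisons(typical_input, first_element_pivot))     # ~ n log n, a few tens of thousands
```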
The main argument in favor of 1 is that its cost is typically very low, so why bother with 2 or 3? I think Eliezer’s counterargument is that 1 only works if we assume the algorithm has access, in addition to the input string, to a truly random string drawn from a uniform distribution, whereas in reality we only have access to one input, namely sensory input from the environment, and the so-called random bits are just bits from the environment that seem to be random.
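To make the hinge of that counterargument explicit: a “randomized” algorithm is really a deterministic algorithm of two arguments, the actual input plus a tape of supposedly random bits, and its worst-case guarantee is a statement about that tape being uniform and independent of the input. A sketch, reusing quicksort_comparisons and worst_case_input from above (random_pivot_from_tape is a name I made up for illustration):

```python
import random

def random_pivot_from_tape(tape):
    """Pivot rule driven by an explicit tape (an iterator of 0/1 bits)."""
    def choose(seg):
        idx = 0
        for _ in range(len(seg).bit_length()):  # read enough bits to cover the segment
            idx = (idx << 1) | next(tape)
        return seg[idx % len(seg)]
    return choose

# With a tape that really is uniform and independent of the input, the sorted
# input behaves like any other and the worst case becomes astronomically unlikely:
uniform_tape = (random.getrandbits(1) for _ in iter(int, 1))
print(quicksort_comparisons(worst_case_input, random_pivot_from_tape(uniform_tape)))    # ~ n log n again

# But if the "random" bits are just more environment bits that happen to correlate
# badly with the input (here, all zeros), the guarantee evaporates:
degenerate_tape = iter(lambda: 0, 1)
print(quicksort_comparisons(worst_case_input, random_pivot_from_tape(degenerate_tape)))  # back to ~ n^2/2
```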
My counter-counterargument is to view randomization as a form of division of labor. We use one very complex and sophisticated algorithm to establish a lower bound on the Kolmogorov complexity of a source of randomness in the environment; after that, many other, simpler algorithms can draw on this source to cheaply and dramatically reduce their probability of hitting a worst-case input.
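A rough sketch of what that division of labor might look like. The names and the check are mine, purely illustrative: crude_randomness_check uses incompressibility under zlib, which is at best weak, necessary-but-not-sufficient evidence about Kolmogorov complexity; the serious work would live in the “very complex and sophisticated algorithm” above. The point is the structure: one expensive vetting step, then many cheap consumers.

```python
import os
import zlib

def crude_randomness_check(pool: bytes, min_ratio: float = 0.99) -> bool:
    """One-off, relatively expensive vetting of an entropy pool.

    If zlib compresses the pool substantially, its Kolmogorov complexity is
    definitely low, so reject it; surviving the check is only weak evidence of
    high complexity, which is why the real version would be far more work.
    """
    return len(zlib.compress(pool, 9)) >= min_ratio * len(pool)

def make_pivot_rule(pool: bytes):
    """Cheap per-algorithm consumer: read successive bytes of the vetted pool to pick pivots."""
    position = 0
    def choose(seg):
        nonlocal position
        byte = pool[position % len(pool)]
        position += 1
        return seg[byte % len(seg)]  # simplistic index choice, fine for a sketch
    return choose

# Expensive step, done once:
pool = os.urandom(1 << 16)  # stand-in for "a source of randomness in the environment"
assert crude_randomness_check(pool), "entropy source failed vetting"

# Cheap step, repeated by many simple algorithms, e.g. the quicksort above:
# quicksort_comparisons(worst_case_input, make_pivot_rule(pool))
```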
Or to put it another way, before randomization, the environment does not need to be a malicious superintelligence for our algorithms to hit worst-case inputs. After randomization, it does.
(I agree that this is one of many valid cases where we might want to just throw in randomization to save thinking time, as opposed to doing a detailed derandomized analysis.)