Hypothesis: there exist policies that are good at steering the world according to arbitrary objectives and that have low Kolmogorov complexity.
It is systems that implement these policies efficiently that we should be scared of. Systems implementing policies without low Kolmogorov complexity would be computationally intractable, so in practice we could only end up with systems that approximate those policies, and such approximations would not actually be good at steering the world according to arbitrary objectives. Shallow pattern-recognition systems are of this form.
Systems that don’t manage to implement the policy efficiently would mostly not be computationally tractable: every policy can in principle be represented as a lookup table, but a lookup table over real-world states would certainly be intractable. Any program that can be run in practice and that implements the policy would be essentially as dangerous as the shortest program encoding it.
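To make the lookup-table point concrete, here is a minimal sketch (the toy parity policy and all names are my own illustration, not from the hypothesis above). It shows the same policy written two ways: as an explicit table whose size grows as 2^n in the number of state bits, and as a short program whose description length (a rough proxy for Kolmogorov complexity) stays constant as n grows.

```python
from itertools import product

n = 16  # number of state bits; the table below has 2**16 = 65,536 entries

# Representation 1: an explicit lookup table mapping every state to an
# action. Its size is exponential in n, so for real-world state spaces
# this representation is computationally intractable.
lookup_table = {
    state: state.count("1") % 2  # toy policy: act on the parity of the state
    for state in ("".join(bits) for bits in product("01", repeat=n))
}

# Representation 2: a short program computing the identical policy.
# Its description length does not grow with n at all.
def policy(state: str) -> int:
    return state.count("1") % 2

# Both representations define exactly the same policy.
assert all(policy(s) == a for s, a in lookup_table.items())
print(f"lookup table entries: {len(lookup_table):,}; program: a few lines")
```

Already at n = 16 the table has 65,536 entries while the program is unchanged; at real-world scale only the compressed, low-description-length representation remains runnable.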