I don’t think this question has much intrinsic importance, because almost all realistic learning procedures involve a strong simplicity prior (e.g. weight sharing in neural networks).
Does this mean you do not expect daemons to occur in practice because they are too complicated?
No, I think a simplicity prior clearly leads to daemons in the limit.