Well, the ideal simplicity prior you should use for Solomonoff induction is the simplicity prior our own universe was drawn from.
Since we have no idea, at this present time, why the universe is simple to begin with, we have no idea what Solomonoff prior we should be using. We are left with reflective renormalization—learning about things like, “The human prior says that mental properties seem as simple as physical ones, and that math is complicated; but actually it seems better to use a prior that’s simpler than the human-brain-as-interpreter, so that Maxwell’s Equations come out simpler than Thor.” We look for simple explanations of what kinds of “simplicity” our universe prefers; that’s renormalization.
Does the underspecification of the Solomonoff prior bother me? Yes, but it simply manifests the problem of induction in another form—there is no evasion of this issue; anyone who thinks they’re avoiding induction is simply hiding it somewhere else. And the good answer probably depends on answering the wrong question, “Why does anything exist in the first place?” or “Why is our universe simple rather than complicated?” Until then, as I said, we’re left with renormalization.
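The renormalization point above can be made concrete with a toy sketch. Real Solomonoff induction weights each hypothesis by 2^−K, where K is the length of its shortest program on a universal Turing machine—which is uncomputable. The sketch below (all names hypothetical) substitutes raw string length for program length, which is exactly the kind of underspecified choice of interpreter the passage is worried about: whose encoding counts as “short” determines whether Maxwell beats Thor.

```python
def simplicity_prior(hypotheses):
    """Toy simplicity prior: weight each hypothesis-encoding by
    2^-(length of its string), then renormalize to sum to 1.

    This stands in for the uncomputable Solomonoff weight 2^-K(h);
    the choice of encoding plays the role of the 'interpreter' that
    reflective renormalization is trying to improve on.
    """
    weights = {h: 2.0 ** -len(h) for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Under this (arbitrary) encoding, the shorter physical law gets
# nearly all the prior mass relative to the longer anthropomorphic story.
prior = simplicity_prior(["F=ma", "Thor-is-angry-at-the-sky"])
```

Note that an encoding chosen by a different interpreter (say, one for whom “Thor” is a single primitive symbol) would reverse the verdict, which is why the prior is underspecified until we settle which notion of simplicity our universe actually prefers.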
Silas, in this post I’m contrasting ideal but traditional Science—the idealized version of what Max Planck might have believed in, long before Solomonoff induction—with Bayes. (Also, I’ve never communicated with Dawkins.)
RI, be suspicious if you think you understand something about evolution that most evolutionary biologists don’t. (I don’t know about biologists who just sit around looking at cells all day.)