Measuring optimization power requires a prior over environments. Anti-inductive minds optimize effectively in anti-inductive worlds.
(Yes, this partially contradicts my previous comment. And yes, the idea of a world or a proper probability distribution that’s anti-inductive in the long run doesn’t make sense as far as I can tell; but you can still define a prior/measure that orders any finite set of hypotheses/worlds however you like.)
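That last point can be made concrete with a toy sketch (the hypothesis names and weights here are my own illustration, not anything from the comment): over any finite set of hypotheses, nothing stops you from defining a proper prior that ranks a "complexity" ordering in reverse, i.e. an anti-inductive prior on that finite set.

```python
# Minimal sketch: a proper prior over a finite hypothesis set that orders
# the hypotheses however we like -- here, the reverse of an assumed
# simple-to-complex ordering, so the prior favors complexity.
# (hypotheses and weights are illustrative assumptions)

hypotheses = ["h_simple", "h_medium", "h_complex"]  # assumed simple -> complex

# Assign strictly decreasing weights along the REVERSED list, so the most
# complex hypothesis receives the largest prior probability.
weights = {h: 2.0 ** -(i + 1) for i, h in enumerate(reversed(hypotheses))}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

# It is still a proper probability distribution (non-negative, sums to 1),
# yet it imposes whatever ranking we chose on this finite set.
assert abs(sum(prior.values()) - 1.0) < 1e-12
assert prior["h_complex"] > prior["h_medium"] > prior["h_simple"]
```

As the comment notes, this trick only works for finite sets; it says nothing about a distribution that stays anti-inductive over an infinite hypothesis space in the long run.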