I’m uncomfortable with assessing a system by whether it “holds rational beliefs” or “infers Newton’s laws”: these are specific questions the system doesn’t need to explicitly answer in order to optimize efficiently. They might matter in the context of a specific cognitive architecture, but they are nowhere to be found if the cognitive architecture doesn’t maintain an interface to them as an invariant. If it can weave Bayesian structure in the physical substrate straight through to the goal, there need not be any anthropomorphic natural categories along the way.