Humans form approximately accurate models of how new drugs, food, injuries, etc. will affect their minds, and respond accordingly.
If you coerce AIXI with sufficiently tricky rewards (and that is exactly what our evolved body does with our developing brain) to form ‘approximately accurate models’, AIXI will also respond accordingly. Except
They don’t always do so
When it doesn’t do so either, because it has learned that it can get around this coercion. The same goes for humans, who may also come to think that they can get around their body and go to heaven, take drugs...
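For concreteness, here is a sketch of Hutter's standard AIXI action-selection rule (notation roughly as in Hutter's formulation: $U$ is the universal monotone Turing machine, $\ell(q)$ the length of program $q$, $m$ the horizon, and $o_k r_k$ the observation-reward percepts). The point is simply that the rewards $r_k$ arriving through the percept channel are the only quantity AIXI maximizes, so any 'coercion' of its behaviour has to be delivered through that channel, just as ours is delivered through the evolved body:

$$
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \;\cdots\; \max_{a_m} \sum_{o_m r_m} \bigl[r_t + \cdots + r_m\bigr] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$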
If human adults didn’t grasp death any better than AIXI does, they’d routinely drop anvils on their heads literally, not ‘so to speak’.
AIXI wouldn’t either, if you coerced it the way our body (and society) coerces us.
This is for the same reason AIXI does: symbolic reasoning about reality.
What do you mean? What would be the alternative to ‘symbolic reasoning’?
I’m not saying that there is an alternative. I mean that symbolic reasoning needs some base: axioms, goal states. Where do you get these from? In the human brain they take the form of stabilizing neural nets, and so are approximations of vague, interrelated representations of reality. But you have no cognitive access to this fuzzy-to-symbolic relation, only to its mentalese correlate: the symbols you reason with. Whatever you derive from the symbols is separated from reality in the same way AIXI is by its Cartesian barrier.
I disagree.
Added: See http://lesswrong.com/lw/ii5/baseline_of_my_opinion_on_lw_topics/