I don’t understand why being an embedded agent makes Bayesian reasoning impossible. My intuition is that a hypothesis doesn’t have to be perfectly correlated with reality to be useful. Furthermore, suppose you conceived of hypotheses as conjunctions of elementary hypotheses; then I see no reason why you couldn’t perform Bayesian reasoning of the form “hypothesis X is one of the constituents of the true hypothesis”, even if the agent can’t perfectly describe the true hypothesis.
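To make the suggestion concrete, here’s a minimal toy sketch of what I have in mind (the hypothesis names, the `likelihood` function, and its numbers are all made up for illustration): full hypotheses are conjunctions of elementary hypotheses, we update over the full hypotheses with ordinary Bayes, and then marginalize to get the probability that a given elementary hypothesis is a constituent of the true one.

```python
import itertools

elementary = ["A", "B", "C"]  # elementary hypotheses (illustrative)

# Full hypotheses: every conjunction (subset) of the elementary hypotheses.
full_hypotheses = [
    frozenset(combo)
    for r in range(len(elementary) + 1)
    for combo in itertools.combinations(elementary, r)
]

# Uniform prior over full hypotheses.
prior = {h: 1.0 / len(full_hypotheses) for h in full_hypotheses}

def likelihood(observation, hypothesis):
    """Toy likelihood: observing 'x' is more likely if 'A' is a constituent."""
    if observation == "x":
        return 0.9 if "A" in hypothesis else 0.1
    return 0.1 if "A" in hypothesis else 0.9

def update(prior, observation):
    """Standard Bayes update over the full hypothesis space."""
    unnormalized = {h: p * likelihood(observation, h) for h, p in prior.items()}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

posterior = update(prior, "x")

# P(elementary hypothesis "A" is a constituent of the true hypothesis):
# marginalize over all full hypotheses that contain "A".
p_A = sum(p for h, p in posterior.items() if "A" in h)
print(f"P(A is a constituent | observation) = {p_A:.3f}")  # 0.900
```

Of course this enumerates the full hypothesis space, which an embedded agent can’t do; the point is only that the query “is X a constituent?” is well-formed without the agent ever writing down the true hypothesis in full.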
Also, “the agent is larger/smaller than the environment” is not very clear, so I think it would help if you clarified what those terms mean.
The next part just went live, and it’s about exactly that: http://intelligence.org/embedded-models