Cognitive dissonance. Two probability distributions (generative models, in Active Inference parlance) that do not cohere, i.e., cannot be combined into a single probability distribution. See a concrete example in this comment:
I hope from the exposition above it is clear that you couldn’t quite factor Active Inference into a subsystem of the brain/mind (unless under a “multiple Active Inference models with context switches” model of the mind, but, as I noted above, I think that would be a rather iffy model to begin with). I would rather say: Active Inference still serves as a “framework” model with certain “extra Act Inf” pieces (such as access consciousness and memory) “attached” to it, plus other models (distributed control, and maybe some others I haven’t thought about deeply) that don’t cohere with Active Inference at all, and thus we can only resort to modelling the brain/mind “as either one or another”, getting predictions, and comparing them.
Here, predicting a system (e.g., a brain) in terms of distributed control theory, and in terms of Active Inference, would lead to incoherent inferences (i.e., predictions, or probability distributions about the future states of the system). And choosing which prediction to take would require extra contextual information (hence, intrinsic contextuality).
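To make “not combinable into a single probability distribution” concrete, here is a minimal sketch of my own (not from the quoted comment): three binary “predictions” whose pairwise distributions are each internally consistent, yet no single joint distribution reproduces all of them. The perfectly-anticorrelated pairs are an illustrative assumption; the check is posed as a linear-programming feasibility problem using scipy.

```python
# Minimal sketch (illustrative, not from the post): can three pairwise
# predictions over binary variables be glued into one joint distribution?
# We answer by testing LP feasibility over the 8 joint outcomes.

import itertools
import numpy as np
from scipy.optimize import linprog

outcomes = list(itertools.product([0, 1], repeat=3))  # joint outcomes (a, b, c)

def pairwise_constraint_rows(i, j, target):
    """Equality rows forcing the (i, j) marginal of the joint to equal `target`.
    `target[(x, y)]` is the required probability of (var_i = x, var_j = y)."""
    rows, rhs = [], []
    for x, y in itertools.product([0, 1], repeat=2):
        rows.append([1.0 if (o[i], o[j]) == (x, y) else 0.0 for o in outcomes])
        rhs.append(target[(x, y)])
    return rows, rhs

# Each pair is claimed to be perfectly anticorrelated (a classic no-go case):
anticorrelated = {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.0}

A_eq, b_eq = [[1.0] * len(outcomes)], [1.0]   # probabilities sum to 1
for i, j in [(0, 1), (1, 2), (0, 2)]:
    rows, rhs = pairwise_constraint_rows(i, j, anticorrelated)
    A_eq += rows
    b_eq += rhs

res = linprog(c=np.zeros(len(outcomes)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(outcomes))
print("joint distribution exists:", res.success)  # False: the marginals don't cohere
```

The LP is infeasible: each pairwise prediction is coherent on its own, but they cannot be glued into one joint model. That is the flavour of incoherence (and of intrinsic contextuality) meant above, where the “extra contextual information” decides which partial model to consult.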
Contextuality is closely related to the boundedness of cognition in physical agents (boundedness of the representable models, of memory, and of the time and energy resources dedicated to inference, etc.). Without these limitations, you could perform Solomonoff induction and be fine; enter AIXI. The problem is that Solomonoff induction is incomputable.
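For contrast, here is a toy sketch of my own (not AIXI or Solomonoff induction proper) of what a resource-bounded stand-in looks like: the hypothesis class is a hard-coded, finite list of repeating bit patterns (an assumption purely for illustration), weighted by 2^(-description length). The ideal version would mix over all computable hypotheses, which is exactly what no physical agent can do.

```python
# Toy sketch (illustrative only): a *bounded* Solomonoff-style predictor.
# True Solomonoff induction mixes over all computable hypotheses and is
# incomputable; here the resource bound is a hard-coded finite class.

from fractions import Fraction

# Hypothesis: "the data is this pattern repeated forever".
# Description length is crudely taken to be the pattern length in bits.
PATTERNS = ["0", "1", "01", "10", "001", "011", "0110"]

def likelihood(pattern, data):
    """P(data | hypothesis): 1 if data is a prefix of the repeated pattern, else 0."""
    repeated = (pattern * (len(data) // len(pattern) + 1))[:len(data)]
    return 1 if repeated == data else 0

def predict_next_bit(data):
    """Posterior-weighted probability that the next bit is '1'."""
    weights = {p: Fraction(1, 2 ** len(p)) * likelihood(p, data) for p in PATTERNS}
    total = sum(weights.values())
    if total == 0:
        return Fraction(1, 2)  # no surviving hypothesis: fall back to a coin flip
    p_one = sum(w for p, w in weights.items()
                if p[len(data) % len(p)] == "1")
    return p_one / total

print(predict_next_bit("010101"))  # 0: only "01" survives, so the next bit is '0'
```

Everything interesting happens in the choice of the finite hypothesis class: the restriction to what is representable within memory, time, and energy budgets is exactly the kind of boundedness referred to above, and it is what the ideal, incomputable mixture never has to confront.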