messy evolved animal brains don’t track probability and utility separately the way a cleanly-designed AI could.
Side-note: a cleanly-designed AI could do this, but it isn’t obvious to me that this is actually the optimal design choice. Insofar as the agent is ultimately optimizing for utility, you might want epistemology to be shaped according to considerations of valence (relevance to goals) up and down the stack. You pay attention to, and form concepts about, things in proportion to their utility-relevance.
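A minimal toy sketch of what that might look like (the feature names, relevance numbers, and attention-budget framing below are my own illustrative assumptions, not anything pinned down above): an agent with a fixed attention budget spends belief-updating effort on each feature of the world in proportion to that feature's estimated relevance to its goals, so goal-relevant features end up represented more precisely than goal-irrelevant ones.

```python
import numpy as np

# Toy sketch (illustrative assumptions only): "attention" is operationalized
# as the number of noisy observations the agent spends estimating each hidden
# feature, allocated in proportion to assumed utility-relevance.

rng = np.random.default_rng(0)

FEATURES = ["food_location", "predator_sign", "leaf_color", "cloud_shape"]

# How much each feature is assumed to matter for the agent's goals.
utility_relevance = np.array([0.50, 0.40, 0.08, 0.02])

# Valence-shaped epistemology: split a fixed attention budget by relevance.
ATTENTION_BUDGET = 100
samples_per_feature = np.maximum(
    1, np.round(ATTENTION_BUDGET * utility_relevance / utility_relevance.sum())
).astype(int)

true_values = rng.normal(size=len(FEATURES))

def estimate(true_value: float, n_samples: int, noise: float = 1.0) -> float:
    """Average n_samples noisy observations of a hidden value."""
    return float(np.mean(true_value + rng.normal(scale=noise, size=n_samples)))

for name, rel, n, truth in zip(FEATURES, utility_relevance,
                               samples_per_feature, true_values):
    est = estimate(truth, n)
    print(f"{name:15s} relevance={rel:.2f} samples={n:3d} "
          f"error={abs(est - truth):.3f}")

# Expected pattern: goal-relevant features get more samples and hence lower
# estimation error; goal-irrelevant features are only coarsely tracked.
```

The same weighting could in principle be pushed further down the stack (e.g. which concepts get formed at all, not just how precisely they are estimated); this snippet only illustrates the attention-allocation layer.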