If by “ought” claims you mean things to which we assign truth values but which aren’t derivable from is-statements, then I agree that humans require such beliefs to function. Maybe we could describe the choice of a universal Turing machine as such a belief for a Solomonoff inductor.
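To gesture at where that commitment lives formally, here’s a rough sketch using the standard textbook presentation of the Solomonoff prior (the notation $U$ for the reference machine and $\ell$ for program length is just the conventional one, not anything from your comment):

$$M_U(x) \;=\; \sum_{p \,:\, U(p) = x^{*}} 2^{-\ell(p)}$$

where $U(p) = x^{*}$ means program $p$ makes $U$ output something beginning with $x$. Two universal machines $U$ and $V$ agree only up to a multiplicative constant (roughly $M_U(x) \ge 2^{-c_{UV}} M_V(x)$, with $c_{UV}$ the length of an interpreter for $V$ on $U$), and nothing inside the framework tells the inductor which machine to pick; the choice functions like a prior commitment rather than something derived from the data.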
If by “ought” statements you mean the universally compelling truths of moral realism, then no: it seems straightforward to produce counterexample thinkers that would not be compelled. As far as I can tell, the things you’re talking about don’t even set a specific course of action for the thing believing them; they have no necessary function beyond the epistemic.
There’s a third way of thinking where norms are just rules for achieving a certain kind of result optimally or at least reliably.
I think there’s some dangerous reasoning here around the idea of “why.” If I believe that a plate is on the table, I don’t need to know anything at all about my visual cortex to hold that belief. The explanation is not part of the belief, nor is it inseparably attached to it, nor is it necessary for having the belief; it’s a human thing we call an explanation because it satisfies a human desire for a story about what is being explained.
Nonetheless, your visual cortex must do certain things reliably for you to be able to perceive.