There’s nothing serial about utility maximisation!
I didn’t say there was. I said that humans needed to switch to slow serial
processing in order to do it, because our brains aren’t set up to do it in parallel.
I think this indicates something about where the problem lies. You are apparently imagining an agent consciously calculating utilities. That has nothing to do with what proponents of the utility framework are talking about.
When humans don’t consciously calculate, the actions they take are much harder to fit into a utility-maximizing framework, what with inconsistencies cropping up everywhere.
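To make the inconsistency claim concrete, here is a minimal sketch, assuming the only data are an agent's observed pairwise choices (the function name and the drink examples are hypothetical). A real-valued utility function can reproduce the choices only if the preference graph is acyclic; a cycle such as tea over coffee, coffee over water, water over tea would require u(tea) > u(coffee) > u(water) > u(tea), a contradiction.

```python
# Minimal sketch (illustrative, not from the discussion above): observed
# choices admit a utility function only if the preference graph is acyclic.

def fits_a_utility_function(preferences):
    """preferences: list of (better, worse) pairs observed in choices.
    Returns True iff the pairs admit a consistent utility ranking,
    i.e. the preference graph contains no cycle."""
    graph = {}
    for better, worse in preferences:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())

    visited, on_path = set(), set()

    def has_cycle(node):
        # Depth-first search; a node revisited on the current path is a cycle.
        if node in on_path:
            return True
        if node in visited:
            return False
        visited.add(node)
        on_path.add(node)
        cyclic = any(has_cycle(nxt) for nxt in graph[node])
        on_path.discard(node)
        return cyclic

    return not any(has_cycle(n) for n in graph)

# Transitive choices fit; a cycle (the kind of inconsistency at issue) does not.
print(fits_a_utility_function([("tea", "coffee"), ("coffee", "water")]))   # True
print(fits_a_utility_function([("tea", "coffee"), ("coffee", "water"),
                               ("water", "tea")]))                         # False
```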
No, I’m not imagining that conscious calculation is essential. I said it’s what a human would have to do in order to actually calculate utilities, since we don’t have dedicated utility-calculating hardware.
Ah—OK, then.
Whether those inconsistencies crop up depends on the utility-maximizing framework you are talking about: some frameworks are more general than others, and some are really very general.
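On the "really very general" end of that spectrum: if nothing restricts what utility may depend on, then any behaviour at all maximizes some utility function, for instance one that assigns 1 to whatever the agent actually does in each situation and 0 to everything else. A minimal sketch under that assumption (all names hypothetical); note that even the cyclic chooser from the earlier sketch counts as a maximizer here, which is how the most general frameworks absorb the inconsistencies rather than being refuted by them.

```python
# Minimal sketch of the "really very general" end of the spectrum:
# given ANY agent policy, construct a utility function that the agent
# maximizes by definition. Generality is bought at the cost of
# predictive content. All names here are illustrative.

def rationalizing_utility(policy):
    """Return u(situation, action) that `policy` trivially maximizes:
    1 for the action the agent actually takes, 0 otherwise."""
    def u(situation, action):
        return 1 if action == policy(situation) else 0
    return u

# An "inconsistent" policy: cyclic pairwise choices over three drinks.
def cyclic_policy(menu):
    return {("tea", "coffee"): "tea",
            ("coffee", "water"): "coffee",
            ("water", "tea"): "water"}[menu]

u = rationalizing_utility(cyclic_policy)
for menu in [("tea", "coffee"), ("coffee", "water"), ("water", "tea")]:
    chosen = max(menu, key=lambda a: u(menu, a))
    assert chosen == cyclic_policy(menu)  # the agent "maximizes" u everywhere
    print(menu, "->", chosen)
```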