Possibly. I’m not arguing that a utility-maximizing agent would be simpler,
Good. ;-)
Only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive.
Sure. But at that point, the “simplicity” of using utility functions disappears in a puff of smoke, as you need to design a metacognitive architecture to go with it.
One of the really elegant things about the way brains actually work is that the metacognition is “all the way down”, and I’m rather fond of such architectures. (My predicate dispatcher, for instance, uses rules to understand rules, in the same sort of Escherian level-crossing bootstrap.)
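To make the “rules to understand rules” idea concrete, here is a minimal sketch of that kind of level-crossing design, under simplifying assumptions. This is not the actual predicate dispatcher referred to above (e.g. PEAK-Rules/RuleDispatch); the names `PredicateDispatcher`, `add_rule`, and `choose_rule` are invented for illustration. The point is only that the policy for resolving conflicts between rules is itself a dispatcher, so meta-rules are written with the same machinery as ordinary rules.

```python
class PredicateDispatcher:
    """A generic function whose methods are (predicate, implementation) pairs."""

    def __init__(self):
        self.rules = []  # list of (predicate, implementation) pairs

    def add_rule(self, predicate, implementation):
        self.rules.append((predicate, implementation))

    def __call__(self, *args):
        matches = [impl for pred, impl in self.rules if pred(*args)]
        if not matches:
            raise LookupError("no applicable rule for %r" % (args,))
        if len(matches) == 1:
            return matches[0](*args)  # unambiguous: no meta-level needed
        # Level-crossing step: ambiguity among rules is resolved by dispatching
        # on the candidate rules themselves, via the same mechanism.
        return choose_rule(matches, args)(*args)


# The rule-selection policy is just another dispatcher; adding a rule to it
# changes how every other dispatcher resolves conflicts.
choose_rule = PredicateDispatcher()

# Base policy, so the bootstrap terminates: prefer the most recently added
# of the matching rules.
choose_rule.add_rule(lambda matches, args: True,
                     lambda matches, args: matches[-1])


if __name__ == "__main__":
    describe = PredicateDispatcher()
    describe.add_rule(lambda n: isinstance(n, int),
                      lambda n: "an integer")
    describe.add_rule(lambda n: isinstance(n, int) and n < 0,
                      lambda n: "a negative integer")

    print(describe(5))   # one rule applies -> "an integer"
    print(describe(-3))  # two rules apply; choose_rule picks the later one
                         # -> "a negative integer"
```

The single-match shortcut is what grounds the recursion here; a fuller system would presumably need a more careful base case, but the sketch shows how the rule-resolution layer can be made of the same stuff as the rules it governs.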