How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices?
The number could be the standard deviation of the probability distribution for the utility (the mean being the expected utility, which you would use for sorting purposes).
So if you (“you” being the linear-utility-maximizing agent) have two paths of action whose expected utilities are close, but with a lot of uncertainty, it could be worth collecting more information to try to narrow down your probability distributions.
It seems that a utility-maximizing agent could be in a state that might fairly be described as “indecisive”.
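Something like this rough sketch, maybe (the Option class, the k threshold, and the numbers are all invented for illustration, not anyone’s actual design):

```python
# Toy sketch: each option's utility estimate is a probability distribution
# summarized by its mean (expected utility) and standard deviation (uncertainty).
# Sort by mean; if the top two means are closer than their combined uncertainty,
# the "rational" move may be to gather more information rather than commit.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_utility: float  # mean of the utility distribution
    utility_sd: float        # standard deviation of the utility distribution

def choose_or_defer(options, k=1.0):
    """Return the best option, or None to signal "collect more information".

    `k` is an arbitrary threshold: defer when the gap between the top two
    expected utilities is smaller than k times their combined uncertainty.
    """
    ranked = sorted(options, key=lambda o: o.expected_utility, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    gap = best.expected_utility - runner_up.expected_utility
    combined_sd = (best.utility_sd ** 2 + runner_up.utility_sd ** 2) ** 0.5
    if gap < k * combined_sd:
        return None  # "indecisive": the expected utilities are too close to call
    return best

# Two plans with close means but large uncertainty -> defer.
plans = [Option("plan A", 10.0, 4.0), Option("plan B", 9.5, 5.0)]
print(choose_or_defer(plans))  # None -> go collect more information first
```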
It seems that a utility-maximizing agent could be in a state that might fairly be described as “indecisive”.
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Human brains, OTOH, represent all this stuff in a single layer. We can consider actions, meta-actions, and meta-meta-actions in the same process without skipping a beat.
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Possible; I’m not arguing that a utility-maximizing agent would be simpler, only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive. Even if you have a simple utility function that gives you the utility of a world state, you might still have a lot of uncertainty about the current state of the world and about how your actions will impact the future. It seems very reasonable to represent that uncertainty one way or another; in some cases the most rational action, from a strictly utility-maximizing point of view, is to defer the decision and acquire more information, even at a cost.
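To make the “even at a cost” part concrete, here is a back-of-the-envelope sketch (the distributions and the information cost are pure inventions): the agent defers exactly when the expected value of resolving its uncertainty exceeds what the extra information would cost.

```python
# Toy value-of-information calculation: compare acting now on current
# expectations against acting after the uncertainty has been resolved,
# and defer whenever the expected gain from knowing exceeds the cost.
import random

random.seed(0)

def sample_utilities():
    # Hypothetical model of the agent's uncertainty: each action's true
    # utility is drawn from a normal distribution (mean, standard deviation).
    u_a = random.gauss(10.0, 4.0)  # action A: expected 10, quite uncertain
    u_b = random.gauss(9.5, 5.0)   # action B: expected 9.5, quite uncertain
    return u_a, u_b

def expected_value_of_perfect_information(n=100_000):
    act_now = 0.0       # utility of committing to the better-looking action (A)
    act_informed = 0.0  # utility if the uncertainty were resolved before choosing
    for _ in range(n):
        u_a, u_b = sample_utilities()
        act_now += u_a                 # on current expectations we would pick A
        act_informed += max(u_a, u_b)  # with perfect info we pick the true best
    return (act_informed - act_now) / n

evpi = expected_value_of_perfect_information()
cost_of_information = 1.0  # made-up price of deferring and investigating
print(f"EVPI ~ {evpi:.2f}; worth deferring: {evpi > cost_of_information}")
```

If the expected gain comes out above the cost, “defer and investigate” is itself the utility-maximizing action.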
Possible; I’m not arguing that a utility-maximizing agent would be simpler,
Good. ;-)
Only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive.
Sure. But at that point, the “simplicity” of using utility functions disappears in a puff of smoke, as you need to design a metacognitive architecture to go with it.
One of the really elegant things about the way brains actually work is that the metacognition is “all the way down”, and I’m rather fond of such architectures. (My predicate dispatcher, for instance, uses rules to understand rules, in the same sort of Escherian level-crossing bootstrap.)
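For flavor, a toy version of that level-crossing trick (nothing like the real dispatcher’s code, just to make “rules to understand rules” concrete): the same rule table that holds ordinary rules also holds the meta-rules that decide how the ordinary ones get combined.

```python
# Toy illustration of "rules to understand rules": ordinary rules and the
# meta-rules that govern their combination live in the same table, and the
# dispatcher consults itself to find out how to dispatch.

class Dispatcher:
    def __init__(self):
        self.rules = []  # (topic, predicate, action) triples

    def add_rule(self, topic, predicate, action):
        self.rules.append((topic, predicate, action))

    def applicable(self, topic, subject):
        return [a for (t, p, a) in self.rules if t == topic and p(subject)]

    def dispatch(self, topic, subject):
        actions = self.applicable(topic, subject)
        # The combination strategy is not hard-coded: it is found by
        # dispatching on the "combine" topic, using the same rule table.
        combiners = self.applicable("combine", topic)
        combine = combiners[0] if combiners else (lambda acts, s: acts[0](s))
        return combine(actions, subject)

d = Dispatcher()
# Ordinary rules about numbers.
d.add_rule("describe", lambda n: n % 2 == 0, lambda n: f"{n} is even")
d.add_rule("describe", lambda n: n > 100,    lambda n: f"{n} is big")
# A meta-rule: how to combine competing "describe" answers (join them all).
d.add_rule("combine", lambda topic: topic == "describe",
           lambda actions, n: ", ".join(a(n) for a in actions))

print(d.dispatch("describe", 202))  # -> "202 is even, 202 is big"
```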