It is trivial: there is some set of behaviours associated with those (usually facial expressions), so you just assign them high utility under the conditions involved.
No, I mean the behaviors of uncertainty itself: seeking more information, trying to find other ways of ranking, inventing new approaches, questioning whether one is looking at the problem in the right way...
The triggering conditions for this type of behavior are straightforward in a multidimensional tolerance calculation, so a multi-valued agent can notice when it is confused or uncertain.
How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices? How do you know whether maybe none of the choices on the table are acceptable?
AFAICT, the entire notion of a cognitive architecture based on “pick options by utility” is based on a bogus assumption that you know what all the options are in the first place! (i.e., a nice frictionless plane assumption to go with the spherical cow assumption that humans are economic agents.)
(Note that in contrast, tolerance-based cognition can simply hunt for alternatives until satisficing occurs. It doesn’t have to know it has all the options, unless it has a low tolerance for “not knowing all the options”.)
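A minimal sketch, in Python, of the kind of tolerance-based, satisficing loop described above; the dimensions, thresholds, and function names are illustrative assumptions, not a description of any actual architecture:

```python
import random

# Illustrative tolerances on several dimensions; a candidate "satisfices"
# when every dimension falls inside its tolerance. (Made-up numbers.)
TOLERANCES = {"cost": 0.2, "risk": 0.1, "effort": 0.3}

def within_tolerance(candidate, tolerances):
    return all(candidate[d] <= tolerances[d] for d in tolerances)

def propose_candidate():
    """Stand-in for whatever process generates the next alternative."""
    return {d: random.random() for d in TOLERANCES}

def satisficing_search(max_proposals=1000):
    """Hunt for alternatives until one satisfices, or give up.

    The loop never needs to know the full option set; "no acceptable
    option found yet" is itself a detectable condition that can trigger
    more search (or a re-framing of the problem).
    """
    for _ in range(max_proposals):
        candidate = propose_candidate()
        if within_tolerance(candidate, TOLERANCES):
            return candidate
    return None

if __name__ == "__main__":
    print(satisficing_search())
```

Nothing in this loop ranks all options against each other; it only checks whether the current candidate is acceptable.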
How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices?
The number could be the standard deviation of the probability distribution for the utility (the mean being the expected utility, which you would use for sorting purposes).
So if you (“you” being the linear-utility-maximizing agent) have two paths of action whose expected utilities are close, but with a lot of uncertainty, it could be worth collecting more information to try to narrow down your probability distributions.
It seems that a utility-maximizing agent could be in a state that could be qualified as “indecisive”.
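As a rough illustration of that idea (the numbers, the overlap rule, and the “gather more information” pseudo-action are illustrative assumptions, not a worked-out decision theory), such an agent could carry a mean and a standard deviation per option and treat “the top options overlap too much” as the trigger for deferring the choice:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    mean: float  # expected utility (used for ranking)
    std: float   # spread of the utility distribution (our uncertainty)

def choose(options, overlap_threshold=1.0):
    """Pick the best option by expected utility, unless the top two are
    too close relative to their combined uncertainty, in which case
    report indecision, i.e. "go collect more information first"."""
    ranked = sorted(options, key=lambda o: o.mean, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    gap = best.mean - runner_up.mean
    combined_std = (best.std ** 2 + runner_up.std ** 2) ** 0.5
    if gap < overlap_threshold * combined_std:
        return "gather more information"  # the "indecisive" state
    return best.name

# Same expected utilities, different uncertainty, different behaviour:
print(choose([Option("A", 10.0, 4.0), Option("B", 9.5, 4.0)]))  # defer
print(choose([Option("A", 10.0, 0.1), Option("B", 9.5, 0.1)]))  # pick A
```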
It seems that a utility-maximizing agent could be in a state that could be qualified as “indecisive”.
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Human brains, OTOH, represent all this stuff in a single layer. We can consider actions, meta-actions, and meta-meta-actions in the same process without skipping a beat.
But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!
Possible, I’m not arguing that a utility-maximizing agent would be simpler, only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive. Even if you have a simple utility function that gives you the utility of a world state, you might still have a lot of uncertainty about the current state of the world, and how your actions will impact the future. It seems very reasonable to represent that uncertainty one way or the other; in some cases the most rational action from a strictly utility-maximizing point of view is to defer the decision and acquire more information, even at a cost.
Possible, I’m not arguing that a utility-maximizing agent would be simpler,
Good. ;-)
Only that an agent whose preferences are encoded in a utility function (even a “simple” one like “number of paperclips in existence”) could be indecisive.
Sure. But at that point, the “simplicity” of using utility functions disappears in a puff of smoke, as you need to design a metacognitive architecture to go with it.
One of the really elegant things about the way brains actually work is that the metacognition is “all the way down”, and I’m rather fond of such architectures. (My predicate dispatcher, for instance, uses rules to understand rules, in the same sort of Escherian level-crossing bootstrap.)
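A toy illustration of that kind of level-crossing (a simplified sketch for this discussion, not the actual predicate dispatcher): the rules that arbitrate between ordinary rules can live in the same table and go through the same dispatch loop:

```python
# A single flat rule table. "Meta" rules that arbitrate between other
# rules live in the same table and go through the same dispatcher.
RULES = []

def add_rule(predicate, action):
    RULES.append((predicate, action))

def dispatch(subject):
    matches = [(p, a) for (p, a) in RULES if p(subject)]
    if len(matches) > 1:
        # Conflict: re-enter the same dispatcher with a description of
        # the conflict; whichever meta rule matches decides the winner.
        return dispatch({"conflict": matches, "subject": subject})
    return matches[0][1](subject) if matches else None

# Ordinary rules.
add_rule(lambda x: isinstance(x, int), lambda x: f"int: {x}")
add_rule(lambda x: isinstance(x, int) and x < 0, lambda x: f"negative int: {x}")

# A rule about rules, in the same table: on a conflict, crudely prefer
# the most recently added match (standing in for "most specific").
add_rule(
    lambda x: isinstance(x, dict) and "conflict" in x,
    lambda x: x["conflict"][-1][1](x["subject"]),
)

print(dispatch(5))   # int: 5
print(dispatch(-3))  # negative int: -3
```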
The options utility is assigned to are the agent’s possible actions—all of them—at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be “in the table”—the table includes all possible actions.
The options utility is assigned to are the agent’s possible actions—all of them—at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be “in the table”—the table includes all possible actions.
Not if it’s limited to motor fibers, it doesn’t. You’re still ignoring meta-cognition (you dodged that bit of my comment entirely!), let alone the part where an “action” can be something like choosing a goal.
If you still don’t see how this model is to humans what a sphere is to a cow (i.e. something nearly, but not quite entirely unlike the real thing), I really don’t know what else to say.
I didn’t ignore non-motor actions—that is why I wrote “mostly”.
You may find it useful to compare with a chess or go computer. They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
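A made-up sketch of what “assigning utility to a mental task as well as to moves” might look like; the numbers and the “think longer” action are purely illustrative:

```python
# Illustrative only: score ordinary moves and, optionally, a "mental"
# action in the same table, then pick the highest-utility entry.
def best_action(move_utilities, include_mental_actions=True,
                expected_gain_from_thinking=0.4, time_cost=0.1):
    table = dict(move_utilities)  # e.g. {"e4": 0.30, "d4": 0.25}
    if include_mental_actions:
        # Utility of deliberating = expected improvement in the eventual
        # move, minus the cost of the time spent thinking.
        table["think longer"] = (max(move_utilities.values())
                                 + expected_gain_from_thinking - time_cost)
    return max(table, key=table.get)

print(best_action({"e4": 0.30, "d4": 0.25}))                                # think longer
print(best_action({"e4": 0.30, "d4": 0.25}, include_mental_actions=False))  # e4
```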
You may find it useful to compare with a chess or go computer.
In other words, a sub-human intelligence level. (Sub-animal intelligence, even.)
They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
You’re still avoiding the point. You claimed utility was a good way of modeling humans. So, show me a nice elegant model of human intelligence based on utility maximization.
Like I already explained, utility functions can model any computable agent. Don’t expect me to produce the human utility function, though!
Utility functions are about as good as any other model. That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
Yes, at the level of a giant look-up table. At that point it is not a useful abstraction.
A giant look-up table can model any computable agent as well. Utility functions have the potential advantage of explicitly providing a relatively concise representation, though. If you can obtain a compressed version of your theory, that is good.
That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I’m not even asking you to model a full human or even the teeniest fraction of one. Just show me how to manage metacognitive behaviors (of the types discussed in this thread) using your model “compute utility for all possible actions and then pick the best.”
Show me how that would work for behaviors that affect the selection process, and that should be sufficient to demonstrate that utility function-based behavior isn’t completely worthless as a basis for creating a “thinking” intelligence.
(Note, however, that if in the process of implementing this, you have to shove the metacognition into the computation of the utility function, then you are just proving my point: the utility function at that point isn’t actually compressing anything, and is thus as useless a model as saying “everything is fire”.)
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I have previously described the “wrapping” in question in some detail here.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions for this reason.
I have previously described the “wrapping” in question in some detail here.
Well, that provides me with enough information to realize that you don’t actually have a way to make utility functions into a reduction or simplification of the intelligence problem, so I’ll stop asking you to produce one.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions
The argument that “utility-based systems can be made that aren’t that much more complex than just doing whatever you could’ve done in the first place” is like saying that your new file format is awesome because it only uses a few bytes more than an existing similar format, to represent the exact same information… and without any other implementation advantages!
Thanks, but I’ll pass.
Simply wrap the I/O of the non-utility model, and then assign the (possibly compound) action the agent will actually take in each timestep utility 1 and assign all other actions a utility 0, and then take the highest-utility action in each timestep.
(from the comment you linked)
I’m not sure I understand—is this something that gives you an actual utility function that you can use, say, to get the utility of various scenarios, calculate expected utility, etc.?
If you have an AI design to which you can provide a utility function to maximize (Instant AI! Just add Utility!), it seems that there are quite a few things that AI might want to do with the utility function that it can’t do with your model.
So it seems that you’re not only replacing the utility function, but also the bit that decides which action to do depending on that utility function. But I may have misunderstood you.
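For concreteness, here is a minimal sketch of the “wrap the I/O” construction quoted above (the interfaces and names are assumptions made for illustration): wrapping an arbitrary policy so that it becomes “utility maximization” in the trivial 1/0 sense.

```python
def wrap_as_utility_agent(policy, all_actions):
    """Turn an arbitrary policy (observation -> action) into a
    "utility-maximizing" agent in the trivial sense quoted above."""

    def utility(observation, action):
        # Utility 1 for whatever the wrapped policy would actually do,
        # utility 0 for every other action.
        return 1 if action == policy(observation) else 0

    def act(observation):
        # "Maximize utility": take the highest-utility action.
        return max(all_actions, key=lambda a: utility(observation, a))

    return utility, act

ACTIONS = ["left", "right", "wait"]

def thermostat(temp):
    # An arbitrary non-utility rule standing in for "any other model".
    return "left" if temp < 20 else "right"

utility, act = wrap_as_utility_agent(thermostat, ACTIONS)
print(act(15))              # left  (same as the wrapped policy)
print(utility(15, "wait"))  # 0
```

In this sketch, at least, the constructed function only reproduces the wrapped policy’s next-action choices, which is the limitation the questions above are probing.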