You may find it useful to compare with a chess or go computer. They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
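(For illustration only: the kind of move-level utility assignment described above might look roughly like the sketch below. Every name in it is a placeholder for this discussion, not any real engine’s API.)

```python
# Rough, hypothetical sketch of "assigning utilities to moves on the board":
# each candidate move is scored by an ordinary evaluation function, and the
# engine's own reasoning process is never assigned a utility at all.

def choose_move(position, legal_moves, apply_move, evaluate):
    """Return the legal move leading to the highest-valued resulting position."""
    best_move, best_score = None, float("-inf")
    for move in legal_moves:
        score = evaluate(apply_move(position, move))  # utility of the board state
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# Toy usage: "positions" are integers, "moves" are increments, and the
# evaluation function simply prefers larger numbers.
print(choose_move(0, [-1, 2, 5], lambda pos, m: pos + m, lambda pos: pos))  # -> 5
```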
You may find it useful to compare with a chess or go computer.
In other words, a sub-human intelligence level. (Sub-animal intelligence, even.)
They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones—to what extent it is useful to do so depends on the modelling needs of you and the system.
You’re still avoiding the point. You claimed utility was a good way of modeling humans. So, show me a nice elegant model of human intelligence based on utility maximization.
Like I already explained, utility functions can model any computable agent. Don’t expect me to produce the human utility function, though!
Utility functions are about as good as any other model. That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
Yes, at the level of a giant look-up table. At that point it is not a useful abstraction.
A giant look-up table can model any computable agent as well. Utility functions have the potential advantage of explicitly providing a relatively concise representation, though. If you can obtain a compressed version of your theory, that is good.
That’s because if you have any other model of what an agent does, you can pretty simply “wrap” it—and turn it into a utility-based framework.
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I’m not even asking you to model a full human or even the teeniest fraction of one. Just show me how to manage metacognitive behaviors (of the types discussed in this thread) using your model “compute utility for all possible actions and then pick the best.”
Show me how that would work for behaviors that affect the selection process, and that should be sufficient to demonstrate that utility function-based behavior isn’t completely worthless as a basis for creating a “thinking” intelligence.
(Note, however, that if in the process of implementing this, you have to shove the metacognition into the computation of the utility function, then you are just proving my point: the utility function at that point isn’t actually compressing anything, and is thus as useless a model as saying “everything is fire”.)
And I’ve given you such a model, which you’ve steadfastly refused to actually “wrap” in this way, but instead you just keep asserting that it can be done. If it’s so simple, why not do it and prove me wrong?
I have previously described the “wrapping” in question in some detail here.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions, for this reason.
I have previously described the “wrapping” in question in some detail here.
Well, that provides me with enough information to realize that you don’t actually have a way to make utility functions into a reduction or simplification of the intelligence problem, so I’ll stop asking you to produce one.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions
The argument that “utility-based systems can be made that aren’t that much more complex than just doing whatever you could’ve done in the first place” is like saying that your new file format is awesome because it only uses a few bytes more than an existing similar format, to represent the exact same information… and without any other implementation advantages!
Thanks, but I’ll pass.
(from the comment you linked)
Simply wrap the I/O of the non-utility model, assign the (possibly compound) action the agent will actually take in each timestep a utility of 1 and all other actions a utility of 0, and then take the highest-utility action in each timestep.
I’m not sure I understand—is this something that gives you an actual utility function that you can use, say, to get the utility of various scenarios, calculate expected utility, etc.?
If you have an AI design to which you can provide a utility function to maximize (Instant AI! Just add Utility!), it seems that there are quite a few things that AI might want to do with the utility function that it can’t do with your model.
So it seems that you’re not only replacing the utility function, but also the bit that decides which action to do depending on that utility function. But I may have misunderstood you.
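For concreteness, the “wrapping” construction quoted above might be sketched as follows, assuming the existing non-utility agent is simply a function from an observation to the action it would actually take; all names here are illustrative.

```python
# Minimal sketch of the quoted "wrapping": assign utility 1 to whatever action
# the wrapped agent would actually take, utility 0 to every other action, and
# then act by picking a highest-utility action. All names are illustrative.

def wrap_as_utility(agent_policy):
    """Turn an arbitrary observation -> action policy into a utility function."""
    def utility(observation, action):
        return 1 if action == agent_policy(observation) else 0
    return utility

def act(observation, available_actions, utility):
    """The 'maximize utility' step: pick a highest-utility action."""
    return max(available_actions, key=lambda a: utility(observation, a))

# Toy usage: a thermostat-like agent, wrapped and then driven by utility
# maximization, reproduces its own behaviour exactly.
thermostat = lambda temp: "heat_on" if temp < 20 else "heat_off"
u = wrap_as_utility(thermostat)
assert act(15, ["heat_on", "heat_off"], u) == thermostat(15)
assert act(25, ["heat_on", "heat_off"], u) == thermostat(25)
```

Note that the resulting utility function is defined directly in terms of the wrapped policy, which is precisely the property the surrounding replies are arguing over.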