And I’ve given you such a model, which you’ve steadfastly refused to actually
“wrap” in this way, but instead you just keep asserting that it can be done.
If it’s so simple, why not do it and prove me wrong?
I have previously described the “wrapping” in question in some detail here.
Well, that provides me with enough information to realize that you don’t actually have a way to make utility functions into a reduction or simplification of the intelligence problem, so I’ll stop asking you to produce one.
A utility-based model can be made which is not significantly longer than the shortest possible model of the agent’s actions for this reason.
The argument that “utility-based systems can be made that aren’t that much more complex than just doing whatever you could’ve done in the first place” is like saying that your new file format is awesome because it only uses a few bytes more than an existing similar format to represent the exact same information… and without any other implementation advantages!
Thanks, but I’ll pass.

(from the comment you linked)

Simply wrap the I/O of the non-utility model, and then assign the (possibly compound) action the agent will actually take in each timestep utility 1, assign all other actions utility 0, and then take the highest-utility action in each timestep.
I’m not sure I understand: is this something that gives you an actual utility function that you can use, say, to get the utility of various scenarios, calculate expected utility, etc.?
If you have an AI design to which you can provide a utility function to maximize (Instant AI! Just add Utility!), it seems that there are quite a few things that AI might want to do with the utility function that it can’t do with your model.
So it seems that you’re not only replacing the utility function, but also the bit that decides which action to do depending on that utility function. But I may have misunderstood you.
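To make the construction under discussion concrete, here is a minimal sketch of the “wrapping” in Python. It is only an illustration under stated assumptions: the NonUtilityAgent, WrappedUtilityAgent, and the two-action world are invented for the example and do not come from the thread.

    # Minimal sketch of the wrapping described above (illustrative names only).
    # Assumes we already have some non-utility agent mapping observations to actions.

    class NonUtilityAgent:
        def act(self, observation):
            # Stand-in for whatever the original model actually does.
            return "left" if observation < 0 else "right"

    class WrappedUtilityAgent:
        # Assign utility 1 to the action the wrapped agent would take this
        # timestep, utility 0 to every other action, then take the argmax.
        def __init__(self, inner, actions):
            self.inner = inner
            self.actions = actions

        def utility(self, observation, action):
            return 1.0 if action == self.inner.act(observation) else 0.0

        def act(self, observation):
            # "Take the highest utility action in each timestep."
            return max(self.actions, key=lambda a: self.utility(observation, a))

    agent = WrappedUtilityAgent(NonUtilityAgent(), actions=["left", "right"])
    assert agent.act(-3) == "left"
    assert agent.act(2) == "right"

Note that the “utility function” this produces only ranks the currently available actions by asking the wrapped model what it would do; it does not assign utilities to hypothetical scenarios or support expected-utility calculations over outcomes, which is the limitation the questions above are pointing at.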