Commenting with Medium feels like it would be reverse anonymity—if you merely see my real name and Facebook profile, you won’t know who I am :P
It’s tempting to drag in utility functions over actions. So I will. VNM proved that VNM-rational agents have them, after all. Rather than trying to learn my utility function over outcomes, you seem to be saying, why not try to learn my utility function over actions?
These seem somewhat equivalent—one should be a transform of the other. And what seems odd is that you’re arguing (reasonably) that using limited resources to learn the utility function over actions performs better than using those resources to learn the utility function over outcomes—even according to the utility function over outcomes!
I wonder if there’s a theorem here.
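A natural candidate for that transform, just as a sketch of my guess rather than anything established here: if each action $a$ induces a lottery over outcomes, then the utility function over actions would be the expected-utility map

$U_{\text{actions}}(a) = \mathbb{E}\left[\,U_{\text{outcomes}}(o) \mid a\,\right] = \sum_{o} P(o \mid a)\, U_{\text{outcomes}}(o),$

and the question would be when learning the left-hand side directly beats learning $U_{\text{outcomes}}$ and $P(o \mid a)$ separately.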
Note that the agent is never faced with a gamble over actions—it can choose to deterministically take whatever action it desires. So while VNM gives you a utility function over actions, it is probably uninteresting.
The broader point—that we are learning some transform of preferences, rather than learning preferences directly—seems true. I think this is an issue that people in AI have had some (limited) contact with. Some algorithms learn “what a human would do” (e.g. learning to play Go by predicting human Go moves and doing what you think a human would do). Other algorithms (e.g. inverse reinforcement learning) learn what values explain what a human would do, and then pursue those values. I think the conventional view is that inverse reinforcement learning is harder, but can yield more robust policies that generalize better. Our situation seems to be somewhat different, and it might be interesting to understand why and to explore the comparison more thoroughly.
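To make that contrast concrete, here is a toy sketch of the two approaches; the environment, feature map, and the perceptron-style reward fit are all my own illustrative inventions, not anything from the post:

```python
# Toy illustration (hypothetical setup): "do what the human does" vs.
# "infer the values that explain what the human does, then pursue them".
import numpy as np

rng = np.random.default_rng(0)

# Small discrete world: a linear "true" reward over features of (state, action).
n_states, n_actions, n_features = 3, 4, 5
features = rng.normal(size=(n_states, n_actions, n_features))
true_w = rng.normal(size=n_features)

def human_action(s):
    # The demonstrator picks the action with the highest true reward.
    return int(np.argmax(features[s] @ true_w))

demos = [(int(s), human_action(s)) for s in rng.integers(0, n_states, size=200)]

# Approach 1: learn "what a human would do" (imitation / behavioral cloning).
# Here: just copy the most frequently demonstrated action in each state.
seen = {}
for s, a in demos:
    seen.setdefault(s, []).append(a)
cloned_policy = {s: max(set(acts), key=acts.count) for s, acts in seen.items()}

# Approach 2: learn what values explain the behavior (a crude IRL stand-in).
# Fit weights w so the demonstrated action scores higher than the alternatives
# (perceptron-style updates), then act greedily on the inferred reward.
w = np.zeros(n_features)
for _ in range(50):
    for s, a in demos:
        predicted = int(np.argmax(features[s] @ w))
        if predicted != a:
            w += features[s, a] - features[s, predicted]
irl_policy = {s: int(np.argmax(features[s] @ w)) for s in range(n_states)}

print("cloned policy:", cloned_policy)
print("inferred-values policy:", irl_policy)
```

The inferred-values policy is defined on every state, including ones that never appear in the demonstrations, which is the usual intuition for why inverse reinforcement learning can generalize better; the cloned policy only knows what to do where it has seen the human act.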