I read it, but I’m not at all sure it answers the question. It makes three points:
1. “if one takes the psychological preference approach (which derives choices from preferences), and not the revealed preference approach, it seems natural to define a preference relation as a potentially incomplete preorder, thereby allowing for the occasional “indecisiveness” of the agents”
I don’t see how an agent being indecisive is relevant to whether its preference ordering is complete. Not picking A or B is itself a choice: the agent chooses not to pick either option.
2. “Secondly, there are economic instances in which a decision maker is in fact composed of several agents each with a possibly distinct objective function. For instance, in coalitional bargaining games, it is in the nature of things to specify the preferences of each coalition by means of a vector of utility functions (one for each member of the coalition), and this requires one to view the preference relation of each coalition as an incomplete preference relation.”
So, if the AI is made of multiple agents, each with its own utility function, and we use a vector utility function to describe the AI… the AI still makes a particular choice between A and B (or it refuses to choose, which is itself a choice). Isn’t this a flaw of the vector-utility-function description, rather than a real property of the AI? (I try to sketch what I mean in code at the end of this comment.)
3. “The same reasoning applies to social choice problems; after all, the most commonly used social welfare ordering in economics, the Pareto dominance …”
I’m not sure how this is related to AI.
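To make my confusion concrete, here is a toy sketch (entirely my own construction, not anything from the paper): a “committee” AI whose preference is described by a vector of sub-agent utilities. Pareto dominance over those vectors leaves some pairs incomparable, but when the composite agent is actually asked to choose, it still outputs something. The function names and the tie-break rule below are made up for illustration.

```python
def pareto_compare(u_a, u_b):
    """Compare two utility vectors under Pareto dominance.

    Returns 'A', 'B', 'indifferent', or 'incomparable' (the incompleteness)."""
    a_at_least = all(x >= y for x, y in zip(u_a, u_b))
    b_at_least = all(y >= x for x, y in zip(u_a, u_b))
    if a_at_least and b_at_least:
        return "indifferent"
    if a_at_least:
        return "A"
    if b_at_least:
        return "B"
    return "incomparable"


def committee_choice(u_a, u_b):
    """The composite agent still has to act, even when Pareto dominance is
    silent -- here an arbitrary tie-break (total utility) resolves the choice."""
    verdict = pareto_compare(u_a, u_b)
    if verdict in ("A", "B"):
        return verdict
    # Forced to act anyway: some rule has to break the tie.
    return "A" if sum(u_a) >= sum(u_b) else "B"


# Two sub-agents disagree: option A gives utilities (3, 1), option B gives (1, 3).
print(pareto_compare((3, 1), (1, 3)))    # incomparable
print(committee_choice((3, 1), (1, 3)))  # A -- the behaviour is still a definite pick
```

So the incompleteness seems to live in the vector description, while the observed behaviour is still a complete choice rule.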
Do you have any ideas?