The correct way of setting up the problem is to require that our agent be indifferent to whether the other agent is a person (and conversely).
Some people may find it difficult to satisfy that requirement. In fact, most people are not indifferent.
A better approach, IMHO, is to stipulate that the published payoff matrix already ‘factors in’ any benevolence due to the other agent by reason of ethical considerations.
One objection to my approach might be that for a true utilitarian, there is no possible assignment of selfish utilities to outcomes that would result in the published payoff matrix as the post-ethical-reflection result. But, to my mind, this is just one more argument against utilitarianism as a coherent ethical theory.
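To make that objection concrete, here is a minimal sketch (my own illustration; the payoff numbers are just the usual textbook Prisoner's Dilemma, and the "benevolence weight" alpha is an assumption about how ethical reflection transforms selfish payoffs): if each agent's post-reflection payoff is its own selfish payoff plus alpha times the other's, then a true utilitarian (alpha = 1) ends up with both players getting the same number in every cell, which the standard published PD matrix can never match.

```python
# Sketch, not a definitive model: assume ethical reflection maps selfish payoffs
# (s1, s2) in each cell to published payoffs (s1 + alpha*s2, s2 + alpha*s1).
# For a true utilitarian, alpha = 1, so both published entries in any cell equal
# s1 + s2 -- they must coincide. The standard PD payoffs below (illustrative
# numbers) have cells like (0, 5) and (5, 0), so no assignment of selfish
# utilities can produce them as the post-ethical-reflection result.

# Published (post-ethical-reflection) Prisoner's Dilemma: (row payoff, column payoff)
published = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def publishable_by_utilitarian(matrix):
    """True iff every cell could arise as (s1 + s2, s1 + s2) for some selfish
    payoffs, i.e. both entries coincide in every cell."""
    return all(a == b for a, b in matrix.values())

print(publishable_by_utilitarian(published))  # False: no selfish assignment works
```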