What’s the utility function of the predictor? Is there necessarily a utility function for the predictor such that the predictor’s behavior (which is arbitrary) corresponds to maximizing its own utility? (Perhaps this is mentioned in the paper, which I’ll look at.)
EDIT: do you mean to reduce a 2-player game to a single-agent decision problem, instead of vice versa?
[Apologies for the delay]

> Is there necessarily a utility function for the predictor such that the predictor’s behavior (which is arbitrary) corresponds to maximizing its own utility?

You’re right, the predictor’s behavior might not be compatible with utility maximization against any beliefs. I guess we’re often interested in cases where we can think of the predictor as an agent. The predictor’s behavior might be irrational in the restrictive sense above,[1] but to the extent that we think of it as an agent, my guess is that we can still get away with using a game-theory-flavored approach.
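To make that concrete (a standard revealed-preference illustration I’m adding here, not something from the paper): suppose we observed the predictor’s choice function $c$ over menus of actions, treated as single-valued, and saw

$$c(\{a, b\}) = a, \qquad c(\{a, b, c\}) = b.$$

No utility function rationalizes both choices: the first requires $u(a) > u(b)$ and the second $u(b) > u(a)$, and since the same prospects appear in both menus, appealing to different beliefs doesn’t dissolve the contradiction. This is the classic violation of the weak axiom of revealed preference.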
[1] For instance, if the predictor is unaware of some crucial hypothesis, or applies mild optimization rather than expected value maximization.
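For the mild-optimization case, one concrete form (my gloss; the footnote doesn’t commit to it) is a quantilizer: rather than playing $a^{*} = \operatorname{argmax}_a \mathbb{E}[U \mid a]$, it samples from a base distribution $\gamma$ restricted to the top $q$-quantile of actions,

$$\pi_q(a) \;\propto\; \gamma(a)\,\mathbf{1}\big[\mathbb{E}[U \mid a] \ge Q_{1-q}\big],$$

where $Q_{1-q}$ is the $(1-q)$-quantile of $\mathbb{E}[U \mid a]$ under $\gamma$. A policy like this deliberately randomizes over non-optimal actions, so in general its behavior can’t be recovered as exact expected-utility maximization against any beliefs.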