Explicit dependence bias detected. How agent 1 will decide generally depends on how agent 2 will decide: not just on the actual action, but on the algorithm, that is, on how the action is defined, not just on what is being defined. In multi-agent games this can’t be sidestepped, and restating the problem can’t sever the ambient dependencies.
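To make the “algorithm, not just action” point concrete, here is a toy sketch (entirely hypothetical policies, purely for illustration): two versions of agent 2 can take the same action in the situation that actually occurs, yet agent 1 treats them differently, because agent 1’s decision also consults what agent 2 would do on the branch not taken.

```python
# Toy illustration: agent 1's decision depends on agent 2's *policy*
# (how its action is defined), not merely on the action agent 2 ends up taking.

def agent2_conditional(agent1_action):
    # Cooperates only if agent 1 cooperates.
    return "cooperate" if agent1_action == "cooperate" else "defect"

def agent2_unconditional(agent1_action):
    # Cooperates no matter what agent 1 does.
    return "cooperate"

def agent1(agent2_policy):
    # Agent 1 inspects both branches of agent 2's policy, so its choice
    # depends on the whole definition, not on the single realized action.
    if agent2_policy("cooperate") == "cooperate" and agent2_policy("defect") == "defect":
        return "cooperate"
    return "defect"

# In both actual plays agent 2 ends up cooperating, yet agent 1 decides
# differently, because the two policies differ on the branch not taken:
print(agent1(agent2_conditional))    # cooperate
print(agent1(agent2_unconditional))  # defect
```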
I don’t see how that’s relevant. “I will release the child iff you give me the money, otherwise kill them” still looks like blackmail in a way “I will give you the money iff you give me the car, otherwise go shopping somewhere else” does not, even once the agents have decided, for whatever reason, to make their dependencies explicit.
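One rough way to cash out that intuition (both the numbers and the classification rule here are mine, purely illustrative): what makes the first statement read as blackmail is where its “otherwise” branch sits relative to the target’s no-interaction baseline, not the conditional form itself.

```python
# Illustrative numbers only: compare each "otherwise" branch to the utility
# the target would have had if the other party had never shown up.

NO_INTERACTION = 0  # target's utility if the other party had never appeared

kidnapper_offer = {
    "comply": -10,    # pay the ransom, child comes back
    "refuse": -1000,  # child is killed
}
car_seller_offer = {
    "comply": +5,     # sell the car, receive the money
    "refuse":  0,     # buyer shops elsewhere; you keep the car
}

def classify(offer, baseline=NO_INTERACTION):
    # A conditional whose "refuse" branch drags the target below the baseline
    # reads as a threat; one whose branches never fall below it reads as a trade.
    return "threat" if offer["refuse"] < baseline else "trade"

print(classify(kidnapper_offer))   # threat
print(classify(car_seller_offer))  # trade
```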
Bias denied.
First, I make no claims about the outcome of the negotiation, so there is no way that privileging any one dependence over another could bias my estimate of it.
Second, I didn’t make any claim about any actual dependence, merely about communication, and it would certainly be in the interest of a would-be blackmailer to frame the dependence in the most inescapable way they can.
Third, agent 2 would need to be able to model communicated dependencies sensibly whether or not it has a concept of blackmail. How it models the dependence internally would have a bearing on whether the blackmail succeeds, but that is a separate problem and should have no influence on whether the agent can recognize the relative utilities (see the sketch below).
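A minimal sketch of that third point (the numbers are made up): an agent that simply takes a communicated dependence at face value and picks its action by the relative utilities, with no concept of blackmail anywhere in it.

```python
# Minimal sketch: choosing by relative utilities over a communicated dependence.

def best_response(dependence, utility):
    """dependence: my action -> the other agent's (claimed) response;
    utility: (my action, their response) -> my utility."""
    return max(dependence, key=lambda a: utility(a, dependence[a]))

# Hypothetical numbers for the ransom example above:
dependence = {"pay": "release child", "refuse": "kill child"}

def utility(my_action, their_response):
    u = 0
    u -= 10 if my_action == "pay" else 0
    u -= 1000 if their_response == "kill child" else 0
    return u

print(best_response(dependence, utility))  # pay
```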
I wasn’t thinking clearly; I no longer see this as an instance of explicit dependence bias, though it could be one. I’ll keep working on this question, but no deadlines.
Correction: Retracted, likely wrong.