Agent 1 negotiates with agent 2. Agent 1 can take option A or B, while agent 2 can take option C or D. Agent 1 communicates that they will take option A if agent 2 takes option C and will take option B if agent 2 takes option D.
If utilities are such that for
agent 1: A > B, C < D, A + C < B + D
and for
agent 2: A < B, C > D, A + C < B + D
or
agent 1: A < B, C > D, A + C > B + D
agent 2: A > B, C < D, A + C > B + D
this is an offer.
If
agent 1: A < B, C < D, A + C < B + D
agent 2: A < B, C > D, A + C < B + D
or
agent 1: A > B, C > D, A + C > B + D
agent 2: A > B, C < D, A + C > B + D
this is blackmail by agent 1.
If
agent 1: A > B, C < D, A + C < B + D
agent 2: A < B, C < D, A + C < B + D
or
agent 1: A < B, C > D, A + C > B + D
agent 2: A > B, C > D, A + C > B + D
this is agent 1 giving in to agent 2’s blackmail.
I don’t think I mentioned anything about any “default” anywhere?
(Unless I overlooked something, in the other cases there is either no reason to negotiate, no prospect of success in negotiating, or at least one party acting irrationally. It is implicitly assumed that preferences between combinations of the options depend only on the preferences between the individual options.)
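To make the case analysis above concrete, here is a minimal Python sketch of it, under the additivity assumption from the parenthetical: each agent assigns a utility to each individual option, combinations add, and A > B etc. compare those individual utilities. All names and the example numbers below are my own.

```python
# Minimal sketch of the case analysis above (names and structure are my own).
# Assumption, as in the parenthetical: each agent assigns a utility to each
# individual option, combinations add, and A > B etc. compare those utilities.

def prefs(u_a, u_b, u_c, u_d):
    """Signs of the three comparisons: A vs B, C vs D, A+C vs B+D."""
    sign = lambda x: (x > 0) - (x < 0)
    return (sign(u_a - u_b), sign(u_c - u_d), sign((u_a + u_c) - (u_b + u_d)))

def classify(agent1, agent2):
    """Classify agent 1 communicating 'A if C, B if D', given each agent's
    utilities for the individual options as a tuple (u(A), u(B), u(C), u(D))."""
    p1, p2 = prefs(*agent1), prefs(*agent2)
    cases = {
        "offer": [
            ((+1, -1, -1), (-1, +1, -1)),
            ((-1, +1, +1), (+1, -1, +1)),
        ],
        "blackmail by agent 1": [
            ((-1, -1, -1), (-1, +1, -1)),
            ((+1, +1, +1), (+1, -1, +1)),
        ],
        "agent 1 giving in to agent 2's blackmail": [
            ((+1, -1, -1), (-1, -1, -1)),
            ((-1, +1, +1), (+1, +1, +1)),
        ],
    }
    for label, patterns in cases.items():
        if (p1, p2) in patterns:
            return label
    return "other: no reason to negotiate, no prospect of success, or irrationality"

# First offer case: agent 1 prefers taking A and prefers agent 2 taking D,
# yet prefers the combination B+D; agent 2 prefers agent 1 taking B and
# prefers taking C, yet also prefers the combination B+D.
print(classify((3, 2, 0, 4), (1, 5, 4, 3)))  # -> offer
```

Any utility profile that matches none of the six patterns falls into the remaining cases mentioned in the parenthetical: no reason to negotiate, no prospect of success, or at least one party acting irrationally.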
Notice that under this definition punishing someone for a crime is a form of blackmail.
I’m not sure that’s a problem.
Or maybe: change “blackmail” in the above to “threat”, and define blackmail as a threat not legitimized by social conventions.
Well, at least we’ve unpacked the concept of “default” into the concept of social conventions.
Or into a concept of ethics. Blackmail involves a threat of unethical punishment.
I think we can do better than that. In cases where the law is morally justified, punishing someone for a crime is retaliation. I think part of the intent of the concept of blackmail is that the threatened harm be unprovoked.
I don’t understand your notation. What does A > B mean? The utilities of A and B depend on whether the other player chooses C or D, no?
Correction: Retracted, likely wrong.
Explicit dependence bias detected. How agent 1 will decide generally depends on how agent 2 will decide (not just on the actual action but on the algorithm; that is, on how the action is defined, not just on what is being defined). In multi-agent games, this can’t be sidestepped, and restating the problem can’t sever ambient dependencies.
I don’t see how that’s relevant. “I will release the child iff you give me the money, otherwise kill them” still looks like blackmail in a way “I will give you the money iff you give me the car, otherwise go shopping somewhere else” does not, even once the agents have decided, for whatever reason, to make their dependencies explicit.
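For concreteness, these two examples can be mapped onto the inequalities from the case analysis above. The utility numbers below are purely illustrative assumptions of mine; agent 1 is the speaker in both scenarios.

```python
# Illustrative utilities only (my own numbers); agent 1 is the speaker in both.

# Kidnapping: A = release the child, B = kill them; C = pay, D = refuse to pay.
A1, B1, C1, D1 = 1, 0, 10, 0      # kidnapper: A > B, C > D, A + C > B + D
A2, B2, C2, D2 = 100, 0, -10, 0   # parent:    A > B, C < D, A + C > B + D
assert A1 > B1 and C1 > D1 and A1 + C1 > B1 + D1
assert A2 > B2 and C2 < D2 and A2 + C2 > B2 + D2   # second blackmail case

# Car purchase: A = hand over the money, B = shop elsewhere;
#               C = hand over the car,   D = keep it.
A1, B1, C1, D1 = -10, 0, 20, 0    # buyer:  A < B, C > D, A + C > B + D
A2, B2, C2, D2 = 10, 0, -5, 0     # seller: A > B, C < D, A + C > B + D
assert A1 < B1 and C1 > D1 and A1 + C1 > B1 + D1
assert A2 > B2 and C2 < D2 and A2 + C2 > B2 + D2   # second offer case
```

Under these assignments the kidnapping threat lands in the second blackmail case and the car purchase in the second offer case, matching the intuition in the comment above.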
Bias denied.
First, I make no claims about the outcome of the negotiation, so there is no way privileging any dependence over any other could bias my estimation thereof.
Second, I didn’t make any claim about any actual dependence, merely about communication, and it would certainly be in the interest of a would-be blackmailer to frame the dependence in the most inescapable way they can.
Third, agent 2 would need to be able to model communicated dependencies sensibly whether or not it has a concept of blackmail. How it models the dependence internally would have a bearing on whether the blackmail succeeds, but that’s a separate problem and should have no influence on whether the agent can recognize the relative utilities.
I wasn’t thinking clearly; I no longer see this as an instance of explicit dependence bias, though it could be one. I’ll keep working on this question, but with no deadlines.