This is a clever idea, but I don’t think it works: you need to unpack the question of why a decision algorithm would deem cooperation non-optimal, and see whether that coincides with a special class of problems where cooperation is generally non-optimal.
I think what gets an offer labeled as blackmail is the recognition that cooperation would lead the other party to repeatedly use their discretion to make my remaining options even worse. So blackmail and trade differ in that:
If I cooperate with a blackmailer, they are more likely to spend resources “digging up dirt” on me, kidnapping my loved ones, etc. I don’t want to be in that position, regardless of what I decide to do then.
If I trade with a trade-offerer, they are more likely to spend resources acquiring goods that I may want to trade for. I do want to be in the position where others make things available to me that I want (except where I’d be competing with them in that process).
And yes, these two situations are equivalent, except for what I want the offerer to do, which I think is what yields the distinction, not the concept of a baseline in the initial offer.
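To make the asymmetry concrete, here’s a toy simulation (all payoffs, probabilities, and update rules are invented for illustration, not taken from anyone’s model): cooperation rewards the offerer, so they invest in producing more offers of the same kind, which is exactly what I don’t want in the blackmail case and do want in the trade case.

```python
# Toy model: my policy toward offers changes how many offers get made.

def total_payoff(cooperate: bool, accept_value: float, refuse_value: float,
                 n_rounds: int = 10) -> float:
    """My total payoff over repeated rounds against an adaptive offerer.

    accept_value: my payoff when an offer is made and I accept it.
    refuse_value: my payoff when an offer is made and I refuse it.
    """
    p_offer = 0.5  # chance the offerer bothers to produce an offer this round
    total = 0.0
    for _ in range(n_rounds):
        total += p_offer * (accept_value if cooperate else refuse_value)
        # Cooperation makes offer-production profitable for them, so they
        # invest in more of it; refusal makes it a waste of their resources.
        p_offer = min(1.0, p_offer + 0.2) if cooperate else max(0.0, p_offer - 0.2)
    return total

# Blackmail: paying costs me 1, being exposed costs me 3. Round by round,
# paying beats refusing -- but paying also funds more "digging up dirt".
print(total_payoff(True,  accept_value=-1, refuse_value=-3))  # always pay: -9.1
print(total_payoff(False, accept_value=-1, refuse_value=-3))  # never pay: -2.7

# Trade: accepting nets me +1, declining nets 0. Here I *want* the offerer
# to keep producing offers, so cooperation wins on both counts.
print(total_payoff(True,  accept_value=1, refuse_value=0))    # 9.1
print(total_payoff(False, accept_value=1, refuse_value=0))    # 0.0
```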
You can phrase blackmail as a sort of addiction situation where dynamic inconsistency potentially leaves me vulnerable to exploitation. My preferences at any time t are:
1) Not have an addiction.
2) Have an addiction, and take some more of the drug.
3) Have an addiction, and not take the drug.
where I’m addicted at time t, and taking the drug will make me addicted at time t+1 (and I otherwise won’t be addicted at t+1).
In this light, one can view the classification of something as blackmail as any feeling or mechanism that makes me choose 3) over 2). “2) looks appealing, but I feel a strong compulsion to do 3).” Agents with such a mechanism gain resistance to dynamic inconsistency.
In contrast, if “addiction” were good, and the item in 1) were moved below 3) in my preference ranking, then I wouldn’t benefit from a mechanism that makes me choose 3) over 2). That would feel like trade.
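Here’s a minimal sketch of that dynamic (utilities invented, higher is better): a greedy agent evaluating only its current preferences takes the drug forever, while an agent with the “compulsion” override eats one bad step and then escapes. Flip the ranking as described above, and the same override backfires.

```python
# State is just "addicted or not"; one-step utilities match the ranking:
# 1) not addicted  >  2) addicted & take drug  >  3) addicted & abstain.
U_BLACKMAIL = {(False, False): 3, (True, True): 2, (True, False): 1}

def run(addicted: bool, compulsion: bool, u: dict, steps: int = 5) -> int:
    """Greedy agent; `compulsion` forces choosing 3) over 2) when addicted."""
    total = 0
    for _ in range(steps):
        if not addicted:
            take = False                               # nothing to resist
        elif compulsion:
            take = False                               # override: pick 3) despite 2) ranking higher now
        else:
            take = u[(True, True)] > u[(True, False)]  # myopic: 2) beats 3)
        total += u[(addicted, take)]
        addicted = take  # taking the drug keeps me addicted at t+1
    return total

print(run(True, compulsion=False, u=U_BLACKMAIL))  # 10: stuck at 2) forever
print(run(True, compulsion=True,  u=U_BLACKMAIL))  # 13: one step at 1, then free

# If "addiction" were good -- 1) moved below 3) -- the override only hurts:
U_TRADE = {(True, True): 3, (True, False): 2, (False, False): 1}
print(run(True, compulsion=False, u=U_TRADE))      # 15: keep "trading", happily
print(run(True, compulsion=True,  u=U_TRADE))      # 6: the mechanism backfires
```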
Yes, the distinction is in the way you prefer to acausally (observation-counterfactually) influence the other player. Not being offered a trade shouldn’t be considered irrelevant by your decision algorithm, even if, given the observations you have, that event is impossible. Like in Counterfactual Mugging, but with the other player instead of a fair coin. Newcomb’s problem with transparent boxes is also relevant.
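For concreteness, with what I take to be the usual Counterfactual Mugging stakes (lose $100 on tails if you pay, gain $10,000 on heads iff the predictor expects you to pay on tails), the comparison between *policies*, fixed before any observation, looks like this:

```python
# Counterfactual Mugging: the coin is fair; a perfect predictor pays out
# on heads only to agents whose policy is to pay when asked on tails.

def policy_ev(pays_when_asked: bool) -> float:
    heads = 10_000 if pays_when_asked else 0  # reward for the paying *policy*
    tails = -100 if pays_when_asked else 0    # the actual payment on tails
    return 0.5 * heads + 0.5 * tails

print(policy_ev(True))   # 4950.0 -- paying wins in expectation
print(policy_ev(False))  # 0.0 -- yet after seeing tails, paying looks like a pure loss
```

The “not being offered a trade” case has the same shape: the branch you never observe still feeds into the other player’s decision to make (or not make) the offer at all.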
Exactly, which is why I consider the hazing problem to be isomorphic to CM, and akrasia to be a special case of the hazing problem.
Time-inconsistency seems unrelated. It may be a problem in implementing the strategy “don’t respond to blackmail”, but one can certainly TRY to blackmail a time-consistent person, if one believes them to be irrational or if they have only one blackmail-worthy secret.