Still nowhere :-(
If you want to pursue the subject, I recommend googling Law Review articles about blackmail—the lawyers have to deal with this problem as a practical matter. Ignore the ones which dive deep into case law details, but look at ones which take the philosophical start-from-first-principles approach.
Are you familiar with that literature?
I scanned through some of it a long time ago, so I know it exists but, unfortunately, cannot offer links.
This is the first time I’ve heard of this dilemma (so this post is really just thinking aloud). It seems to me that trade usually doesn’t require agents to engage in deep modeling of each other’s behaviour. If I go down to the market place and offer the man at the stall £5 for a pair of shoes, and he declines and I walk away—the furthest thing from my mind is trying to step through my model of human behaviour to figure out how to persuade him to accept. I had a simple model—to wit that the £5 was sufficient incentive to effect the trade—and when that model turned out false I just abandoned negotiations without trying to calculate the incentive effects of doing so.
This isn’t to say that everything involving deep modeling of human behaviour is necessarily an instance of extortion, though the converse would seem to hold (every act of extortion involves some higher-order modeling between extorter and victim). However, extortion usually involves the extorter trying to increase the cost of select outcomes above what they would be had the extorter not explicitly acted to increase them, which is why deep modeling of the victim is required. Unless my costly, deep model of my trading partner is paying rent to me (with respect to a given episode of negotiation) only in ways that don’t involve allowing me to increase the cost of a certain set of outcomes to him, I am probably engaging in extortion.
If I walk away from a market stall with the intent of provoking the seller into lowering his price, I’m not increasing the cost of any outcome to him: the cost of me walking away is a constant. So in this case my model of his behaviour is not aimed at increasing the cost of any outcome to him; I’m effectively just placing a bet. If I threaten to break his legs if he refuses the sale, that’s placing a bet on a rigged game.
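To make that criterion a bit more concrete, here is a toy sketch of the check I have in mind. The payoff numbers and the helper `raises_cost_above_baseline` are mine, invented purely for illustration, and “cost of an outcome” is compressed to a single number per outcome, so treat it as a sketch rather than a definition:

```python
# Compare the seller's cost of the "no sale" outcome under different actions by
# the buyer, against the baseline where the buyer does nothing in particular.
# All numbers are invented for illustration.

BASELINE  = {"no_sale": 0}    # buyer does nothing beyond declining to trade
WALK_AWAY = {"no_sale": 0}    # buyer walks away: the no-sale outcome costs the seller no more
THREATEN  = {"no_sale": 100}  # buyer threatens violence: the no-sale outcome is now much costlier

def raises_cost_above_baseline(action_costs, baseline_costs):
    """True if the action makes some outcome costlier for the other party than
    it would have been had the actor not acted at all."""
    return any(cost > baseline_costs.get(outcome, 0)
               for outcome, cost in action_costs.items())

print(raises_cost_above_baseline(WALK_AWAY, BASELINE))  # False -> ordinary bargaining, i.e. placing a bet
print(raises_cost_above_baseline(THREATEN, BASELINE))   # True  -> the rigged game, i.e. extortion
```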
What of companies that spend millions analysing markets before setting their prices? That seems to involve deep modelling, yet is canonically seen as trade.
They usually don’t have any way to leverage their models to increase the cost of not buying their product or service, though, so such a situation is still missing at least one criterion.
There is a complication involved, since it’s possible to increase the cost to others of not doing business with you in “fair” ways. E.g. the invention of the fax machine reduced effective demand for message boys to run between office buildings, hence increasing their cost and the operating costs of anyone who refused to buy a fax machine.
Though I don’t believe any company long held a monopoly on the fax market, if a company did establish such a monopoly in order to control prices, that again may be construed as extortion.
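To put some numbers on the fax example (all figures invented): the point is that the holdout’s costs can rise without the fax vendor ever acting against that particular firm.

```python
# Invented per-message figures, just to illustrate the shape of the change.
cost_before = 10       # messengers plentiful: what everyone used to pay per message
cost_holdout_now = 25  # messenger trade has shrunk: what a firm refusing to buy a fax pays now
cost_fax_now = 2       # what a firm that bought a fax machine pays now

print("Holdout's cost rose by", cost_holdout_now - cost_before)                    # 15
print("Gap between holdout and fax adopter is now", cost_holdout_now - cost_fax_now)  # 23
print("...and nobody acted against the holdout firm in particular.")
```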
Modern social networks and messaging networks would seem to be a strong counterexample. Any software with both network effects and intentional lock-in mechanisms, really.
And honestly, calling such products a blend of extortion and trade seems intuitively about right.
To try to get at the extortion / trade distinction a bit better:
Schelling gives us definitions of promises and threats, and also observes there are things that are a blend of the two. The blend is actually fairly common! I expect there’s something analogous with extortion and trade: you can probably come up with pure examples of both, but in practice a lot of examples will be a blend. And a lot of the ‘things we want to allow’ will look like ‘mostly trade with a dash of extortion’ or ‘mostly trade but both sides also seem to be doing some extortion’.
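One way to cash out the Schelling point in code; this is my own rough gloss, not Schelling’s formalism, and `classify_commitment` and its flags are a simplification I’m introducing. The framing: a pledge only needs commitment if carrying it out would be against the pledger’s ex-post interest, and whether that binding pledge is triggered by the other side’s compliance or their refusal is what separates promise from threat, with blends binding in both branches.

```python
def classify_commitment(pledge_if_comply_is_costly: bool,
                        pledge_if_refuse_is_costly: bool) -> str:
    """Rough gloss: each flag says whether the pledged response in that branch
    is something the pledger would prefer NOT to carry out once the branch is
    reached (i.e. the pledge genuinely binds them)."""
    if pledge_if_comply_is_costly and pledge_if_refuse_is_costly:
        return "blend of promise and threat"
    if pledge_if_comply_is_costly:
        return "promise"
    if pledge_if_refuse_is_costly:
        return "threat"
    return "cheap talk (no commitment needed)"

# "Pay me and I'll hand over goods I'd rather keep"                 -> promise
print(classify_commitment(True, False))
# "Refuse the sale and I'll break your legs (and risk jail for it)" -> threat
print(classify_commitment(False, True))
# "Comply and I reward you at my expense; refuse and I punish you
#  at my expense"                                                   -> blend
print(classify_commitment(True, True))
```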
The cost of not buying is not the same thing as the cost of switching.
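In toy numbers (all invented): for a network-effect product with lock-in, the cost of never adopting and the cost of leaving once your data and contacts live there can be very different, and it is the second one that the lock-in mechanisms raise.

```python
# Invented figures for a hypothetical lock-in product.
cost_of_not_buying = 5   # never join: you simply forgo the network's benefit
cost_of_switching  = 50  # leave after years of use: lost contacts, history, formats

# The earlier "cost of not buying" criterion looks at the first number;
# intentional lock-in mostly pushes up the second.
print(cost_of_switching - cost_of_not_buying)  # 45: the gap that lock-in creates
```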