Well, the key concept underlying strong resistance to extortion is reputation management. Once you understand the long-term costs of becoming identified as a vulnerable “mark” by those in the criminal underground, giving in to extortion can start to look a lot less attractive.
Tim, we are completely talking past each other here. To restate my position:
Nash in 1953 characterized rational two-party bargaining with threats. Part of the solution was to make the quantitative distinction between ‘non-credible’ threats (which should be ignored, because carrying them out would cost the threatener so much that doing so would be irrational) and ‘credible’ threats—threats which a threatener might rationally commit to carry out.
Since Nash is modeling the rationality of both parties here, it is irrational to resist a credible threat—in fact, to promise to do so is to make a non-credible threat yourself.
Hence, in Nash’s model, cost-less signaling is pointless if both players are assumed to be rational. Such signaling does not move the dividing line between threats that are credible, and rationally should succeed, and those that are non-credible and should fail.
As for the ‘costly signalling’ that takes place when non-credible threats are resisted—that is already built into the model. And a consequence of the model is that it is a net loss to attempt to resist threats that are credible.
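The credible/non-credible dividing line can be sketched with toy payoffs. This is an illustrative stylization, not Nash’s actual 1953 model, and every number below is hypothetical:

```python
# Toy sketch of the credible/non-credible threat distinction.
# All payoffs are hypothetical and chosen only for illustration.

def threat_is_credible(carry_out_payoff, back_down_payoff):
    """A threat is credible if, once the target resists, the threatener
    still does at least as well carrying it out as backing down."""
    return carry_out_payoff >= back_down_payoff

def rational_response(cost_of_complying, cost_if_carried_out,
                      threatener_carry_out, threatener_back_down):
    """The target's best reply under common knowledge of rationality."""
    if not threat_is_credible(threatener_carry_out, threatener_back_down):
        return "resist"  # a rational threatener will never execute it
    # Credible threat: compare complying with suffering the execution.
    return "comply" if cost_of_complying < cost_if_carried_out else "resist"

# A threatener who would be heavily punished for carrying out the threat
# (payoff -100) but loses nothing by walking away (payoff 0) is making a
# non-credible threat, so the rational target resists:
print(rational_response(cost_of_complying=50, cost_if_carried_out=1000,
                        threatener_carry_out=-100, threatener_back_down=0))
# → resist

# A threatener who gains slightly by following through (payoff 5) is
# making a credible threat; since complying is cheaper than suffering
# the execution, the rational target complies:
print(rational_response(cost_of_complying=50, cost_if_carried_out=1000,
                        threatener_carry_out=5, threatener_back_down=0))
# → comply
```

Promising in advance to resist the second kind of threat is itself non-credible in this framework, which is the point at issue above.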
All of this is made very clear in any good textbook on game theory. It would save us all a great deal of time if you keep your amateur political theorizing to yourself until you read those textbooks.
I am kinda surprised that you are in such a muddle about this—and are willing to patronise me over the issue!
“Don’t negotiate with terrorists” and “don’t give in to extortion” are well-known maxims. As this thread illustrates, you don’t seem to understand why they exist. I do understand. It isn’t terribly complicated. I expect I can explain it to you.
If a government gives in to terrorist demands during a hijacking, it sends a signal to all the other terrorists in the world that the government is vulnerable to extortion. Subsequently the government is likely to face more hijackings.
So… in addition to the obvious cost associated with the immediate demands of the terrorists, there is a hidden cost associated with gaining a reputation for giving in to terrorists. That hidden cost is often huge. Hence the strategy of refusing terrorist demands—even when giving in looks attractive on the basis of a naive cost-benefit analysis.
Other forms of extortion exhibit similar dynamics...
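The trade-off being described can be made concrete with a rough expected-cost sketch. All the rates, costs, and the discount factor below are hypothetical numbers chosen purely for illustration:

```python
# Rough expected-cost comparison: giving in is cheap per incident but,
# via reputation, raises the rate of future incidents. Every parameter
# here is a made-up illustrative value.

def expected_total_cost(per_incident_cost, incidents_per_year,
                        years, discount=0.95):
    """Discounted sum of extortion costs over a fixed horizon."""
    return sum(per_incident_cost * incidents_per_year * discount**t
               for t in range(years))

# Capitulating: each ransom is modest, but a reputation as a soft
# target keeps the incident rate high.
capitulate = expected_total_cost(per_incident_cost=1.0,
                                 incidents_per_year=3.0, years=20)

# Resisting: each incident is far costlier, but a reputation for
# refusal makes incidents rare.
resist = expected_total_cost(per_incident_cost=5.0,
                             incidents_per_year=0.2, years=20)

print(capitulate > resist)  # → True: the "hidden cost" dominates here
```

Whether the hidden cost actually dominates depends on the numbers, which is why the naive one-shot analysis and the reputation analysis can point in opposite directions.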
So if Thud cooperated with some less drastic version of Fred’s plan that left a future to care about, he would be causing humans to get a reputation for giving in to extortion, even if the particular extortion he was faced with would not have been prevented by the aliens knowing he probably would not have given in. This is a different argument from the backward causality UDT seems to use in this situation, and AIXI could get it right by simulating the behavior of the next extortionist.
Cute. But perhaps you should provide a link to what you think is the relevant development.
Good idea. Thanks for posting.