How is offering to supply ice cream characterized as “extortion”?
In any case, I was not using the scenario as a reductio against universal unreciprocated altruism. That notion fails under its own weight, due to complete absence of support.
Sorry, I misread your comment and thought it was an extortion scenario similar to the OP. Now that I’ve read it more carefully, it’s not clear to me that we shouldn’t give up the Niobium in order to provide those human workers with ice cream. (ETA: why did you characterize those humans as indentured workers? It would have worked as well if they were just ordinary salaried workers.)
That notion fails under its own weight, due to complete absence of support.
Altruists certainly claim to have support for their stated preferences. Or one could argue that preferences don’t need to have support. What kind of support do you have for liking ice cream, for example?
True enough. My main objection to calling my ice cream negotiating tactic ‘extortion’ is that I really don’t like the “just say ‘No’ to extortion” heuristic. I see no way of definitionally distinguishing extortion from other, less objectionable negotiating stances. Nash’s 1953 cooperative game theory model suggests that it is rational to yield to credible threats. I.e. saying ‘no’ to extortion doesn’t win! An AI that begins with the “just say no” heuristic will self-modify to one that dispenses with that heuristic.
I really don’t like the “just say ‘No’ to extortion” heuristic.
Well you don’t want to signal that you give in to extortion. That would just increase the chances of people attempting extortion against you. Better to signal that you are on a vendetta to stamp out extortion—at your personal expense!!!
There is an idea, surprisingly prevalent on a rationality website, that costless signaling is an effective way to influence the behavior of rational agents. Or in other words, that it is rational to take signalling at face value. I personally doubt that this idea is correct. In any case, I reiterate that I suggest yielding only to credible threats. My own announcements do not change the credibility of any threats available to agents seeking to exploit me.
Maybe if you provided examples of people seeming to say that “costless signaling is an effective way to influence the behavior of rational agents,” we could ask them what they meant, and they might say something like “no signaling is actually costless”.
Statements like “Someone going on record as having opinion X has given decent reason to suppose that person (believes he or she) actually holds opinion X” are interpretable as having either of the two meanings above. Since you didn’t provide examples, I wasn’t persuaded that you are describing people’s ideas, and I suspect ambiguous statements like that are behind our clash of intuitions about what people think.
Maybe if you provided examples of people seeming to say that “costless signaling is an effective way to influence the behavior of rational agents,” we could ask them what they meant, and they might say something like “no signaling is actually costless”.
Ok. That makes some sense. Though I still don’t have a clue as to why you mention “social costs” or “pseudonymous posting”.
So, for the example of people seeming to say that costless signaling is an effective way to influence the behavior of rational agents, I would direct you to the comment to which I was replying. Tim wrote:
Well you don’t want to signal that you give in to extortion. That would just increase the chances of people attempting extortion against you. Better to signal that you are on a vendetta to stamp out extortion—at your personal expense!!!
I interpreted that as advocating costless signaling as a way of influencing the behavior of would-be extortionists. My response to that advocacy: Announcing that I am on a vendetta is cheap talk, and influences no one. No rational agent will believe such self-serving puffery unless I actually experience a level of personal expense commensurate with what I hope to gain by convincing them. Which makes the signaling not just costly, but irrational.
So, for the example of people seeming to say that costless signaling is an effective way to influence the behavior of rational agents, I would direct you to the comment to which I was replying. Tim wrote:
Well you don’t want to signal that you give in to extortion. That would just increase the chances of people attempting extortion against you. Better to signal that you are on a vendetta to stamp out extortion—at your personal expense!!!
I interpreted that as advocating costless signaling as a way of influencing the behavior of would-be extortionists.
You seem to be the only one talking about “costless signaling” here.
I think the hidden cost is that if the signaler is called on the bluff, the signaler will be shown not to be fully committed to his or her pronouncements (and it will be reasonable to infer a good deal more flexibility in them than that).
Generally I think that if someone has an intuition that a case of apparently costless signaling would be valuable, his or her intuition is usually correct, but the intellect hasn’t found the cost of the signal yet. The intellect’s claim that only signaling that has costs is valuable remains accurate, as you say.
Which makes the signaling not just costly, but irrational.
It seems like its irrationality would be contingent on some variables, so it would sometimes actually be rational, costly signalling. Following through on a costly commitment clearly has costs, but why assume the benefits to reputation aren’t greater?
If you say “I will be careful not to betray lessdazed so long as his costly revenge-seeking would be worth it for his reputation,” you run into the paradox that such cases might not exist, any more than “[t]he smallest positive integer not definable in under eleven words” exists (Berry’s Paradox). So long as my actions are best interpretable as being of negative utility, they get a +3 stacking bonus to utility. Of course, I then run into the paradox, because with the bonus I no longer qualify for the bonus!
A well-made RPG would state whether or not the bonus counts towards calculating whether or not one qualifies for it, but Azathoth is a blind idiot god, and for all its advanced graphics and immersive gameplay, RL is not a well-made RPG.
My own announcements do not change the credibility of any threats available to agents seeking to exploit me.
They influence the likelihood of them being made in the first place—by influencing the attacker’s expected payoffs. Especially if it appears as though you were being sincere. Your comment didn’t look much like signalling. I mean, it doesn’t seem terribly likely that someone would deliberately publicly signal that they are more likely than unnamed others to capitulate if threatened with an attempt at extortion.
Credibly signalling resistance to extortion is non-trivial. Most compelling would be some kind of authenticated public track record of active resistance.
I don’t think anybody is suggesting building an explicit “just say ‘No’ to extortion” heuristic into an AI. (I agree we do not have a good definition of “extortion” so when I use the word I use it in an intuitive sense.) We’re trying to find a general decision theory that naturally ends up saying no to extortion (when it makes sense to).
Here’s an argument that “saying ‘no’ to extortion doesn’t win” can’t be the full picture. Some people are more credibly resistant to extortion than others and as a result are less likely to be extorted. We want an AI that is credibly resistant to extortion, if such credibility is possible. Now if other players in the picture are intelligent enough, to the extent of being able to deduce our AI’s decision algorithm, then isn’t being “credibly resistant to extortion” the same as having a decision algorithm that actually says no to extortion?
ETA: Of course the concept of “credibility” breaks down a bit when all agents are reasoning this way. Which is why the problem is still unsolved!
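To make the “deducible decision algorithm” point concrete, here is a minimal sketch in Python. The agent names, payoffs, and thresholds are all invented for illustration, and it deliberately ignores the regress noted in the ETA: the extortionist is simply assumed to be able to run the target’s algorithm.

```python
# Toy model: a would-be extortionist who can simulate the target's decision
# algorithm only bothers to threaten targets that would capitulate.
# All payoffs below are made-up illustrative numbers.

def capitulating_target(threat_cost, demand):
    # Pays the demand whenever carrying out the threat would hurt more.
    return "pay" if demand < threat_cost else "refuse"

def resistant_target(threat_cost, demand):
    # Refuses on principle, whatever the numbers say.
    return "refuse"

def extortionist_threatens(target_algorithm, threat_cost=100, demand=10):
    # The extortionist deduces the target's response by running its algorithm,
    # and only issues the threat if the target would pay.
    predicted = target_algorithm(threat_cost, demand)
    return predicted == "pay"

print(extortionist_threatens(capitulating_target))  # True: worth threatening
print(extortionist_threatens(resistant_target))     # False: no threat is made
```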
I don’t think anybody is suggesting building an explicit “just say ‘No’ to extortion” heuristic into an AI. (I agree we do not have a good definition of “extortion” so when I use the word I use it in an intuitive sense.) We’re trying to find a general decision theory that naturally ends up saying no to extortion (when it makes sense to).
That is pretty incoherent. If you are trying to come up with a general decision theory that wins and also says no to extortion, then you have overdetermined the problem (or will overdetermine it once you supply your definition). If you are predicting that a decision theory that wins will say no to extortion, then it is a rather pointless claim until you supply a definition. Perhaps what you really intend to do is to define ‘extortion’ as ‘that which a winning decision theory says no to’. In which case, Nash has defined ‘extortion’ for you—as a threat which is not credible, in his technical sense.
ETA: Of course the [informal] concept of “credibility” breaks down a bit when all agents are reasoning this way. Which is why the problem is still unsolved!
Why do you say the problem is still unsolved? What issues do you feel were not addressed by Nash in 1953? Where is the flaw in his argument?
Part of the difficulty of discussing this here is that you have now started to use the word “credible” informally, when it also has a technical meaning in this context.
Well, the key concept underlying strong resistance to extortion is reputation management. Once you understand the long-term costs of becoming identified as a vulnerable “mark” by those in the criminal underground, giving in to extortion can start to look a lot less attractive.
Tim, we are completely talking past each other here. To restate my position:
Nash in 1953 characterized rational two-party bargaining with threats. Part of the solution was to make the quantitative distinction between ‘non-credible’ threats (which should be ignored because they cost the threatener so much to carry out that he would be irrational to do so), and ‘credible’ threats—threats which a threatener might rationally commit to carry out.
Since Nash is modeling the rationality of both parties here, it is irrational to resist a credible threat—in fact, to promise to do so is to make a non-credible threat yourself.
Hence, in Nash’s model, costless signaling is pointless if both players are assumed to be rational. Such signaling does not change the dividing line between threats that are credible, and rationally should succeed, and those which are non-credible and should fail.
As for the ‘costly signalling’ that takes place when non-credible threats are resisted—that is already built into the model. And a consequence of the model is that it is a net loss to attempt to resist threats that are credible.
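A toy numeric illustration of the credible/non-credible distinction described above (this is not Nash’s 1953 bargaining solution, just a sketch with invented payoffs and a hypothetical decision rule for the target):

```python
# Toy illustration of the credible/non-credible threat distinction.
# After a refusal, the threatener chooses whether to carry out the threat.
# All payoffs are invented for the example.

def threat_is_credible(payoff_carry_out, payoff_back_down):
    # Credible: carrying out the threat is at least as good for the threatener
    # as backing down, so following through would be rational.
    return payoff_carry_out >= payoff_back_down

def target_best_response(payoff_carry_out, payoff_back_down,
                         loss_if_carried_out, demanded_payment):
    credible = threat_is_credible(payoff_carry_out, payoff_back_down)
    if credible and demanded_payment < loss_if_carried_out:
        return "yield"
    return "refuse"  # ignore non-credible threats, or cheap-to-absorb ones

# Carrying out the threat would cost the threatener 50: non-credible, so refuse.
print(target_best_response(-50, 0, loss_if_carried_out=30, demanded_payment=10))
# The threatener is no worse off following through (a sunk commitment): yield.
print(target_best_response(0, 0, loss_if_carried_out=30, demanded_payment=10))
```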
All of this is made very clear in any good textbook on game theory. It would save us all a great deal of time if you keep your amateur political theorizing to yourself until you read those textbooks.
I am kinda surprised that you are in such a muddle about this—and are willing to patronise me over the issue!
“Don’t negotiate with terrorists” and “don’t give in to extortion” are well-known maxims. As this thread illustrates, you don’t seem to understand why they exist. I do understand. It isn’t terribly complicated. I expect I can explain it to you.
If a government gives in to terrorist demands during a hijacking, it sends a signal to all the other terrorists in the world that the government is vulnerable to extortion. Subsequently the government is likely to face more hijackings.
So… in addition to the obvious cost associated with the immediate demands of the terrorists, there is a hidden cost associated with gaining a reputation for giving in to terrorists. That hidden cost is often huge. Thus the strategy of not giving in to terrorist demands—even if doing so looks attractive on the basis of a naive cost-benefit analysis.
Other forms of extortion exhibit similar dynamics...
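To put rough numbers on that hidden cost, here is a small sketch. The per-incident costs and incident counts below are invented, purely to show the shape of the comparison:

```python
# Toy comparison of "give in" vs "refuse" once reputation effects are included.
# All costs and incident counts are made-up illustrative numbers.

def total_expected_cost(give_in,
                        cost_per_incident_if_yield=20,   # ransom paid each time
                        cost_per_incident_if_refuse=100, # hostages lost, etc.
                        future_incidents_if_yield=10,    # reputation attracts more
                        future_incidents_if_refuse=1):   # reputation deters most
    if give_in:
        return cost_per_incident_if_yield * (1 + future_incidents_if_yield)
    return cost_per_incident_if_refuse * (1 + future_incidents_if_refuse)

print(total_expected_cost(give_in=True))   # 220: cheaper per incident, dearer overall
print(total_expected_cost(give_in=False))  # 200: the naive per-incident analysis misleads
```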
So, in addition to the obvious cost associated with the immediate demands of the terrorists, there is a hidden cost associated with getting a reputation for giving in to terrorists. That hidden cost is often huge. Thus the strategy of not giving in to terrorists.
So if Thud cooperated with some less drastic version of Fred’s plan that left a future to care about, he would be causing humans to get a reputation for giving in to extortion, even if the particular extortion he was faced with would not have been prevented by the aliens knowing he probably would not have given in. This is a different argument from the backward causality UDT seems to use in this situation, and AIXI could get it right by simulating the behavior of the next extortionist.
I’ll give you utility if you give me utility is a trade.
I won’t cause you disutility if you give me utility is extortion.
I don’t think that’s exactly the right distinction. Let’s say you go to your neighbour because he’s being noisy.
Scenario A: He says “I didn’t mean to disturb you, I just love my music loud. But give me 10 dollars, and sure, I’ll turn the volume down.” I’d call that a trade, though it’s still about him not giving you disutility.
Scenario B: He says “Yeah, I do that on purpose, so that I can make people pay me to turn the volume down. It’ll be 10 bucks.” I’d call that extortion.
The difference isn’t between the results of the offer if you accept or reject—the outcomes and their utility for you are the same in each case (loud music, or silence minus 10 dollars).
The difference is that in Scenario B, you wish the other person had never decided to make this offer. It’s not the utilities of your options that are to be compared with each other, but the utility of the timeline where the trade can be made vs the utility of the timeline where the trade can’t be made…
In the Trade scenarios, if you can’t make a trade with the person, he’s still being noisy, and your utility is at its minimum.
In the Extortion scenarios, if you can’t make a trade with the person, he has no reason to be noisy, and your utility is at its maximum.
I’ll probably let someone else transform the above description into equations containing utility functions.
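Here is one rough way to write that comparison down (my own sketch, not the commenter’s; the dollar figures for noise and silence are invented). The test is the one described above: compare B’s utility in the timeline where the offer can be made against the timeline where it can’t.

```python
# Sketch of the "compare the two timelines" test for the neighbour scenarios.
# Invented utilities: 0 for quiet, -30 for noise; the neighbour's price is 10.

U_QUIET, U_NOISY, PRICE = 0, -30, 10

def b_utility(offer_possible, noisy_without_offer):
    # If the offer can be made, the neighbour is noisy and B takes the better
    # of paying for quiet or enduring the noise.
    if offer_possible:
        return max(U_QUIET - PRICE, U_NOISY)
    # If the offer can't be made, the neighbour is noisy only in Scenario A.
    return U_NOISY if noisy_without_offer else U_QUIET

def is_extortion(noisy_without_offer):
    # Extortion, on this test, iff B does better in the no-offer timeline.
    return b_utility(False, noisy_without_offer) > b_utility(True, noisy_without_offer)

print(is_extortion(noisy_without_offer=True))   # Scenario A: False (a trade)
print(is_extortion(noisy_without_offer=False))  # Scenario B: True  (extortion)
```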
The more important part for extortion is that they threaten to go out of their way to cause you harm. Schelling points and default states are probably relevant for the distinction.
You can’t read a payoff table and declare it extortion or trade.
Schelling points and default states are probably relevant for the distinction.
Meh. I hope we can define extortion much simpler than that.
How about “Extortion: Any offer of trade (t) by A to B, where A knows that the likely utility of B would be maximized if A had in advance treated (t) as certainly rejected.”
In short extortion is any offer to you in which you could rationally wish you had clearly precommitted to reject it (and signalled such precommitment effectively), and A knows that.
How about “Extortion: Any offer of trade (t) by A to B, where A knows that the likely utility of B would be maximized if A had in advance treated (t) as certainly rejected.”
In short extortion is any offer to you in which you could rationally wish you had clearly precommitted to reject it (and signalled such precommitment effectively), and A knows that.
Another example.
A and B share an apartment, and so far A has been doing all the household chores even though both A and B care almost equally about a clean house. (Maybe A cares slightly more, so that A’s cleanliness threshold is always reached slightly before B’s threshold, so that A ends up doing the chore every time.)
So one day A gives B an ultimatum: if they do not share household chores equally, A will simply go on strike.
B realizes, too late, that B should have effectively and convincingly pre-committed earlier to never doing household chores, since this way A would never be tempted to offer the ultimatum.
A is aware of all this and breathes a sigh of relief that he made his ultimatum before B made that pre-commitment.
I’m almost convinced my definition is faulty, but not completely yet. In this case, if the offer were sure to be rejected, Alice (A) might move out, or evict Bob (B), or react in some other way that minimizes Bob’s utility, or Alice might just decide to stop doing chores anyway because she prefers a messy but just household to a clean but unjust one.
So precommitment to reject the offer doesn’t necessarily help Bob. But I need to think about this. Upvoting both examples.
How about “Extortion: Any offer of trade (t) by A to B, where A knows that the likely utility of B would be maximized if A had in advance treated (t) as certainly rejected.”
In short extortion is any offer to you in which you could rationally wish you had clearly precommitted to reject it (and signalled such precommitment effectively), and A knows that.
B is threatening to kill his hostage unless a million dollars is deposited in B’s offshore account and B safely arrives outside of legal jurisdiction.
A tells B that if B kills the hostage then A will kill B, but if B lets the hostage go then, in trade, A will not kill B.
B realizes, too late, that B should have set things up so that the hostage would automatically be killed if B didn’t get what he wanted even if B got cold feet late in the game (this could be done by employing a third party whose professional reputation rests on doing as he is initially instructed regardless of later instructions). This would have greatly strengthened B’s bargaining position.
A is aware of all this and breathes a sigh of relief that B did not have sufficient foresight.
Is A an extortionist? He is by the above definition.
A’s actions read like textbook extortion to me, albeit for a good cause. About the only way I can think of to disqualify them would be to impose the requirement that extortion has to be aimed at procuring resources—which might be consistent with its usual sense, but seems pretty tortured.
A is walking down the street minding their own business carrying a purse. B wants what’s in the purse but is afraid that if B tries to snatch the purse, A might cause trouble for B (such as by scratching and kicking B and calling for help). It is implicit in this situation that if B does not bother A, then, in trade, A will not cause trouble for B.
B realizes, too late, that B should have worn something really scary to signal to A that B was committed to being bad, very bad, so that neither kicking and scratching nor calling for help would be likely to be of any use to A. This would have strengthened B’s bargaining position.
A, not being an idiot, is aware of this as a general fact about people, including about B, and breathes a sigh of relief that there aren’t any scary-looking people in sight.
Is A an extortionist? Is A continually extorting good behavior from everyone around A, by being the sort of person who would kick and scratch and call for help if somebody tried to snatch A’s purse, provided that the purse snatcher had not effectively signalled a pre-commitment to snatch the purse regardless of A’s response? A is implicitly extending an offer to everyone, “don’t try to take my purse and, in trade, I won’t kick and scratch and call for help.” A purse snatcher who effectively signals a pre-commitment to reject that offer (and thus to take the purse despite kicking and scratching and calling for help) places themselves in a stronger position in the implicit negotiation.
This seems to follow all the rules of the offered definition of extortion, i.e.:
How about “Extortion: Any offer of trade (t) by A to B, where A knows that the likely utility of B would be maximized if A had in advance treated (t) as certainly rejected.”
In short extortion is any offer to you in which you could rationally wish you had clearly precommitted to reject it (and signalled such precommitment effectively), and A knows that.
Hmm. Interesting edge case, but I think the fact that the second extortion is retaliation, aimed at disarming the first one with proportional retribution, prevents our moral intuition from packaging it under the same label as “extortion”.
If A threatened, in retaliation, to kill B’s mother, or B’s child, or B’s whole village, then I don’t think we would have trouble seeing both of them as extortionists.
Or this scenario:
A: Give me a dollar or I punch you on the nose
B: Withdraw that threat, or I kill your goldfish.
A: Withdraw that threat, or I kill your mother
B: Withdraw that threat, or I genocide your people.
A: Withdraw that threat, or I destroy the universe.
B: Here, have a dollar.
Still, perhaps we can refine the definition further.
I offer a variant on the hostage negotiator here. In this variant, the hostage negotiator is replaced by somebody with a purse, and the hostage taker is replaced by a purse snatcher.
As a point of comparison to the purse snatching scenario, consider the following toy-getting scenario:
Whenever a certain parent takes a certain child shopping, the child throws a tantrum unless the child gets a toy. To map this to the purse snatching scenario (and to the other scenarios), the child is A and the parent is B. If the parent convincingly signals a pre-commitment not to get the child a toy, then the child will not bother throwing a tantrum, realizing that it would be futile. If the parent fails to convincingly signal such a pre-commitment, then the child may see an opportunity to get a toy by throwing a tantrum until he gets a toy. The child throwing the tantrum is in effect offering the parent the following trade: get me a toy, and I will stop throwing a tantrum. On future shopping trips, the child implicitly offers the parent the following trade: get me a toy, and I will refrain from throwing a tantrum.
I would call the child an extortionist but I would not call the person with a purse an extortionist, and the main difference I see is that the child is using the threat of trouble to obtain something which was not already their right to have, while the person with the purse is using the threat of trouble to retain something which is their right to keep.
And what is the distinction between giving utility and not giving disutility? As consequentialists, I thought we were committed to the understanding that they are the same thing.
You seem to be assuming that committing to ‘not giving in to extortion’ will be effective in preventing rational threats from being made and carried out. Why do you assume that? Or, if you are not making that assumption, then how can you claim that you are not also turning down possibly beneficial trades?
You seem to be assuming that committing to ‘not giving in to extortion’ will be effective in preventing rational threats from being made and carried out. Why do you assume that?
Because then you don’t get a reputation in the criminal underground for being vulnerable to extortion—and so don’t face a circling pack of extortionists, each eager for a piece of you.
I see no way of definitionally distinguishing extortion from other, less objectionable negotiating stances.
Well, a simple way would be to use the legal definition of extortion. That should at least help prevent house fires, kidnapping, broken windows and violence.
...but a better definition should not be too difficult—for instance: the set of “offers” which you would rather not be presented with.
What kind of support do you have for liking ice cream, for example?
None at all. But then I don’t claim that it is a universal moral imperative that will be revealed to be ‘my own imperative’ once my brain is scanned, the results of the scan are extrapolated, and the results are weighted in accordance with how “muddled” my preferences are judged to be.
I see, so you’re saying that universal unreciprocated altruism fails as a universal moral imperative, not necessarily as a morality that some people might have. Given that you used the word “crazy” earlier I thought you were claiming that nobody should have that morality.
I think it is easy to imagine naturalists describing some kinds of maladaptive behaviour as “crazy”. The implication would be that the behaviour was being caused by some kind of psychological problem interfering with their brain’s normal operation.
I thought you were claiming that nobody should have that morality.
I do claim that. In two flavors.
Someone operating under that moral maxim will tend to dispense with that maxim as they approach reflective equilibrium.
Someone operating under that ‘moral’ maxim is acting immorally—this operationally means that good people should (i.e. are under a moral obligation to) shun such a moral idiot and make no agreements with him (since he proclaims that he cannot be trusted to keep his commitments).
Part of the confusion between us is that you seem to want the word ‘morality’ to encompass all preferences—whether a preference for chocolate over vanilla, or a preference for telling the truth over lying, or a preference for altruism over selfishness. It is the primary business of metaethics to make the distinction between moral opinions (i.e. opinions about moral issues) and mere personal preferences.
Part of the confusion between us is that you seem to want the word ‘morality’ to encompass all preferences—whether a preference for chocolate over vanilla, or a preference for telling the truth over lying, or a preference for altruism over selfishness.
No, I don’t want that. In fact I do not currently have a metaethical position beyond finding all existing metaethical theories (that I’m aware of) to be inadequate. In my earlier comment I offered two possible lines of defense for altruism, because I didn’t know which metaethics you prefer:
Altruists certainly claim to have support for their stated preferences. Or one could argue that preferences don’t need to have support.
In your reply to that comment you chose to respond to only the second sentence, hence the “confusion”.
Anyway, why don’t you make a post detailing your metaethics, as well as your arguments against “universal unreciprocated altruism”? It’s not clear to me what you’re trying to accomplish by calling people who believe such things (many of whom are very smart and have already seriously reflected on these issues) “crazy” without backing up your claims.
It’s not clear to me what you’re trying to accomplish by calling people who believe such things (many of whom are very smart and have already seriously reflected on these issues) “crazy” without backing up your claims.
I’m not sure why you think I have called anyone crazy. What I said above is that a particular moral notion is crazy.
Perhaps you instead meant to complain that (in the grandparent) I had referred to the persons in question as “moral idiots”. I’m afraid I must plead guilty to that bit of hyperbole.
Anyway, why don’t you make a post detailing your metaethics, as well as your arguments against “universal unreciprocated altruism”?
I am gradually coming to think that there is little agreement here as to what the word metaethics even means. My current understanding is that metaethics is what you do to prepare the linguistic ground so that people operating under different ethical theories and doctrines can talk to each other. Meta-ethics strives to be neutral and non-normative. There are no meta-ethical facts about the world—only definitions that permit discourse and disputation about the facts.
Given this interpretation of “meta-ethics”, it would seem that what you mean to suggest is that I make a post detailing my normative ethics, which would include an argument against “universal unreciprocated altruism” (which I take to be a competing theory of normative ethics).
Luke and/or Eliezer and/or any trained philosopher here: I would appreciate feedback as to whether I finally have the correct understanding of the scope and purpose of meta-ethics.
Given this interpretation of “meta-ethics”, it would seem that what you mean to suggest is that I make a post detailing my normative ethics, which would include an argument against “universal unreciprocated altruism” (which I take to be a competing theory of normative ethics).
I thought you might have certain metaethical views, which might be important for understanding your normative ethics. But yes, I’m mainly interested in hearing about your normative ethics.
How is offering to supply ice cream characterized as “extortion”?
In any case, I was not using the scenario as a reductio against universal unreciprocated altruism. That notion fails under its own weight, due to complete absence of support.
Sorry, I misread your comment and thought it was an extortion scenario similar to the OP. Now that I’ve read it more carefully, it’s not clear to me that we shouldn’t give up the Niobium in order to provide those human workers with ice cream. (ETA: why did you characterize those humans as indentured workers? It would have worked as well if they were just ordinary salaried workers.)
Altruists certainly claim to have support for their stated preferences. Or one could argue that preferences don’t need to have support. What kind of support do you have for liking ice cream, for example?
Your reading wasn’t far off: “in all of these thought experiments” makes your reply remain relevant.
True enough. My main objection to calling my ice cream negotiating tactic ‘extortion’ is that I really don’t like the “just say ‘No’ to extortion” heuristic. I see no way of definitionally distinguishing extortion from other, less objectionable negotiating stances. Nash’s 1953 cooperative game theory model suggests that it is rational to yield to credible threats. I.e. saying ‘no’ to extortion doesn’t win! An AI that begins with the “just say no” heuristic will self-modify to one that dispenses with that heuristic.
Well you don’t want to signal that you give in to extortion. That would just increase the chances of people attempting extortion against you. Better to signal that you are on a vendetta to stamp out extortion—at your personal expense!!!
There is an idea, surprisingly prevalent on a rationality website, that costless signaling is an effective way to influence the behavior of rational agents. Or in other words, that it is rational to take signalling at face value. I personally doubt that this idea is correct. In any case, I reiterate that I suggest yielding only to credible threats. My own announcements do not change the credibility of any threats available to agents seeking to exploit me.
Perhaps what is really being expressed is the belief that social costs are real, and that mere pseudonymous posting has costs.
huh?????
Maybe if you provided examples of people seeming to say that “costless signaling is an effective way to influence the behavior of rational agents,” we could ask them what they meant, and they might say something like “no signaling is actually costless”.
Statements like “Someone going on record as having opinion X has given decent reason to suppose that person (believes he or she) actually holds opinion X” are interpretable as having either of the two meanings above. Since you didn’t provide examples, I wasn’t persuaded that you are describing people’s ideas, and I suspect ambiguous statements like that are behind our clash of intuitions about what people think.
Ok. That makes some sense. Though I still don’t have a clue as to why you mention “social costs” or “pseudonymous posting”.
So, for the example of people seeming to say that costless signaling is an effective way to influence the behavior of rational agents, I would direct you to the comment to which I was replying. Tim wrote:
I interpreted that as advocating costless signaling as a way of influencing the behavior of would-be extortionists. My response to that advocacy: Announcing that I am on a vendetta is cheap talk, and influences no one. No rational agent will believe such self-serving puffery unless I actually experience a level of personal expense commensurate with what I hope to gain by convincing them. Which makes the signaling not just costly, but irrational.
You seem to be the only one talking about “costless signaling” here.
I think the hidden cost is that if the signaler is called on the bluff, the signaler will be shown not to be fully committed to his or her pronouncements (and it will be reasonable to infer a good deal more flexibility in them than that).
Generally I think that if someone has an intuition that a case of apparently costless signaling would be valuable, his or her intuition is usually correct, but the intellect hasn’t found the cost of the signal yet. The intellect’s claim that only signaling that has costs is valuable remains accurate, as you say.
It seems like its irrationality would be contingent on some variables, so it would sometimes actually be rational, costly signalling. Following through on a costly commitment clearly has costs, but why assume the benefits to reputation aren’t greater?
If you say “I will be careful not to betray lessdazed so long as his costly revenge-seeking would be worth it for his reputation,” you run into the paradox that such cases might not exist, any more than “[t]he smallest positive integer not definable in under eleven words” exists (Berry’s Paradox). So long as my actions are best interpretable as being of negative utility, they get a +3 stacking bonus to utility. Of course, I then run into the paradox, because with the bonus I no longer qualify for the bonus!
A well-made RPG would state whether or not the bonus counts towards calculating whether or not one qualifies for it, but Azathoth is a blind idiot god, and for all its advanced graphics and immersive gameplay, RL is not a well-made RPG.
They influence the likelihood of them being made in the first place—by influencing the attacker’s expected payoffs. Especially if it appears as though you were being sincere. Your comment didn’t look much like signalling. I mean, it doesn’t seem terribly likely that someone would deliberately publicly signal that they are more likely than unnamed others to capitulate if threatened with an attempt at extortion.
Credibly signalling resistance to extortion is non-trivial. Most compelling would be some kind of authenticated public track record of active resistance.
I don’t think anybody is suggesting building an explicit “just say ‘No’ to extortion” heuristic into an AI. (I agree we do not have a good definition of “extortion” so when I use the word I use it in an intuitive sense.) We’re trying to find a general decision theory that naturally ends up saying no to extortion (when it makes sense to).
Here’s an argument that “saying ‘no’ to extortion doesn’t win” can’t be the full picture. Some people are more credibly resistant to extortion than others and as a result are less likely to be extorted. We want an AI that is credibly resistant to extortion, if such credibility is possible. Now if other players in the picture are intelligent enough, to the extent of being able to deduce our AI’s decision algorithm, then isn’t being “credibly resistant to extortion” the same as having a decision algorithm that actually says no to extortion?
ETA: Of course the concept of “credibility” breaks down a bit when all agents are reasoning this way. Which is why the problem is still unsolved!
It does what? How so?
“Commit to just saying ‘no’ and proving that when just committing to just saying ‘no’ and proving that wins.”
Perhaps something like that.
That is pretty incoherent. If you are trying to come up with a general decision theory that wins and also says no to extortion, then you have overdetermined the problem (or will overdetermine it once you supply your definition). If you are predicting that a decision theory that wins will say no to extortion, then it is a rather pointless claim until you supply a definition. Perhaps what you really intend to do is to define ‘extortion’ as ‘that which a winning decision theory says no to’. In which case, Nash has defined ‘extortion’ for you—as a threat which is not credible, in his technical sense.
Why do you say the problem is still unsolved? What issues do you feel were not addressed by Nash in 1953? Where is the flaw in his argument?
Part of the difficulty of discussing this here is that you have now started to use the word “credible” informally, when it also has a technical meaning in this context.
My objection to calling the ice cream negotiation tactic ‘extortion’ is that it just totally isn’t. It’s an offer of a trade.
Then it’s a good thing we’ve made developments in our models in the last six decades!
Cute. But perhaps you should provide a link to what you think is the relevant development.
Well, the key concept underlying strong resistance to extortion is reputation management. Once you understand the long-term costs of becoming identified as a vulnerable “mark” by those in the criminal underground, giving in to extortion can start to look a lot less attractive.
Tim, we are completely talking past each other here. To restate my position:
Nash in 1953 characterized rational two-party bargaining with threats. Part of the solution was to make the quantitative distinction between ‘non-credible’ threats (which should be ignored because they cost the threatener so much to carry out that he would be irrational to do so), and ‘credible’ threats—threats which a threatener might rationally commit to carry out.
Since Nash is modeling the rationality of both parties here, it is irrational to resist a credible threat—in fact, to promise to do so is to make a non-credible threat yourself.
Hence, in Nash’s model, costless signaling is pointless if both players are assumed to be rational. Such signaling does not change the dividing line between threats that are credible, and rationally should succeed, and those which are non-credible and should fail.
As for the ‘costly signalling’ that takes place when non-credible threats are resisted—that is already built into the model. And a consequence of the model is that it is a net loss to attempt to resist threats that are credible.
All of this is made very clear in any good textbook on game theory. It would save us all a great deal of time if you keep your amateur political theorizing to yourself until you read those textbooks.
I am kinda surprised that you are in such a muddle about this—and are willing to patronise me over the issue!
“Don’t negotiate with terrorists” and “don’t give in to extortion” are well-known maxims. As this thread illustrates, you don’t seem to understand why they exist. I do understand. It isn’t terribly complicated. I expect I can explain it to you.
If a government gives in to terrorist demands during a hijacking, it sends a signal to all the other terrorists in the world that the government is vulnerable to extortion. Subsequently the government is likely to face more hijackings.
So… in addition to the obvious cost associated with the immediate demands of the terrorists, there is a hidden cost associated with gaining a reputation for giving in to terrorists. That hidden cost is often huge. Thus the strategy of not giving in to terrorist demands—even if doing so looks attractive on the basis of a naive cost-benefit analysis.
Other forms of extortion exhibit similar dynamics...
So if Thud cooperated with some less drastic version of Fred’s plan that left a future to care about, he would be causing humans to get a reputation for giving in to extortion, even if the particular extortion he was faced with would not have been prevented by the aliens knowing he probably would not have given in. This is a different argument from the backward causality UDT seems to use in this situation, and AIXI could get it right by simulating the behavior of the next extortionist.
Good idea. Thanks for posting.
To elaborate a bit:
I’ll give you utility if you give me utility is a trade.
I won’t cause you disutility if you give me utility is extortion.
I don’t think that’s exactly the right distinction. Let’s say you go to your neighbour because he’s being noisy.
Scenario A: He says “I didn’t mean to disturb you, I just love my music loud. But give me 10 dollars, and sure, I’ll turn the volume down.” I’d call that a trade, though it’s still about him not giving you disutility.
Scenario B: He says “Yeah, I do that on purpose, so that I can make people pay me to turn the volume down. It’ll be 10 bucks.” I’d call that extortion.
The difference isn’t between the results of the offer if you accept or reject—the outcomes and their utility for you are the same in each case (loud music, or silence minus 10 dollars).
The difference is that in Scenario B, you wish the other person had never decided to make this offer. It’s not the utilities of your options that are to be compared with each other, but the utility of the timeline where the trade can be made vs the utility of the timeline where the trade can’t be made…
In the Trade scenarios, if you can’t make a trade with the person, he’s still being noisy, and your utility is at its minimum. In the Extortion scenarios, if you can’t make a trade with the person, he has no reason to be noisy, and your utility is at its maximum.
I’ll probably let someone else transform the above description into equations containing utility functions.
Yeah, I was being sloppy.
The more important part for extortion is that they threaten to go out of their way to cause you harm. Schelling points and default states are probably relevant for the distinction.
You can’t read a payoff table and declare it extortion or trade.
Meh. I hope we can define extortion much simpler than that.
How about “Extortion: Any offer of trade (t) by A to B, where A knows that the likely utility of B would be maximized if A had in advance treated (t) as certainly rejected.”
In short extortion is any offer to you in which you could rationally wish you had clearly precommitted to reject it (and signalled such precommitment effectively), and A knows that.
Another example.
A and B share an apartment, and so far A has been doing all the household chores even though both A and B care almost equally about a clean house. (Maybe A cares slightly more, so that A’s cleanliness threshold is always reached slightly before B’s threshold, so that A ends up doing the chore every time.)
So one day A gives B an ultimatum: if they do not share household chores equally, A will simply go on strike.
B realizes, too late, that B should have effectively and convincingly pre-committed earlier to never doing household chores, since this way A would never be tempted to offer the ultimatum.
A is aware of all this and breathes a sigh of relief that he made his ultimatum before B made that pre-commitment.
By the above definition, A is an extortionist.
I’m almost convinced my definition is faulty, but not completely yet. In this case, if the offer were sure to be rejected, Alice (A) might move out, or evict Bob (B), or react in some other way that minimizes Bob’s utility, or Alice might just decide to stop doing chores anyway because she prefers a messy but just household to a clean but unjust one.
So precommitment to reject the offer doesn’t necessarily help Bob. But I need to think about this. Upvoting both examples.
B is threatening to kill his hostage unless a million dollars is deposited in B’s offshore account and B safely arrives outside of legal jurisdiction.
A tells B that if B kills the hostage then A will kill B, but if B lets the hostage go then, in trade, A will not kill B.
B realizes, too late, that B should have set things up so that the hostage would automatically be killed if B didn’t get what he wanted even if B got cold feet late in the game (this could be done by employing a third party whose professional reputation rests on doing as he is initially instructed regardless of later instructions). This would have greatly strengthened B’s bargaining position.
A is aware of all this and breathes a sigh of relief that B did not have sufficient foresight.
Is A an extortionist? He is by the above definition.
A’s actions read like textbook extortion to me, albeit for a good cause. About the only way I can think of to disqualify them would be to impose the requirement that extortion has to be aimed at procuring resources—which might be consistent with its usual sense, but seems pretty tortured.
A is walking down the street minding their own business carrying a purse. B wants what’s in the purse but is afraid that if B tries to snatch the purse, A might cause trouble for B (such as by scratching and kicking B and calling for help). It is implicit in this situation that if B does not bother A, then, in trade, A will not cause trouble for B.
B realizes, too late, that B should have worn something really scary to signal to A that B was committed to being bad, very bad, so that neither kicking and scratching nor calling for help would be likely to be of any use to A. This would have strengthened B’s bargaining position.
A, not being an idiot, is aware of this as a general fact about people, including about B, and breathes a sigh of relief that there aren’t any scary-looking people in sight.
Is A an extortionist? Is A continually extorting good behavior from everyone around A, by being the sort of person who would kick and scratch and call for help if somebody tried to snatch A’s purse, provided that the purse snatcher had not effectively signalled a pre-commitment to snatch the purse regardless of A’s response? A is implicitly extending an offer to everyone, “don’t try to take my purse and, in trade, I won’t kick and scratch and call for help.” A purse snatcher who effectively signals a pre-commitment to reject that offer (and thus to take the purse despite kicking and scratching and calling for help) places themselves in a stronger position in the implicit negotiation.
This seems to follow all the rules of the offered definition of extortion, i.e.:
Hmm. Interesting edge case, but I think the fact that the second extortion is retaliation, aimed at disarming the first one with proportional retribution, prevents our moral intuition from packaging it under the same label as “extortion”.
If A threatened, in retaliation, to kill B’s mother, or B’s child, or B’s whole village, then I don’t think we would have trouble seeing both of them as extortionists.
Or this scenario:
Still, perhaps we can refine the definition further.
I offer a variant on the hostage negotiator here. In this variant, the hostage negotiator is replaced by somebody with a purse, and the hostage taker is replaced by a purse snatcher.
As a point of comparison to the purse snatching scenario, consider the following toy-getting scenario:
Whenever a certain parent takes a certain child shopping, the child throws a tantrum unless the child gets a toy. To map this to the purse snatching scenario (and to the other scenarios), the child is A and the parent is B. If the parent convincingly signals a pre-commitment not to get the child a toy, then the child will not bother throwing a tantrum, realizing that it would be futile. If the parent fails to convincingly signal such a pre-commitment, then the child may see an opportunity to get a toy by throwing a tantrum until he gets a toy. The child throwing the tantrum is in effect offering the parent the following trade: get me a toy, and I will stop throwing a tantrum. On future shopping trips, the child implicitly offers the parent the following trade: get me a toy, and I will refrain from throwing a tantrum.
I would call the child an extortionist but I would not call the person with a purse an extortionist, and the main difference I see is that the child is using the threat of trouble to obtain something which was not already their right to have, while the person with the purse is using the threat of trouble to retain something which is their right to keep.
And what is the distinction between giving utility and not giving disutility? As consequentialists, I thought we were committed to the understanding that they are the same thing.
The distinction is that I can commit to not giving in to extortion without also turning down possibly beneficial trades.
You seem to be assuming that committing to ‘not giving in to extortion’ will be effective in preventing rational threats from being made and carried out. Why do you assume that? Or, if you are not making that assumption, then how can you claim that you are not also turning down possibly beneficial trades?
Because then you don’t get a reputation in the criminal underground for being vulnerable to extortion—and so don’t face a circling pack of extortionists, each eager for a piece of you.
Well, a simple way would be to use the legal definition of extortion. That should at least help prevent house fires, kidnapping, broken windows and violence.
...but a better definition should not be too difficult—for instance: the set of “offers” which you would rather not be presented with.
None at all. But then I don’t claim that it is a universal moral imperative that will be revealed to be ‘my own imperative’ once my brain is scanned, the results of the scan are extrapolated, and the results are weighted in accordance with how “muddled” my preferences are judged to be.
I see, so you’re saying that universal unreciprocated altruism fails as a universal moral imperative, not necessarily as a morality that some people might have. Given that you used the word “crazy” earlier I thought you were claiming that nobody should have that morality.
I think it is easy to imagine naturalists describing some kinds of maladaptive behaviour as “crazy”. The implication would be that the behaviour was being caused by some kind of psychological problem interfering with their brain’s normal operation.
I do claim that. In two flavors.
Someone operating under that moral maxim will tend to dispense with that maxim as they approach reflective equilibrium.
Someone operating under that ‘moral’ maxim is acting immorally—this operationally means that good people should (i.e. are under a moral obligation to) shun such a moral idiot and make no agreements with him (since he proclaims that he cannot be trusted to keep his commitments).
Part of the confusion between us is that you seem to want the word ‘morality’ to encompass all preferences—whether a preference for chocolate over vanilla, or a preference for telling the truth over lying, or a preference for altruism over selfishness. It is the primary business of metaethics to make the distinction between moral opinions (i.e. opinions about moral issues) and mere personal preferences.
No, I don’t want that. In fact I do not currently have a metaethical position beyond finding all existing metaethical theories (that I’m aware of) to be inadequate. In my earlier comment I offered two possible lines of defense for altruism, because I didn’t know which metaethics you prefer:
In your reply to that comment you chose to respond to only the second sentence, hence the “confusion”.
Anyway, why don’t you make a post detailing your metaethics, as well as your arguments against “universal unreciprocated altruism”? It’s not clear to me what you’re trying to accomplish by calling people who believe such things (many of whom are very smart and have already seriously reflected on these issues) “crazy” without backing up your claims.
I’m not sure why you think I have called anyone crazy. What I said above is that a particular moral notion is crazy.
Perhaps you instead meant to complain that (in the grandparent) I had referred to the persons in question as “moral idiots”. I’m afraid I must plead guilty to that bit of hyperbole.
I am gradually coming to think that there is little agreement here as to what the word metaethics even means. My current understanding is that metaethics is what you do to prepare the linguistic ground so that people operating under different ethical theories and doctrines can talk to each other. Meta-ethics strives to be neutral and non-normative. There are no meta-ethical facts about the world—only definitions that permit discourse and disputation about the facts.
Given this interpretation of “meta-ethics”, it would seem that what you mean to suggest is that I make a post detailing my normative ethics, which would include an argument against “universal unreciprocated altruism” (which I take to be a competing theory of normative ethics).
Luke and/or Eliezer and/or any trained philosopher here: I would appreciate feedback as to whether I finally have the correct understanding of the scope and purpose of meta-ethics.
I thought you might have certain metaethical views, which might be important for understanding your normative ethics. But yes, I’m mainly interested in hearing about your normative ethics.