Blackmailing is a class of situations similar to Counterfactual Mugging, where you are willing to sacrifice utility in the actual world in order to make that world less probable, so that the counterfactual worlds (which have higher utility) gain as much probability as possible. This improves overall expected utility even as the utility of the actual world goes down.
Or, simply: you are being blackmailed when you wish this weren’t happening, and the correct actions are those that make this reality as improbable as possible.
(In Counterfactual Mugging, you sacrifice utility in the actual world to improve the utility of the counterfactual world; in blackmail, you do the same to improve the counterfactual world’s probability.)
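To make the bookkeeping explicit (the symbols here are mine, purely illustrative): say the actual world has probability $p$ and utility $U_a$, and the counterfactual world has utility $U_c > U_a$. Refusing to comply costs a further $c$ in the actual world but drives its probability down to $p' < p$. The change in expected utility is

$$\bigl[p'(U_a - c) + (1-p')\,U_c\bigr] - \bigl[p\,U_a + (1-p)\,U_c\bigr] = (p - p')(U_c - U_a) - p'c,$$

so the sacrifice pays exactly when the probability mass shifted to the better world outweighs the residual local cost.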
This definition is too broad. It fits the person doing the blackmailing (in a world where you reject my threat, I will act against my local best interest and blow up the bombs) just as well as the person being blackmailed (in a world where you have precommitted to bomb me, I will act against my local self-interest and defy you). It also fits many types of negotiation over deals and the like.
You omit some counterfactuals by framing them as lying outside the scope of the game. If you bring them back in, the pattern no longer fits. For example, the blackmailer can decide not to blackmail on both sides of the victim’s decision to give in, so the utility of the counterfactuals outside the situation where the blackmailer blackmails and the victim doesn’t give in is still under the blackmailer’s control, which it shouldn’t be according to the pattern I proposed.
I don’t quite see your point. Take a nuclear blackmailer: he follows the same pattern, committing to a locally negative course (blowing up nukes that will doom them both) so that the probability of that world is diminished and the probability of the world where his victim gives in goes up. How does this not follow your pattern?
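A minimal numeric sketch of what I mean (the payoffs and the victim’s compliance probabilities are hypothetical, picked only to show the direction of the effect):

```python
# A toy calculation of the nuclear blackmailer's side of the pattern.
# All payoffs and probabilities are hypothetical illustrations.

# Blackmailer's utility for each outcome:
U_GIVE_IN = 10     # victim complies
U_BLOW_UP = -100   # threat carried out: dooms both
U_BACK_DOWN = 0    # victim refuses, blackmailer folds

def blackmailer_eu(p_give_in, committed):
    """Expected utility given the victim's probability of giving in
    and whether the blackmailer is committed to carrying out the
    threat on refusal."""
    on_refusal = U_BLOW_UP if committed else U_BACK_DOWN
    return p_give_in * U_GIVE_IN + (1 - p_give_in) * on_refusal

# Without commitment the threat is empty, so the victim rarely complies:
print(blackmailer_eu(p_give_in=0.2, committed=False))   # 2.0

# Commitment makes the refusal-world terrible for the blackmailer too,
# but that is precisely what drains probability out of it:
print(blackmailer_eu(p_give_in=0.95, committed=True))   # 4.5
```

The committed course is locally worse in the refusal-world (−100 rather than 0), yet it wins in expectation exactly because it makes that world improbable.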
You assume causal screening off, but humans think acausally, with no regard for observational impossibility, which is much more apparent in games. If, after you’re in the situation of having unsuccessfully blackmailed the other, you can still consider not blackmailing (in particular, if blackmail probably doesn’t work), then you have a decision that changes the utility of the collection of counterfactuals outside the current observations, and the blackmailed (by my definition) are not granted such a decision. The blackmailed must only be able to manipulate the probability of counterfactuals, not their utility. (That’s my guess as to why our brains label this situation “not getting blackmailed”.)
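In symbols (my notation, not anything we’ve fixed so far): the blackmailed are those whose decision $d$ enters expected utility

$$E[U \mid d] = \sum_w P(w \mid d)\,U(w)$$

only through the probabilities $P(w \mid d)$, with every $U(w)$ held fixed. A blackmailer who can still acausally choose not to blackmail has a decision that also rewrites $U(w)$ for worlds outside the current observations, which is exactly what the definition withholds from the blackmailed.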
I need examples to get any further in understanding. Can you give a toy model that is certainly blackmail according to your definition, so that I can contrast it with other situations?
I don’t understand. Simple blackmail is certainly blackmail. The problem here seemed to be with games bigger than that; why ask about simple blackmail, which you surely already understood from my first description?