I don’t quite see your point. If you take a nuclear blackmailer, then it follows the same pattern: he is committing to a locally negative course (blowing up nukes that will doom them both) so that the probability of that world is diminished, and the probability of the world where his victim gives in goes up. How does this not follow your pattern?
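For concreteness, here is a minimal sketch of the pattern I have in mind, with payoff numbers I am making up purely for illustration (none of these values come from your description):

```python
# Toy nuclear-blackmail game with made-up payoffs. The point is only that a
# credible commitment shifts probability mass away from the "mutual doom"
# world and toward the "victim gives in" world.

# Victim's payoff in each counterfactual world.
GIVE_IN = -5      # victim concedes to the blackmailer
REFUSE = 0        # victim refuses and the blackmailer backs down
DETONATE = -100   # victim refuses and the blackmailer carries out the threat

def victim_best_response(p_detonate_if_refused):
    """Compare refusing (risking detonation) against giving in."""
    refuse_value = ((1 - p_detonate_if_refused) * REFUSE
                    + p_detonate_if_refused * DETONATE)
    return "give in" if GIVE_IN > refuse_value else "refuse"

# Without commitment the threat is empty, so refusal is safe; with a credible
# commitment, refusal becomes the worse option and the victim gives in.
print(victim_best_response(0.0))  # -> "refuse"
print(victim_best_response(0.9))  # -> "give in"
```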
You assume causal screening off, but humans think acausally, with no regard for observational impossibility, and this is much more apparent in games. If, once you are in the situation of having unsuccessfully blackmailed the other, you can still consider not blackmailing (in particular, if blackmail probably doesn’t work), then you get a decision that changes the utility of the collection of counterfactuals outside the current observations, which blackmailers (by my definition) are not granted. Blackmailers are only able to manipulate the probability of counterfactuals, not their utility. (That’s my guess as to why our brains label this situation “not getting blackmailed”.)
I need examples to get any further in understanding. Can you give a toy model that is certainly blackmail according to your definition, so that I can contrast it with other situations?
I don’t understand. Simple blackmail is certainly blackmail. The problem here seemed to be with games bigger than that; why do you ask about simple blackmail, which you certainly already understood from my first description?
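For concreteness, here is a minimal toy model of what I mean by simple blackmail, with numbers I am inventing for illustration. The counterfactual worlds and their utilities are fixed in advance; the blackmailer’s strategy only moves probability between them:

```python
# Simple blackmail as probability manipulation only (illustrative numbers).
# The blackmailer's choice of commitment never edits a payoff; it only
# changes which world the refusing victim should expect.

WORLDS = {               # victim's utility in each counterfactual world
    "pay up": -5,
    "refuse, no leak": 0,
    "refuse, leak": -50,
}

def refusal_lottery(commitment):
    """Distribution over worlds if the victim refuses, for a blackmailer
    committed to leaking with probability `commitment`."""
    return {"refuse, no leak": 1 - commitment, "refuse, leak": commitment}

def expected_utility(lottery):
    return sum(p * WORLDS[w] for w, p in lottery.items())

# A weak commitment leaves refusal attractive; a strong one makes paying up
# (-5) the better option, even though no utility was changed anywhere.
print(expected_utility(refusal_lottery(0.05)))  # -2.5: refusing beats paying
print(expected_utility(refusal_lottery(0.50)))  # -25.0: paying up beats refusing
```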