The person blackmailing you doesn’t have the option of having the FSM grant them $1000.
They do have the option of not blackmailing you.
Just because they are blackmailing you doesn’t make their not blackmailing you impossible. If they had wanted not to blackmail you, they wouldn’t be blackmailing you.
The whole point of precommitting not to give in to blackmail, and of not negotiating with terrorists, is that they have the option to do nothing; if you’re not going to give in, they’re better off sticking with that option.
So, you precommit not to give in, and this decreases the chance that you’ll be threatened in the first place.
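To make the structure of that argument concrete, here is a toy expected-value sketch in Python. All probabilities and dollar amounts are hypothetical, chosen only to illustrate why a credible precommitment can win overall even though paying is locally cheaper than letting the threat be carried out:

```python
# Toy model of the precommitment argument (all numbers hypothetical).

def victim_expected_loss(p_threatened: float, p_give_in: float,
                         ransom: float = 1000.0, harm: float = 5000.0) -> float:
    """Victim's expected loss: if threatened, they either pay the
    ransom or suffer the harm of the threat being carried out."""
    loss_if_threatened = p_give_in * ransom + (1 - p_give_in) * harm
    return p_threatened * loss_if_threatened

# Once threatened, paying $1000 beats eating a $5000 harm -- which is
# exactly why only a *pre*commitment helps: it changes whether you get
# threatened at all, not what is locally best once you already are.
loss_no_commit = victim_expected_loss(p_threatened=0.9, p_give_in=0.9)
loss_commit = victim_expected_loss(p_threatened=0.05, p_give_in=0.0)

print(round(loss_no_commit, 2))  # -> 1260.0
print(round(loss_commit, 2))     # -> 250.0
assert loss_commit < loss_no_commit
```

The particular numbers don’t matter; the point is that lowering the chance of being threatened in the first place dominates any within-threat savings from giving in.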
The question is, what’s the difference between the two, formally? Neither actually happened, both are counterfactual. (The assumption is that you are already facing a blackmail attempt, trying to decide whether to give in.)
This points at a significant, surprising conclusion in decision theory (at least of the UDT style): which action is correct depends on how you reason about logically impossible situations, so it’s important to reason about those situations correctly. But it’s still not clear where the criteria for the correctness of such reasoning should come from.
One is a case where a precommitment makes a difference, the other isn’t.
Had you convincingly precommitted not to give in to blackmail,* you would not have been blackmailed.
Had you convincingly precommitted to getting the FSM to grant your blackmailer $1000, the FSM still wouldn’t exist.
*(Which is not an impossible counterfactual;+ it’s something that could have happened, with only relatively minor changes to the world.)
+[unless you want to define “impossible” such that anything which doesn’t happen was impossible, at which point it’s not an unpossible counterfactual, and I’m annoyed :p]
A logically impossible situation is one which couldn’t happen in any logically consistent world. There are plenty of logically consistent worlds in which the person blackmailing you instead doesn’t.
So, it’s definitely not logically impossible. You could call it impossible (though, as above, that non-standard usage would irritate me), but it’s not logically impossible.
Obviously, the question is why: what feature allows you to make that distinction?
The open question is how to reason about these situations and know to distinguish them in such reasoning.
“Worlds” can’t be logically consistent or inconsistent; at least, it’s not clear what the referent of the term “inconsistent world” is, other than “no information”.
And again, why would one care about the existence of some world where something is possible, if it’s not the world one wants to control? If the definition of what you care about is included, then the facts that place a situation in contradiction with that definition make the result inconsistent.
Well, in one case there is a set of alterations you could make to your past self’s mind that would change the events.
In the other, there isn’t.
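One way to cash this out (a hypothetical sketch, not anyone’s formal proposal) is to model counterfactuals as interventions on the alterable facts about your past self, and check which outcomes any such intervention can reach:

```python
# Toy "surgery on your past self" model of the distinction.
# Outcomes are a function of a few alterable facts about your past
# self; a counterfactual is an intervention on those facts.

def world(precommitted_no_ransom: bool) -> dict:
    # Being blackmailed depends (causally) on your commitments...
    blackmailed = not precommitted_no_ransom
    # ...but no fact about your mind makes the FSM exist.
    fsm_exists = False
    return {"blackmailed": blackmailed, "fsm_exists": fsm_exists}

# Actual world: no precommitment, so you get blackmailed.
actual = world(precommitted_no_ransom=False)
assert actual["blackmailed"] is True

# Counterfactual intervention: an alteration to your past self that
# changes the events -- so this counterfactual is "possible".
counterfactual = world(precommitted_no_ransom=True)
assert counterfactual["blackmailed"] is False

# No setting of the alterable facts yields fsm_exists == True.
assert all(not world(p)["fsm_exists"] for p in (False, True))
```

In this toy model, "not being blackmailed" is reachable by some intervention, while "the FSM grants $1000" is reachable by none; that is the asymmetry the two precommitment counterfactuals turn on.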
Because it allows you to consistently reason about cause and effect efficiently.
If it’s not about the effect in the actual world, why is it relevant?
If I ask “What will happen if I don’t attempt to increase my rationality?”, I’m reasoning about counterfactuals.
Is that not about cause and effect in the real world?
Counterfactuals ARE about the actual world. They’re a way of analysing the chains of cause and effect.
If you can’t reason about cause and effect (and given your inability to understand why precommitting can’t bring the FSM into existence, I get the impression you’re having trouble there), you need tools. Counterfactuals are a tool for reasoning about cause and effect.
Only one of possible-to-not-blackmail and his-noodliness-exists is consistent with the evidence, to very high probability.
Worlds, in the Tegmark-ey sense of a collection of rules and initial conditions, can quite easily be consistent or inconsistent.
You seem to be beating a confusing retreat here. I bet there’s a better tack to take.
Couldn’t you also convincingly precommit to accept the corresponding positive-sum trade?
Yes. But why would you need to? In the positive-sum trade scenario, you’re gaining from the trade, so precommitting to accept it is unnecessary.
If you mean that I could precommit to accepting only extremely favourable terms: well, if I do that, they’ll choose someone else to trade with, just as the threatener would choose someone else to threaten.
Them choosing to trade with someone else is bad for me. The threatener choosing someone else to threaten is good for me.
That is, in many ways, the most important distinction between the scenarios: I want the threatener to pick someone else, and I want the trader to pick me.
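A minimal payoff sketch of that asymmetry (the numbers are hypothetical; only their signs matter):

```python
# Hypothetical payoffs for being the one "picked" in each scenario.
trade = {"picked": 50, "not_picked": 0}      # positive-sum: you share the surplus
threat = {"picked": -1000, "not_picked": 0}  # you pay the ransom (or worse)

# Hence the opposite preferences about being selected:
want_trader_to_pick_me = trade["picked"] > trade["not_picked"]
want_threatener_to_pick_me = threat["picked"] > threat["not_picked"]

print(want_trader_to_pick_me)      # -> True
print(want_threatener_to_pick_me)  # -> False
```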
The blackmailer has the option of backing down at any point, and letting you go for free. It may be unlikely, but it’s not logically impossible.
“Give me $1000 or I’ll blow up your car!”
“I have a longstanding history of not negotiating with terrorists. In fact, last month someone slashed my tires because I wouldn’t give them $20. Check the police blotter if you don’t believe me.”
“Oh, alright. I’ll just take my bomb and go hassle someone more tractable.”
Assume it is, as part of the problem statement. Allow only agent-consistency (the agent can’t prove otherwise) of the possibility that the other player doesn’t blackmail, without allowing actual logical consistency of that event. Also, assume that our agent has actually observed that the other decided to blackmail, and that there is no possibility of causal negotiation.
(This helps to remove the wiggle-room in foggy reasoning about decision-making.)