The question is, what’s the difference between the two, formally?
One is a case where a precommitment makes a difference, the other isn’t.
Had you convincingly precommitted not to give in to blackmail*, you would not have been blackmailed.
Had you convincingly precommitted to getting the FSM to grant your blackmailer $1000, the FSM still wouldn’t exist.
*(which is not an impossible counterfactual+; it’s something that could have happened, with only relatively minor changes to the world.)
+[unless you want to define “impossible” such that anything which doesn’t happen was impossible, at which point it’s now an unpossible counterfactual, and I’m annoyed :p]
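To make the asymmetry concrete, here is a toy sketch of why precommitment matters in the blackmail case: the blackmailer acts on a prediction of your policy, so changing the policy changes whether you get blackmailed at all. All names and payoff numbers are invented purely for illustration.

```python
# Toy model of the blackmail game. The blackmailer decides based on a
# prediction of the victim's policy, so the policy causally affects
# whether any blackmail happens. Payoffs are made up.

def blackmailer_acts(victim_policy):
    """The blackmailer only bothers if they predict you'd pay."""
    return victim_policy == "pay"

def victim_payoff(victim_policy):
    if not blackmailer_acts(victim_policy):
        return 0            # never blackmailed in the first place
    if victim_policy == "pay":
        return -1000        # blackmailed, and you pay up
    return -5000            # blackmailed, refuse, threat carried out

# The convincingly precommitted refuser is never blackmailed at all:
assert victim_payoff("refuse") == 0
assert victim_payoff("pay") == -1000
```

No analogous model exists for the FSM case: there is no function from your policy to “the FSM exists”, which is the whole point of the distinction.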
Which action is correct depends on how you reason about logically impossible situations.
A logically impossible situation is one which couldn’t happen in any logically consistent world. There are plenty of logically consistent worlds in which the person blackmailing you instead doesn’t.
So, it’s definitely not logically impossible. You could call it impossible (though, as above, that non-standard usage would irritate me) but it’s not logically impossible.
Obviously, the question is why: what feature allows you to make that distinction?
The open question is how to reason about these situations in a way that distinguishes them.
“Worlds” can’t be logically consistent or inconsistent; at least, it’s not clear what the referent of the term “inconsistent world” would be, other than “no information”.
And again, why would one care about the existence of some world where something is possible, if it’s not the world one wants to control? If the definition of what you care about is included, the facts that place a situation in contradiction with that definition make the result inconsistent.
If it’s not about the effect in the actual world, why is it relevant?
If I ask “What will happen if I don’t attempt to increase my rationality” I’m reasoning about counterfactuals.
Is that not about cause and effect in the real world?
Counterfactuals ARE about the actual world. They’re a way of analysing the chains of cause and effect.
If you can’t reason about cause and effect (and given your inability to understand why precommitting can’t bring the FSM into existence, I get the impression you’re having trouble there) you need tools. Counterfactuals are a tool for reasoning about cause and effect.
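As a sketch of what that tool looks like: treat the world as a tiny structural model, each variable a function of its parents, and evaluate a counterfactual by overriding one input and recomputing everything downstream. The variables and numbers below are invented purely for illustration.

```python
# Minimal sketch of counterfactual reasoning as a cause-and-effect
# tool: a toy structural model with one intervenable input. The
# thresholds and effect sizes are made up.

def run_world(practice_rationality):
    # Each downstream variable is a function of its causal parents.
    decision_quality = 0.5 + (0.3 if practice_rationality else 0.0)
    good_outcomes = decision_quality > 0.6
    return good_outcomes

actual = run_world(practice_rationality=True)
counterfactual = run_world(practice_rationality=False)  # "what if I hadn't?"

# The counterfactual is *about the actual world*: it isolates the
# causal contribution of one variable by varying it, holding the
# rest of the model fixed.
assert actual is True and counterfactual is False
```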
Well, in one case, there is a set of alterations you could make to your past self’s mind that would change the events.
In the other, there isn’t.
Because it allows you to consistently reason about cause and effect efficiently.
Only one of possible-to-not-blackmail and his-noodliness-exists is consistent with the evidence, to very high probability.
Worlds, in the Tegmark-ey sense of a collection of rules and initial conditions, can quite easily be consistent or inconsistent.
You seem to be beating a confusing retreat here. I bet there’s a better tack to take.
Couldn’t you also convincingly precommit to accept the corresponding positive-sum trade?
Yes. But why would you need to? In the positive-sum trade scenario, you’re gaining from the trade, so precommitting to accept it is unnecessary.
If you mean that I could precommit to accepting only extremely favourable terms: well, if I do that, they’ll choose someone else to trade with, just as the threatener would choose someone else to threaten.
Them choosing to trade with someone else is bad for me. The threatener choosing someone else to threaten is good for me.
/\ That is, in many ways, the most important distinction between the scenarios. I want the threatener to pick someone else. I want the trader to pick me.
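A toy model of that selection asymmetry (made-up payoffs): both the threatener and the trader pick whichever counterparty their approach works on, but being picked is bad in one case and good in the other, so the best policy refuses threats while accepting fair trades.

```python
# Toy illustration of the selection asymmetry between threats and
# positive-sum trades. All payoff numbers are invented.

def threatener_picks_me(i_give_in_to_threats):
    return i_give_in_to_threats          # threateners seek soft targets

def trader_picks_me(i_accept_fair_trades):
    return i_accept_fair_trades          # traders seek willing partners

def my_payoff(give_in, accept_trades):
    payoff = 0
    if threatener_picks_me(give_in):
        payoff -= 100                    # being threatened costs me
    if trader_picks_me(accept_trades):
        payoff += 100                    # being traded with gains me
    return payoff

# Refusing threats while accepting fair trades dominates:
assert my_payoff(give_in=False, accept_trades=True) == 100
assert my_payoff(give_in=True, accept_trades=False) == -100
```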