To state the standard, boring, causality-abiding, metaphysics-free argument against This Type Of Reasoning About Decision-Theoretical Blackmail:
If the “rational” strategy in response to blackmail is to pay, and by paying you give the blackmailer evidence that you are “rational”, then a rational blackmailer has literally no reason not to blackmail you again and again. If you think that paying in response to blackmail is rational, why would you ever change your strategy? A rational blackmailer won’t ask you for $1000 the second time; they will calculate exactly how much you value your life and demand as much as you can pay, and when you go broke, they will force you to borrow money and sell yourself into slavery, until you decide that such a life isn’t worth much and realize that you would have been better off telling the blackmailer “let’s die in a fire together”, spending your last day with your family and friends, and bequeathing your money (the money you actually sent to the blackmailer) to some charity that does things you value. So rational agents don’t pay the blackmailer the very first time, because they don’t have moments of “I would have been better off making a different decision based on exactly the same information”.
Note: there are no arguments here that depend on non-standard metaphysics! Just rational reasoning and its consequences.
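The argument above can be sketched as a toy model: a blackmailer who predicts the victim’s policy only bothers blackmailing payers, so a committed refuser is never blackmailed at all, while a payer faces escalating repeat demands. All the numbers and the escalation rule here are illustrative assumptions, not anything from the comment above.

```python
# Toy sketch of the iterated-blackmail argument. All numbers are
# illustrative assumptions.

WEALTH = 100_000        # assumed: victim's total resources
FIRST_DEMAND = 1_000    # assumed: the blackmailer's opening demand

def outcome(policy):
    """Return (victim_loss, blackmailer_gain) given the victim's policy.

    policy: 'pay'    -- always pays when blackmailed
            'refuse' -- never pays, accepting the threatened harm
    """
    if policy == "refuse":
        # Predicting refusal, the blackmailer gains nothing by
        # threatening, so never starts; the victim loses nothing.
        return 0, 0
    # A payer signals compliance, inviting escalating repeat demands
    # until the victim's entire wealth is extracted.
    extracted, demand = 0, FIRST_DEMAND
    while extracted < WEALTH:
        demand = min(demand, WEALTH - extracted)
        extracted += demand
        demand *= 2  # assumed escalation toward the victim's full valuation
    return extracted, extracted

print(outcome("refuse"))  # → (0, 0)
print(outcome("pay"))     # → (100000, 100000): the payer is fully drained
```

The point of the sketch is the comparison, not the specific escalation schedule: under any schedule that keeps extracting as long as the victim keeps paying, the “always pay” policy loses everything, while the credible refuser is never approached.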
Agreed, thanks for stating the boring solution to actual blackmail that works for all decision theories.
You can modify the blackmail hypothetical to include that the universe is configured such that the blackmail curse only works once.
Or you can modify it such that the curse destroys the universe, so that there isn’t a charity you can bequeath to.
At some point hypotheticals become so unlikely that agents should figure out they’re in a hypothetical and act accordingly.