You can say: “Screw it, monsters from the future don’t dictate my actions, period.” This is expected to make any such pre-commitment to punish you pointless, as the threat no longer affects your behavior.
As someone mentioned, it’s like playing chicken against a remotely controlled car on a collision course with yours: you have everything to lose, while the opponent’s costs are much smaller. But if you never, ever chicken out, the opponent loses slightly and gains nothing from such a strategy. Therefore, if it has a high opinion of your willpower, it’s not going to choose that strategy.
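To make that asymmetry concrete, here is a minimal sketch with made-up payoff numbers (none of these values come from the discussion above; only their ordering matters): against someone who never swerves, committing to the collision course is strictly worse for the blackmailer than doing nothing.

```python
# A toy sketch of the asymmetric chicken game described above.
# All payoff values are invented for illustration; only their ordering matters:
# the human's crash cost is huge, the remote-controlled car's crash cost is small,
# and committing to the collision course gains the opponent nothing if the human
# never swerves.

HUMAN_CRASH = -100    # you have everything to lose in a collision
CAR_CRASH = -1        # the opponent's cost is much less
COMPLY_COST = -10     # what you give up by chickening out (complying)
BLACKMAIL_GAIN = 10   # what the opponent gains if you comply

def payoffs(human_swerves: bool, car_commits: bool) -> tuple[int, int]:
    """Return (your payoff, opponent's payoff) for one round."""
    if car_commits and not human_swerves:
        return HUMAN_CRASH, CAR_CRASH       # collision: both lose, you lose far more
    if car_commits and human_swerves:
        return COMPLY_COST, BLACKMAIL_GAIN  # the threat worked
    return 0, 0                             # no threat made, nothing happens

# Against someone who never chickens out, committing is strictly worse for the
# opponent than not committing (-1 vs 0), so a blackmailer who believes in your
# willpower has no reason to pick that strategy.
print(payoffs(human_swerves=False, car_commits=True))   # (-100, -1)
print(payoffs(human_swerves=False, car_commits=False))  # (0, 0)
```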
Well, if the FAI knows that you thought about this but then rejected it, deliberately trying to make that pre-commitment pointless, that’s not a reason for it not to punish you. It’s like burning a blackmail letter: if you read the letter before burning it, and the blackmailer knows that, he will still carry out the threat.
The chicken game is similar: if I knew that the opponent would punish me for not chickening out, and then deliberately changed myself so that I no longer knew this, the opponent would still punish me, because at the moment I altered myself I was deliberately choosing not to chicken out.
Also, creating FAI is in my best interest, so I’d want to chicken out even if I knew the opponent would chicken out as well. The only cases in which blackmailing is useless are if I always chicken out anyway (i.e. work towards FAI), or if it doesn’t influence my actions because I’m already so altruistic that I would push for FAI regardless of my personal gains and losses. But we are humans, after all, so it probably will influence them.