It seems to me that your argument proves too much.
Let’s set aside this specific example and consider something more everyday: making promises. It is valuable to be able to make promises that others will believe, even when they are promises to do something that (once the relevant situation arises) you will strongly prefer not to do.
Suppose I want a $1000 loan, with $1100 to be repaid one year from now. My counterparty Bob has no trust in the legal system, police, etc., and expects that next year I will be somewhere where he can’t easily find me and force me to pay up. But I really need the money. Fortunately, Bob knows some mad scientists and we agree to the following: I will have implanted in my body a device that will kill me if 366 days from now I haven’t paid up. I get the money. I pay up. Nobody dies. Yay.
I hope we are agreed that (granted the rather absurd premises involved) I should be glad to have this option, even though in the case where I don’t pay up it kills me.
Revised scenario: Bob knows some mad psychologists who, by some combination of questioning, brain scanning, etc., are able to determine very reliably what future choices I will make in any given situation. He also knows that in a year’s time I might (but with extremely low probability) be in a situation where I can only save my life at the cost of the $1100 that I owe him. He has no risk tolerance to speak of and will not lend me the money if in that situation I would choose to save my life and not give him the money.
Granted these (again absurd) premises, do you agree with me that it is to my advantage to have the sort of personality that can promise to pay Bob back even if it literally kills me?
It seems to me that:

1. Your argument in this thread would tell me, a year down the line and in the surprising situation that I do in fact need to choose between Bob’s money and my life, “save your life, obviously”.
2. If my personality were such that I would do as you advise in that situation, then Bob will not lend me the money. (Which may in fact mean that in that unlikely future situation I die anyway.)
3. Your reasons for saying “FDT recommends knowingly choosing to burn to death! So much the worse for FDT!” are equally reasons to say “Being someone who can make and keep this sort of promise means knowingly choosing to pay up and die! So much the worse for being that sort of person!”.
4. Being that sort of person is not in fact worse, even though there are situations in which it leads to a worse outcome.
5. There is no version of “being that sort of person” that lets you just decide to live, in that unlikely situation, because paying up at the cost of your own life is what “being that sort of person” means.
6. To whatever extent I get to choose whether to be that sort of person, I have to make the decision before I know whether I’m going to be in that unlikely situation. And, to whatever extent I get to choose, it is reasonable to choose to be that sort of person, because the net benefit is greater (the toy calculation after this list makes that concrete).
7. Once again, “be that sort of person and then change your mind” is not one of the available options; if I will change my mind about it, then I was never that sort of person after all.
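For concreteness, here is a minimal expected-value sketch of point 6. Every quantity in it (the probability of S, the utilities, the treatment of the no-loan case) is an invented illustration, not something the scenario fixes:

```python
# Toy expected-value comparison for point 6. All numbers are hypothetical.

p_s = 1e-6          # assumed probability of the unlikely situation S
loan_value = 100.0  # assumed net value to me of getting the loan
death = -1e6        # assumed disutility of dying

# The person who can make and keep the promise gets the loan, and dies
# only if S actually occurs (point 5: they pay up and die).
ev_promise_keeper = loan_value + p_s * death

# The person who would "save your life, obviously" never gets the loan
# (point 2); I charitably assume they do not die in S either, though
# point 2 suggests they may, which would only widen the gap.
ev_no_promise = 0.0

print(ev_promise_keeper, ev_no_promise)  # 99.0 vs 0.0
```

For any sufficiently small probability of S, being the promise-keeping sort of person comes out ahead despite the death clause; that is all point 6 claims.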
What (if anything) do you disagree with in that paragraph? What (if anything) do you find relevantly disanalogous between the situation I describe here and the one with the bomb?
Granted these (again absurd) premises, do you agree with me that it is to my advantage to have the sort of personality that can promise to pay Bob back even if it literally kills me?
I do not.
What (if anything) do you disagree with in that paragraph? What (if anything) do you find relevantly disanalogous between the situation I describe here and the one with the bomb?
Your scenario omits the crucial element of the scenario in the OP, where you (the subject) find yourself in a situation where the predictor turns out to have erred in its prediction.
Hmm. I am genuinely quite baffled by this; there seems to be some very fundamental difference in how we are looking at the world. Let me just check that this is a real disagreement and not a misunderstanding (even if it is a misunderstanding, there would still be a real disagreement, but a different one): I am asking not “do you agree with me that at the point where I have to choose between dying and failing to repay Bob it is to my advantage …” but “do you agree with me that at an earlier point, say when I am negotiating with Bob it is to my advantage …”.
If I am understanding you right and you are understanding me right, then I think the following is true. Suppose that, when Bob has explained his position (he is willing to lend me the money if, and only if, his mad scientists determine that I will definitely repay him even if the alternative is death), some supernatural being magically informs me that, while it cannot lend me the money, it can make me the sort of person who can make the kind of commitment Bob wants and actually follow through. I think you would recommend that I either not accept this offer, or at any rate not make that commitment having been empowered to do so.
Do you feel the same way about the first scenario, where instead of choosing to be a person who will pay up even at the price of death I choose to be a person who will be compelled by brute force to pay up or die? If not, why?
Your scenario omits the crucial element of the scenario in the OP, where you (the subject) find yourself in a situation where the predictor turns out to have erred in its prediction.
Why does that matter? (Maybe it doesn’t; your opinion about my scenario is AIUI the same as your opinion about the one in the OP.)
I am asking not “do you agree with me that at the point where I have to choose between dying and failing to repay Bob it is to my advantage …” but “do you agree with me that at an earlier point, say when I am negotiating with Bob it is to my advantage …”.
Yes, I understood you correctly. My answer stands. (But I appreciate the verification.)
I think you would recommend that I either not accept this offer, or at any rate not make that commitment having been empowered to do so.
Right.
Do you feel the same way about the first scenario, where instead of choosing to be a person who will pay up even at the price of death I choose to be a person who will be compelled by brute force to pay up or die? If not, why?
No, because there’s a difference between “pay up or die” and “pay up and die”.
Your scenario omits the crucial element of the scenario in the OP, where you (the subject) find yourself in a situation where the predictor turns out to have erred in its prediction.
Why does that matter? (Maybe it doesn’t; your opinion about my scenario is AIUI the same as your opinion about the one in the OP.)
The scenario in the OP seems to hinge on it. As described, the situation is that the agent has picked FDT as their decision theory, is absolutely the sort of agent who will choose the Left box and die if so predicted, and who is thereby supposed to not actually encounter situations where the Left box has a bomb… but oops! The predictor messed up and there is a bomb there anyhow. And now the agent is left with a choice on which nothing depends except whether he pointlessly dies.

I see no analogous feature of your scenarios…
I agree (of course!) that there is a difference between “pay up and die” and “pay up or die”. But I don’t understand how this difference can be responsible for the difference in your opinions about the two scenarios.
Scenario 1: I choose for things to be so arranged that in unlikely situation S (where if I pay Bob back I die), if I don’t pay Bob back then I also die. You agree with me (I think—you haven’t actually said so explicitly) that it can be to my benefit for things to be this way, if this is the precondition for getting the loan from Bob.
Scenario 2: I choose for things to be so arranged that in unlikely situation S (where, again, if I pay Bob back I die), I will definitely pay. You think this state of affairs can’t be to my advantage.
How is scenario 2 actually worse for me than scenario 1? Outside situation S, they are no different (I will not face any such strong incentive not to pay Bob back; I will in fact pay him back, and I will not die). In situation S, scenario 1 means I die either way, so I might as well pay my debts; scenario 2 means I will pay up and die. I’m equally dead in each case. I choose to pay up in each case.
In scenario 1, I do have the option of saying a mental “fuck you” to Bob, not repaying my debt, and dying at the hand of his infernal machinery rather than whatever other thing I could save myself from with the money. But I’m equally dead either way, and I can’t see why I’d prefer this, and in any case it’s beyond my understanding why having this not-very-appealing extra option would be enough for scenario 1 to be good and scenario 2 to be bad.
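To make sure I am reading the comparison right, here is the same payoff structure laid out mechanically. The table and its labels are mine; the entries just restate the claims above:

```python
# The four cases above, restated. The entries are the text's claims,
# not anything computed; "die" in situation S means equally dead under
# either scenario.

outcomes = {
    # (scenario, case): (what I do, what happens to me)
    ("scenario 1: kill-switch device", "outside S"): ("pay up", "live"),
    ("scenario 2: committed person",   "outside S"): ("pay up", "live"),
    ("scenario 1: kill-switch device", "in S"):      ("pay up", "die"),  # die either way, so pay
    ("scenario 2: committed person",   "in S"):      ("pay up", "die"),  # pay up and die
}

for (scenario, case), (action, fate) in outcomes.items():
    print(f"{scenario:32} {case:10} -> {action}, {fate}")

# The only structural difference: scenario 1 also permits ("don't pay",
# "die"), the mental "fuck you" option, which is no better on any count.
```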
What am I missing?
I think we are at cross purposes somehow about the “predictor turns out to have erred” thing. I do understand that this feature is present in the OP’s thought experiment and absent in mine. My thought experiment isn’t meant to be equivalent to the one in the OP, though it is meant to be similar in some ways (and I think we are agreed that it is similar in the ways I intended it to be similar). It’s meant to give me another view of something in your thinking that I don’t understand, in the hope that I might understand it better (hopefully with the eventual effect of improving either my thinking or yours, if it turns out that one of us is making a mistake rather than just starting from axioms that seem alien to one another).
Anyway, it probably doesn’t matter, because so far as I can tell you do in fact have “the same” opinion about the OP’s thought experiment and mine; I was asking about disanalogies between the two in case it turned out that you agreed with all the numbered points in the paragraph before that question. I think you don’t agree with them all, but I’m not sure exactly where the disagreements are; I might understand better if you could tell me which of those numbered points you disagree with.