That level of precommitting is only necessary if you are unable to trust yourself to carry through with a self-imposed precommitment. If you are capable of that, you can decide now to act irrationally in certain future decisions and thereby benefit more than someone who can't. If the temptation to go back on your self-promise proves too great in the failure case, then you would also have lost in the win case: you are simply a fortunate loser who discovered the flaw in his promise in the one case where being flawed was beneficial. That doesn't change the fact that being capable of this decision is the better strategy on average. Making yourself conditionally less rational can itself be a rational decision, and so the ability to do so is a strength worth acquiring.
Ultimately the problem is the same as that of an ultimatum (e.g. MAD). We want the other party to believe we will carry through even when doing so would be clearly irrational at that point. As your opponent becomes better and better at predicting, you must come closer and closer to being someone who would make the irrational decision. When your opponent is sufficiently good (or you have insufficient knowledge of how they are predicting), the only way to be sure is to be someone who would actually do it.
Okay, I agree that this level of precommitting is not necessary. But if the deal really is a one-time offer, then, when presented with the coin having already come up tails, you can no longer benefit from being the sort of person who would precommit. Since you will never again be presented with a Newcomb-like scenario, being the precommitting type gains you nothing. Therefore you shouldn't give the $100.
If, on the other hand, you still expect to encounter some other Omega-like entity that will present you with such a scenario, doesn't that make the deal repeatable, which is not how the question was formulated?
In a repeatable deal your action influences the conditions in the next rounds. Even if you defect in this round, you may still cooperate in the next rounds; the Omegas aren't looking back at how you decided in the past, and don't punish you by withholding the deals. Your success in the following rounds (from your current point of view) depends on whether you manage to precommit for the future encounters, not on what you do now.
In the repeatable scenario I believe, unlike Vladimir, that a real difference exists. Whatever decision process leads you not to pay the $100 in one round, you can predict with high probability that the same process will operate in future rounds as well, leading to a total gain to you of about $0. On the other hand, you know that if your current decision process leads you to give the $100 in this case, then with high probability that same process will operate in future rounds, leading to a total gain of about $4950 × the expected number of future rounds. Therefore, if you place higher confidence in your ability to predict your future actions from your current ones than in your own reasoning process, you should give up the $100. This makes the problem rather similar to the original Newcomb's problem, in that you assign higher probability to your reasoning being wrong when it leads you to two-box than when it leads you to one-box.
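To make that arithmetic concrete, here is a minimal sketch of the comparison. It assumes the standard stakes behind the $4950 figure (a fair coin, a $100 payment demanded on tails, a $10,000 payout on heads); the function name and parameters are illustrative only, not anything from the discussion.

```python
# Minimal sketch, assuming the stakes implied by the $4950 figure:
# a fair coin, a $100 payment on tails, a $10,000 payout on heads.
def expected_total(always_pays: bool, expected_future_rounds: int,
                   payout: float = 10_000, cost: float = 100) -> float:
    """Expected total winnings for an agent whose decision process
    either always pays the $100 on tails or never does."""
    per_round = 0.5 * payout - 0.5 * cost if always_pays else 0.0
    return per_round * expected_future_rounds

print(expected_total(always_pays=False, expected_future_rounds=10))  # 0.0
print(expected_total(always_pays=True, expected_future_rounds=10))   # 49500.0
```

The per-round gap of $4950 is what compounds with every expected future round, whichever mechanism enforces the commitment.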
This is a self-deception technique. If you think it's morally OK to deceive your future self for your current selfish ends, then by all means go ahead. Also, it looks like violent means of precommitment should actually be considered immoral, on a par with forcing some other person to do your bidding by hiring a killer to kill them if they don't comply.
In Newcomb's problem, it actually is in your self-interest to one-box. Not so in this problem.
I am fairly sure that it isn't, but demonstrating so would require another maths-laden article, which I anticipate would be received similarly to my last. I will, however, email you my entire reasoning if you wish (you will have to wait several days while I brush up on the logical concept of common knowledge). (I don't know how to encode a ")" in a link, so please add one to the end.)
Common knowledge (I used %29, the percent-encoded form of ")").
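For anyone who hits the same snag, here is a minimal sketch of how to produce that encoding with Python's standard urllib; the URL below is a placeholder, not the actual link from this thread.

```python
# Percent-encode characters for use inside a link target.
# Hex 29 is the ASCII code for ")", so its percent-encoded form is "%29".
from urllib.parse import quote

print(quote(")", safe=""))  # -> %29

# Escape only the parentheses in a path, keeping ":" and "/" intact:
print(quote("http://example.com/wiki/Common_knowledge_(logic)", safe=":/"))
# -> http://example.com/wiki/Common_knowledge_%28logic%29
```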
I'm going to write up my new position on this topic. Nonetheless I think it should be possible to discuss the question in a more concise form, since I think the problem is one of communication, not rigor. You deceive your future self (that's the whole point of the comment above): you make it believe that it wants to take an action that it actually doesn't. The only disagreement I expect is the position that no, the future self actually does want to follow that action.
I think the problem with your article wasn't that it was math-laden, but that you didn't introduce things in enough detail for readers to follow along and see the motivation behind the math.
To be perfectly honest, your last sentence matches my own feeling. I should at least have said more about the key equation. But the article was already long, I was unsure how it would be received, and I spent too little time revising it (this is a persistent problem for me). If I were to write it again now, it would be closer in style to the thread between you and me there.
If you intend to write another post, then I am happy to wait until then to introduce the ideas I have in mind, and I will try hard to do so in a manner that won’t alienate everyone.
If you think that through and decide that way, then your precommitment method didn't work. The idea is that you must somehow, now, prevent your future self from behaving rationally in that situation; if they do, they will perform exactly the thought process you describe. The method of doing so doesn't matter so long as it is effective: making a public promise (and valuing your spoken word at more than $100), hiring a hitman to kill you if you renege, or simply being able to reliably convince yourself (effectively valuing keeping faith with your self-promise at more than $100). If merely deciding now is effective, then that is all that's needed.
If you do then decide to take the rational course in the losing coin-flip case, it just means you were, by definition, wrong about your commitment being effective. Luckily, in this one case you found that out in the loss case rather than the win case. Had you won the coin flip, though, you would have found yourself with nothing.