In the repeatable scenario I believe, unlike Vladimir, that a real difference exists. Whatever decision process you use to decide not to pay the $100 in one round, you can predict with high probability that that same process will operate in future rounds as well, leading to a total gain to you of about $0. On the other hand, you know that if your current decision process leads you to give the $100 in this case, then with high probability that same process will operate in future rounds, leading to a total gain to you of about $4950 × the expected number of future rounds. Therefore, if you place higher confidence in your ability to predict your future actions from your current ones than in your own reasoning process, you should give up the $100. This makes the problem rather similar to the original Newcomb’s problem, in that you assign higher probability to your reasoning being wrong if it leads you to two-box than to any reasoning which leads you to one-box.
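A minimal sketch of the arithmetic behind the $4950 figure, assuming the usual counterfactual-mugging payoffs (a fair coin, a $10,000 payout on heads to an agent who would have paid, and $100 surrendered on tails); those payoff numbers are my assumption, since only the $100 and $4950 figures appear in the comment above.

```python
# Sketch of the expected-value comparison described above.
# ASSUMPTION: standard counterfactual-mugging payoffs (fair coin, $10,000 on
# heads if you are the kind of agent who pays, $100 handed over on tails);
# only the $100 and $4950 figures are given in the comment itself.

P_HEADS = 0.5             # fair coin
PAYOUT_IF_PAYER = 10_000  # assumed payout on heads to a paying agent
COST_IF_PAYS = 100        # assumed payment surrendered on tails


def expected_gain_per_round(pays: bool) -> float:
    """Expected gain in one round for an agent that pays vs. one that refuses."""
    if pays:
        return P_HEADS * PAYOUT_IF_PAYER - (1 - P_HEADS) * COST_IF_PAYS
    return 0.0  # a refuser never pays, but is also never paid


if __name__ == "__main__":
    future_rounds = 20  # hypothetical expected number of future rounds
    print(expected_gain_per_round(True))                   # 4950.0 per round
    print(expected_gain_per_round(True) * future_rounds)   # ~ 4950 x rounds
    print(expected_gain_per_round(False) * future_rounds)  # ~ 0
```

Under these assumed payoffs a payer nets $4950 per round in expectation while a refuser nets $0, which is where the "$4950 × expected future rounds" gap comes from.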
This is a self-deception technique. If you think it’s morally OK to deceive your future self for your current selfish ends, then by all means go ahead. Also, it looks like violent means of precommitment should actually be considered immoral, on a par with forcing another person to do your bidding by hiring a killer to kill them if they don’t comply.
In Newcomb’s problem, it actually is in your self-interest to one-box. Not so in this problem.
I am fairly sure that it isn’t, but demonstrating so would require another maths-laden article, which I anticipate would be received similarly to my last. I will however email you my entire reasoning if you so wish (you will have to wait several days while I brush up on the logical concept of common knowledge). (I don’t know how to encode a ) in a link, so please add one to the end.)
Common knowledge (I used the %29 ASCII code for “)”).
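As an aside, a small sketch of the escaping trick being discussed: percent-encoding turns “)” into %29 so it survives inside a link. The Wikipedia URL below is only an illustrative guess at the intended target, not a link taken from this thread.

```python
# Percent-encode parentheses so a ")" survives inside a link.
from urllib.parse import quote

# Illustrative guess at the intended target; the actual link is not quoted here.
url = "https://en.wikipedia.org/wiki/Common_knowledge_(logic)"

# Keep ":" and "/" literal; "(" and ")" become %28 and %29.
print(quote(url, safe=":/"))
# -> https://en.wikipedia.org/wiki/Common_knowledge_%28logic%29
```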
I’m going to write up my new position on this topic. Nonetheless, I think it should be possible to discuss the question in a more concise form, since I think the problem is one of communication, not rigor. You deceive your future self; that’s the whole point of the comment above: you make it believe that it wants to take an action that it actually doesn’t. The only disagreeing position I expect is one that says no, the future self actually does want to take that action.
I think the problem with your article wasn’t that it was math-laden, but that you didn’t introduce things in sufficient detail to follow along, and to see the motivation behind the math.
To be perfectly honest, your last sentence is also my feeling. I should at the very least have talked more about the key equation. But the article was already long, I was unsure how it would be received, and I spent too little time revising it (this is a persistent problem for me). If I were to write it again now, it would be closer in style to the thread between you and me there.
If you intend to write another post, then I am happy to wait until then to introduce the ideas I have in mind, and I will try hard to do so in a manner that won’t alienate everyone.