Reflexive inconsistency can manifest even in simpler decision theory experiments, such as the prisoner’s dilemma. Before being imprisoned, anyone would agree that cooperation is the best course of action. However, once in jail, an individual may doubt whether the other party will cooperate and consequently choose to defect.
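To make the dominance reasoning behind that defection concrete, here is a minimal sketch in Python; the payoff numbers are illustrative assumptions, not taken from the discussion, chosen only to have the standard dilemma structure (temptation > reward > punishment > sucker). It just checks that, whatever the other prisoner does, a purely self-interested player does better by defecting.

```python
# Illustrative prisoner's dilemma payoffs to "me" (higher is better).
# These particular numbers are assumptions, not from the original discussion.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(other_action: str) -> str:
    """Return the self-interested player's best action given the other's action."""
    return max(["cooperate", "defect"], key=lambda a: PAYOFF[(a, other_action)])

# Defection is a dominant strategy: it is the best response whether or not the
# other prisoner cooperates, which is why doubt about the other party pushes a
# purely self-interested prisoner toward defecting.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
print("Defect dominates regardless of the other prisoner's choice.")
```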
In the Newcomb paradox, reflexive inconsistency becomes even more central, as it is the very essence of the problem. At one moment, I sincerely commit to a particular choice, but in the next moment, I sincerely defect from that commitment. A similar phenomenon occurs in the Hitchhiker problem, where an individual makes a promise in one context but is tempted to break it when the context changes.
Well, I think the prisoner’s dilemma and Hitchhiker problems are ones where some people just don’t accept that defecting is the right decision. That is, defecting is the right decision if (a) you care nothing at all for the other person’s welfare, (b) you care nothing for your reputation, or are certain that no one else will know what you did (including the person you are interacting with, if you ever encounter them again), and (c) you have no moral qualms about making a promise and then breaking it. I think the arguments about these problems amount to people saying that they are assuming (a), (b), and (c), but then objecting to the resulting conclusion because they aren’t really willing to assume at least one of (a), (b), or (c).
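One way to see the point about (a), (b), and (c) is to put an explicit weight on the other player's welfare and watch the best response change. This is only an illustrative sketch: the payoff numbers and the "altruism" weight are assumptions introduced here, not anything stated above.

```python
# Illustrative payoffs, now recorded for both players as (mine, theirs).
# "altruism" is a hypothetical weight on the other player's payoff, i.e. a way
# of dropping assumption (a).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(other_action: str, altruism: float) -> str:
    """Best action when my utility is my payoff plus altruism times the other's payoff."""
    def utility(my_action: str) -> float:
        mine, theirs = PAYOFF[(my_action, other_action)]
        return mine + altruism * theirs
    return max(["cooperate", "defect"], key=utility)

# Under assumption (a) (altruism = 0), defection dominates.
assert best_response("cooperate", altruism=0.0) == "defect"
assert best_response("defect",    altruism=0.0) == "defect"

# Care enough about the other person (i.e. reject (a)) and cooperation becomes
# the better reply to a cooperator. Reputation or moral costs (rejecting (b) or
# (c)) could be modelled the same way, as penalties attached to defecting.
assert best_response("cooperate", altruism=0.8) == "cooperate"
```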
Now, if you assume, contrary to actual reality, that the other prisoner or the driver in the Hitchhiker problem is somehow able to tell whether you are going to keep your promise, then we get to the same situation as in Newcomb’s problem, in which the only “plausible” way they could make such a prediction is by creating a simulated copy of you and seeing what you do in the simulation. But then you don’t know whether you are the simulated or the real version, so a simple application of causal decision theory leads you to keep your promise to cooperate or pay, since if you are the simulated copy, your decision has a causal effect on the fate of the real you.
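To spell out that causal argument with numbers, here is a small sketch of the expected-value calculation for the Hitchhiker case. The probability of being the simulation and the payoffs are made-up assumptions, chosen only to mirror the setup in which being left in the desert is by far the worst outcome.

```python
# Hypothetical, illustrative numbers: if the driver predicts your behaviour by
# simulating you, then at the moment of deciding whether to pay you cannot tell
# whether you are the real hitchhiker or the simulated copy.
P_SIMULATION = 0.5    # assumed chance that "you" are the simulated copy

# Assumed payoffs to the real you (higher is better):
PAY_AND_RESCUED    = 10    # rescued, minus the cost of paying
REFUSE_AND_RESCUED = 11    # rescued without paying
LEFT_IN_DESERT     = -100  # the driver predicts you won't pay and drives off

def expected_value(keep_promise: bool) -> float:
    """Causal expected value of the decision, allowing that you may be the simulation.

    If you are the simulation, your choice causally determines the driver's
    prediction, and therefore whether the real you gets rescued at all.
    """
    if keep_promise:
        # Whether you are the simulation or the real hitchhiker, keeping the
        # promise means the real you is rescued and pays.
        return P_SIMULATION * PAY_AND_RESCUED + (1 - P_SIMULATION) * PAY_AND_RESCUED
    else:
        # If you are the simulation and refuse, the driver leaves the real you
        # in the desert; if you are the already-rescued real hitchhiker,
        # refusing just saves you the payment.
        return P_SIMULATION * LEFT_IN_DESERT + (1 - P_SIMULATION) * REFUSE_AND_RESCUED

# With these numbers, even plain causal decision theory says to keep the promise.
assert expected_value(keep_promise=True) > expected_value(keep_promise=False)
print(expected_value(True), expected_value(False))  # 10.0 versus -44.5
```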
I think that people reason that if everyone constantly defects, we will end up with a less trustworthy society, where life is dangerous and complex projects are impossible.
Yes. And that reasoning is implicitly denying at least one of (a), (b), or (c).