Well, I think the prisoner’s dilemma and Hitchhiker problems are ones where some people just don’t accept that defecting is the right decision. That is, defecting is the right decision if (a) you care nothing at all for the other person’s welfare, (b) you care nothing for your reputation, or are certain that no one else will know what you did (including the person you are interacting with, if you ever encounter them again), and (c) you have no moral qualms about making a promise and then breaking it. I think the arguments about these problems amount to people saying that they are assuming (a), (b), and (c), but then objecting to the resulting conclusion because they aren’t really willing to assume at least one of (a), (b), or (c).
Now, if you assume, contrary to actual reality, that the other prisoner or the driver in the Hitchhiker problem is somehow able to tell whether you are going to keep your promise or not, then we get to the same situation as in Newcomb’s problem—in which the only “plausible” way they could make such a prediction is by creating a simulated copy of you and seeing what you do in the simulation. But then, you don’t know whether you are the simulated or the real version, so a simple application of causal decision theory leads you to keep your promise to cooperate or pay, since if you are the simulated copy, your decision has a causal effect on the fate of the real you.
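To make that concrete, here is a minimal sketch of the expected-utility calculation I have in mind for the Hitchhiker case, assuming the driver predicts you by running a simulated copy and you assign 50% credence to being that copy. All the payoff numbers are illustrative assumptions, not anything fixed by the problem itself:

```python
# Illustrative (assumed) utilities for Parfit's Hitchhiker.
U_RESCUED = 1_000        # being driven out of the desert
U_STRANDED = -1_000_000  # being left to die
COST_OF_PAYING = -100    # handing over the fare once in town

P_SIM = 0.5  # credence that "you" are the simulated copy the driver runs

def causal_expected_utility(pay: bool) -> float:
    """Causal expected utility of the decision, given P_SIM.

    If you are the simulation, your choice causally determines whether the
    real you gets rescued; if you are the real you, the rescue has already
    happened and paying only costs you the fare.
    """
    if pay:
        # Simulation pays -> driver predicts payment -> real you is rescued
        # and later pays the fare; outcome is the same either way.
        eu_if_sim = U_RESCUED + COST_OF_PAYING
        eu_if_real = U_RESCUED + COST_OF_PAYING
    else:
        # Simulation refuses -> driver predicts refusal -> real you is stranded.
        eu_if_sim = U_STRANDED
        # Real you refuses -> already rescued, keeps the money.
        eu_if_real = U_RESCUED
    return P_SIM * eu_if_sim + (1 - P_SIM) * eu_if_real

print("EU(pay)    =", causal_expected_utility(True))   #     900.0
print("EU(refuse) =", causal_expected_utility(False))  # -499500.0
```

On these (assumed) numbers, even the purely causal calculation favours paying, because refusing risks the simulated copy dooming the real you.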
I think that people reason that if everyone constantly defects, we will get a less trustworthy society, where life is dangerous and complex projects are impossible.
Yes. And that reasoning is implicitly denying at least one of (a), (b), or (c).