I hope I am not intercepting a series of questions when you were only interested in gjm’s response, but I enjoyed your comment and wanted to add my thoughts.
I think that the problem with this sort of argument is that it’s like cooperating in the prisoner’s dilemma and hoping that superrationality will make the other player cooperate: it doesn’t work.
I am not sure it is settled that it does not work, but I also do not think that most, or maybe any, of my argument relies on an assumption that it does. The first part does not even rely on an assumption that one-boxing is reasonable, let alone correct. All it says is that, so long as some people play the game this way, as an empirical, descriptive fact about how they actually play, we are more likely to see certain outcomes in situations that look like Newcomb’s problem. This looks like Newcomb’s problem.
There is also a second argument further down suggesting that under some circumstances, with a really high reward and relatively little cost, it might be worth trying to “cooperate on the prisoner’s dilemma” as a sort of gamble. This is more susceptible to game-theoretic counterpoints, but it is also not put forward as an especially strong argument so much as something worth considering further.
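The gamble framed above amounts to a simple expected-value comparison. As a rough illustration only (the `expected_value` helper and all of the numbers below are hypothetical, not from the original discussion), a minimal sketch:

```python
# Hypothetical sketch of the "worth the gamble" argument: even a small
# credence that cooperating coincides with a very large reward can
# outweigh a modest, certain cost. Numbers are illustrative only.

def expected_value(p_correlated: float, reward: float, cost: float) -> float:
    """EV of 'cooperating' when, with probability p_correlated, doing so
    coincides with the high-reward outcome; the cost is paid regardless."""
    return p_correlated * reward - cost

# A 1% credence on a large reward versus a small fixed cost:
ev = expected_value(p_correlated=0.01, reward=10_000.0, cost=10.0)
print(ev)  # 0.01 * 10000 - 10 = 90.0, so the gamble is EV-positive
```

The point of the sketch is only that the conclusion is driven by the ratio of reward to cost, which is why the argument is framed as a gamble rather than as a confident prediction.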
It seems that lots of people here conflate Newcomb’s problem, which is a very unusual single-player decision problem, with prisoner’s dilemma, which is the prototypical competitive game from game theory.
I am pretty sure I am not doing that, but if you wanted to expand on that, especially if you can show that I am, that would be fantastic.
Also, I don’t see why I should consider an accurate simulation of me, from my birth to my death, run after my real death, as a form of afterlife. How would it be functionally different from screening a movie of my life?
So, just to be clear, this is not my point at all. I was not nearly clear enough about this in the initial post, and I have updated it with a short-ish edit that you might want to read. I personally find the teletransportation paradox pretty paralyzing, enough so that I would have sincere concerns about brain uploading. What I am talking about is simulations of non-specific, unique people within the simulation. After death, these people would be “moved” fully intact into the afterlife component of the simulation, which circumvents teletransportation. Having the vast majority of people “like us” exist in simulations should increase our credence that we are in a simulation just as they are (especially if they can run simulations of their own, or think they are running simulations of their own). The idea is that we then have more reason to think that one-boxer/altruist/acausal-trade types “above” us have similarly created many simulations, of which we are one. Our doing it here should increase our sense that people “like us” have done it “above” us.