I am taking issue with the conclusion that we are living in a simulation, even granting that premises (1) and (2) are true.
So I am struggling to understand his reply to my argument. In some ways it simply looks like he’s saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which either are not reliable if we are in a simulation, or obviously are wrong if we aren’t in a simulation.
If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.
If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.
He’s saying that (3) doesn’t hold if we are not in a simulation, so either (1) or (2) is true. He’s not saying that if we’re not in a simulation, we somehow are actually in a simulation given this logic.
either we are in a simulation or we are not, which is obviously true
Just wanted to point out that this is not necessarily true; in a large enough multiverse, there would be many identical copies of a mind, some of which would probably be “real minds” dwelling in “real brains”, and some would be simulated.
I am taking issue with the conclusion that we are living in a simulation, even granting that premises (1) and (2) are true.
(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.
He’s saying that (3) doesn’t hold if we are not in a simulation, so either (1) or (2) is true. He’s not saying that if we’re not in a simulation, we somehow are actually in a simulation given this logic.
Right. When I say “his conclusion is still true”, I mean the conclusion that at least one of (1), (2) or (3) is true. That is the conclusion of the simulation argument, not “we are living in a simulation”.
If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.
This, I think, is a possible difference between your position and Bostrom’s. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).
I actually suspect that you are doing neither of these things, though. You seem to be simply denying that the minds our post-human descendants will simulate (if any) will be similar to our own minds. This is what your game AI comparisons suggest. In that case, your argument is not incompatible with Bostrom’s conclusion. Remember, the conclusion of the simulation argument is that either (1), (2), or (3) is true. You seem to be saying that (2) is true—that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their ancestors. If that’s all you’re claiming, then you’re not disagreeing with the simulation argument.
(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.
The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are “true” I mean, for example, in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing; my mistake.
You seem to be saying that (2) is true—that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their ancestors.
I think Bostrom’s argument applies even if they aren’t “highly accurate”. If they are simulated at all, you can apply his argument. I think the core of his argument is that if simulated minds outnumber “real” minds, then it’s likely we are all simulated. I’m not really sure how us being “accurately simulated” minds changes things. It does make it easier to reason outside of our little box—if we are highly accurate simulations then we can actually know a lot about the real universe, and in fact studying our little box is pretty much akin to studying the real universe.
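The inference at the core of this exchange—“if simulated minds outnumber real minds, I am probably simulated”—is, under the Self-Sampling Assumption with a single shared reference class, just a fraction. A toy sketch (the counts are made up purely for illustration):

```python
# Toy illustration of the Self-Sampling Assumption (SSA) with one
# shared reference class: if I reason as if I were a random sample
# from all minds, my probability of being simulated is simply the
# simulated fraction. Counts below are illustrative assumptions.

def p_simulated(n_simulated: int, n_real: int) -> float:
    """P(I am simulated) under SSA, assuming simulated and real
    minds belong to one reference class."""
    return n_simulated / (n_simulated + n_real)

print(p_simulated(999_000, 1_000))  # 0.999
```

The whole dispute is then about whether that single reference class is legitimate—which is exactly the distinction drawn above between denying the SSA and splitting the reference class.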
This, I think, is a possible difference between your position and Bostrom’s. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).
Let’s assume I’m trying to draw conclusions about the universe. I could be a brain in a vat, but there’s not really anything to be gained in assuming that. Whether it’s true or not, I may as well act as if the universe can be understood. Let’s say I conclude, from my observations about the universe, that there are many more simulated minds than non-simulated minds. Does it then follow that I am probably a simulated mind? Bostrom says yes. I say no, because my reasoning about the universe that led me to the conclusion that there are more simulated minds than non-simulated ones is predicated on me not being a simulated mind. I would almost say it’s impossible to reason your way into believing you’re in a simulation. It’s self-referential.
I’m going to have to think about this harder, but try and criticise what I’m saying as you have been doing because it certainly helps flesh things out in my mind.
I think Bostrom’s argument applies even if they aren’t “highly accurate”. If they are simulated at all, you can apply his argument.
I don’t think that’s true. The SSA will have different consequences if the simulated minds are expected to be very different from ours.
If we suppose that simulated minds will have very different observations, experiences and memories from our own, and we consider the hypothesis that the vast majority of minds in our universe will be simulated, then SSA simply disconfirms the hypothesis. If I should reason as if I am a random sample from the pool of all observers, then any theory which renders my observations highly atypical will be heavily disconfirmed. SSA will simply tell us it is unlikely that the vast majority of minds are simulated, which would mean that either civilizations don’t get to the point of simulating minds or they choose not to run a significant number of simulations.
If, on the other hand, we suppose that a significant proportion of simulated minds will be quite similar to our own, with similar thoughts, memories and experiences, and we further assume that the vast majority of minds in the universe are simulated, then SSA tells us that we are likely simulated minds. It is only under those conditions that SSA delivers this verdict.
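The contrast between these two cases can be put as a small Bayes update. A sketch (the priors and likelihoods are illustrative assumptions, not figures from Bostrom):

```python
# Toy Bayes update contrasting the two cases above.
# H: "the vast majority of minds are simulated" (prior 0.5).
# E: "my observations are ordinary human observations."
# All numbers are made up for illustration.

def posterior_h(p_e_given_h: float, p_e_given_not_h: float,
                prior_h: float = 0.5) -> float:
    """P(H | E) by Bayes' theorem."""
    evidence = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / evidence

# Case 1: simulated minds are very unlike us, so under H my ordinary
# observations would be atypical -> H is heavily disconfirmed.
print(posterior_h(p_e_given_h=0.01, p_e_given_not_h=0.9))  # ~0.011

# Case 2: simulated minds closely resemble us, so E is just as likely
# under H -> the prior survives, and SSA then says I am probably
# simulated (since under H most minds are, and I could be any of them).
print(posterior_h(p_e_given_h=0.9, p_e_given_not_h=0.9))   # 0.5
```

So it is only when the simulations resemble us that the update leaves the “mostly simulated” hypothesis standing, which is the point the next paragraph makes about ancestor-simulations.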
This is why, when Bostrom describes the Simulation Argument, he focuses on “ancestor-simulations”. In other words, he focuses on post-human civilizations running detailed simulations of their evolutionary history, not just simulations of any arbitrary mind. It is only under the assumption that post-human civilizations run ancestor-simulations that the SSA can be used to conclude that we are probably simulations (assuming that the other two possible conclusions of the argument are rejected).
So I think it matters very much to the argument that the simulated minds are a lot like the actual minds of the simulators’ ancestors. If not, the argument does not go through. This is why I said you seem to simply be accepting (2), the conclusion that post-human civilizations will not run a significant number of ancestor-simulations. Your position seems to be that the simulations will probably be radically dissimilar to the simulators (or their ancestors). That is equivalent to accepting (2), and does not conflict with the simulation argument.
You seem to consider the Simulation Argument similar to the Boltzmann brain paradox, which would raise the same worries about empirical incoherence that arise in that paradox, worries you summarize in the parent post. The reliability of the evidence that seems to point to me being a Boltzmann brain is itself predicated on me not being a Boltzmann brain. But the restriction to ancestor-simulations makes the Simulation Argument importantly different from the Boltzmann brain paradox.