The “simulation argument” by Bostrom is flawed. It is wrong, and I don’t understand why so many people seem to believe it. I might do a write-up of this if anyone agrees with me, but basically: you cannot reason about what lies outside our universe from within our universe. It doesn’t make sense to do so. The simulation argument uses observations from within our own reality to describe something outside our reality. For example: simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are simulated agents. However, the observation that most agents will eventually be, or already are, simulated only applies in this reality/universe. If we are in a simulation, our logic will not be universal but will instead be a reaction to the perverted rules set up by the simulation’s creators. If we’re not in a simulation, we’re not in a simulation. Either way, the simulation argument is flawed.
First, Bostrom is very explicit that the conclusion of his argument is not “We are probably living in a simulation”. The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won’t reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.
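For reference, here is the quantitative core behind that trilemma, reconstructed from memory of Bostrom’s 2003 paper (treat the notation as my gloss rather than a quotation):

```latex
% Fraction of human-type observers who are simulated (my reconstruction
% of Bostrom 2003; f_P, \bar{N}, \bar{H} defined in the text below).
f_{\mathrm{sim}}
  = \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} + \bar{H}}
  = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
```

where f_P is the fraction of human-level civilizations that reach a post-human stage, N̄ is the average number of ancestor-simulations such a civilization runs, and H̄ is the average number of individuals who lived before it became post-human. The trilemma is just the observation that f_sim is close to 1 unless f_P ≈ 0 (claim 1) or N̄ ≈ 0 (claim 2).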
Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other places). He essentially flips your disjunctive reasoning around. He argues that we are either in a simulation or we are not. If we are in a simulation, then claim 3 is obviously true, by hypothesis. If we are not in a simulation, then our ordinary empirical evidence is a veridical guide to the universe (our universe, not some other universe). This means the evidence and assumptions used as the basis for the simulation argument are sound in our universe. It follows that, since claim 3 is false by hypothesis, either claim 1 or claim 2 is very likely to be true. It’s worth noting that these two are claims about our universe, not about some parent universe.
In other words, your objection is based on the argument that if we are in a simulation, there is no good reason to trust the assumptions of the simulation argument (such as assumptions about how our simulators will behave). Bostrom’s reply is that if we are in a simulation, then his conclusion is true anyway, even if the specific reasoning he uses doesn’t apply. If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.
There does seem to be some sort of sleight-of-mind going on here, if you want my opinion. I generally feel that way about most non-trivial uses of anthropic reasoning. But the exact source of the sleight is not easy for me to detect. At the very least, Bostrom has a prima facie response to your objection, so you need to say something about why his response is flawed. Making your objection and Bostrom’s response mathematically precise would be a good way to track down the flaw (if any).
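As a first step in that direction, here is my own formal gloss on the exchange (not Bostrom’s own formalization). Write S for “we are in a simulation” and D for the disjunction (1) ∨ (2) ∨ (3):

```latex
% My own gloss, not Bostrom's formalization.
% S = "we are in a simulation", D = (1) \lor (2) \lor (3).
\begin{align*}
  S       &\Rightarrow (3) \Rightarrow D
          && \text{(by definition of (3))}\\
  \lnot S &\Rightarrow \text{our evidence is veridical}
           \Rightarrow \Pr\big((1) \lor (2)\big) \approx 1
           \Rightarrow \Pr(D) \approx 1
\end{align*}
```

On this rendering, your objection targets the middle step of the second line, and Bostrom’s reply is that the first line covers exactly the case in which that step fails.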
I am taking issue with the conclusion that we are living in a simulation even given premises (1) and (2) being true.
So I am struggling to understand his reply to my argument. In some ways it simply looks like he’s saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which are either unreliable (if we are in a simulation) or lead to an obviously false conclusion (if we aren’t).
If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.
If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.
He’s saying that (3) doesn’t hold if we are not in a simulation, so either (1) or (2) is true. He’s not saying that if we’re not in a simulation, we somehow are actually in a simulation given this logic.
either we are in a simulation or we are not, which is obviously true
Just wanted to point out that this is not necessarily true; in a large enough multiverse, there would be many identical copies of a mind, some of which would probably be “real minds” dwelling in “real brains”, and some would be simulated.
I am taking issue with the conclusion that we are living in a simulation even given premises (1) and (2) being true.
(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.
He’s saying that (3) doesn’t hold if we are not in a simulation, so either (1) or (2) is true. He’s not saying that if we’re not in a simulation, we somehow are actually in a simulation given this logic.
Right. When I say “his conclusion is still true”, I mean the conclusion that at least one of (1), (2) or (3) is true. That is the conclusion of the simulation argument, not “we are living in a simulation”.
If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.
This, I think, is a possible difference between your position and Bostrom’s. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).
I actually suspect that you are doing neither of these things, though. You seem to be simply denying that the minds our post-human descendants will simulate (if any) will be similar to our own minds. This is what your game AI comparisons suggest. In that case, your argument is not incompatible with Bostrom’s conclusion. Remember, the conclusion of the simulation argument is that either (1), (2), or (3) is true. You seem to be saying that (2) is true—that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their ancestors. If that’s all you’re claiming, then you’re not disagreeing with the simulation argument.
(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.
The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are “true” I mean, for example, in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing; my mistake.
You seem to be saying that (2) is true—that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their ancestors.
I think Bostrom’s argument applies even if they aren’t “highly accurate”. If they are simulated at all, you can apply his argument. I think the core of his argument is that if simulated minds outnumber “real” minds, then it’s likely we are all simulated. I’m not really sure how us being “accurately simulated” minds changes things. It does make it easier to reason outside of our little box—if we are highly accurate simulations then we can actually know a lot about the real universe, and in fact studying our little box is pretty much akin to studying the real universe.
This, I think, is a possible difference between your position and Bostrom’s. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).
Let’s assume I’m trying to draw conclusions about the universe. I could be a brain in a vat, but there’s not really anything to be gained in assuming that. Whether it’s true or not, I may as well act as if the universe can be understood. Let’s say I conclude, from my observations about the universe, that there are many more simulated minds than non-simulated minds. Does it then follow that I am probably a simulated mind? Bostrom says yes. I say no, because my reasoning about the universe that led me to the conclusion that there are more simulated minds than non-simulated ones is predicated on me not being a simulated mind. I would almost say it’s impossible to reason your way into believing you’re in a simulation. It’s self-referential.
I’m going to have to think about this harder, but keep criticising what I’m saying as you have been, because it certainly helps flesh things out in my mind.
I think Bostrom’s argument applies even if they aren’t “highly accurate”. If they are simulated at all, you can apply his argument.
I don’t think that’s true. The SSA will have different consequences if the simulated minds are expected to be very different from ours.
If we suppose that simulated minds will have very different observations, experiences and memories from our own, and we consider the hypothesis that the vast majority of minds in our universe will be simulated, then SSA simply disconfirms the hypothesis. If I should reason as if I am a random sample from the pool of all observers, then any theory which renders my observations highly atypical will be heavily disconfirmed. SSA will simply tell us it is unlikely that the vast majority of minds are simulated, which means that either civilizations don’t get to the point of simulating minds or they choose not to run a significant number of simulations.
If, on the other hand, we suppose that a significant proportion of simulated minds will be quite similar to our own, with similar thoughts, memories and experiences, and we further assume that the vast majority of minds in the universe are simulated, then SSA tells us that we are likely simulated minds. It is only under those conditions that SSA delivers this verdict.
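A toy Bayes calculation may make the asymmetry concrete. The model and its numbers are entirely my own illustration, not anything from Bostrom’s paper; SSA enters as the rule that the likelihood of my observations under a hypothesis is the fraction of observers with such observations under that hypothesis:

```python
# Toy model (my own illustration) of how SSA treats similar vs.
# dissimilar simulated minds. All numbers are made up for exposition.

def posterior_mostly_simulated(p_sim_looks_human, prior=0.5):
    """Posterior that 'the vast majority of minds are simulated',
    given that my own observations look human-like.

    p_sim_looks_human: chance that a *simulated* mind has human-like
    observations. If the hypothesis is true, assume 99% of minds are
    simulated; if false, essentially none are.
    """
    likelihood_true = 0.99 * p_sim_looks_human + 0.01 * 1.0
    likelihood_false = 1.0  # nearly every observer is a real, human-like mind
    return (prior * likelihood_true) / (
        prior * likelihood_true + (1 - prior) * likelihood_false
    )

# Weird simulations: simulated minds almost never look human-like.
print(posterior_mostly_simulated(0.001))  # ~0.011: heavily disconfirmed
# Ancestor-simulations: simulated minds look just like us.
print(posterior_mostly_simulated(1.0))    # 0.5: no disconfirmation at all;
# and conditional on the hypothesis, SSA then makes me ~99% likely simulated.
```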
This is why, when Bostrom describes the Simulation Argument, he focuses on “ancestor-simulations”. In other words, he focuses on post-human civilizations running detailed simulations of their evolutionary history, not just simulations of any arbitrary mind. It is only under the assumption that post-human civilizations run ancestor-simulations that the SSA can be used to conclude that we are probably simulations (assuming that the other two possible conclusions of the argument are rejected).
So I think it matters very much to the argument that the simulated minds are a lot like the actual minds of the simulators’ ancestors. If not, the argument does not go through. This is why I said you seem to simply be accepting (2), the conclusion that post-human civilizations will not run a significant number of ancestor-simulations. Your position seems to be that the simulations will probably be radically dissimilar to the simulators (or their ancestors). That is equivalent to accepting (2), and does not conflict with the simulation argument.
You seem to consider the Simulation Argument similar to the Boltzmann brain paradox, which would raise the same worries about empirical incoherence that arise in that paradox, worries you summarize in the parent post. The reliability of the evidence that seems to point to me being a Boltzmann brain is itself predicated on me not being a Boltzmann brain. But the restriction to ancestor-simulations makes the Simulation Argument importantly different from the Boltzmann brain paradox.
If we are in a simulation, our logic will not be universal but will instead be a reaction to the perverted rules set up by the simulation’s creators.
While I do not agree with the conclusion of the simulation argument, I think your rebuttal is flawed: we can safely reason about the reality outside the simulation if we presume that we are inside a realistic simulation, that is, a simulation whose purpose is to mimic as closely as possible the reality outside. I don’t know if it’s made explicit in the exposition you read, but I’ve always assumed the argument was about a realistic simulation. Indeed, if the laws of physics are computable, you can even have an emulation argument.
you cannot reason about what lies outside our universe from within our universe. It doesn’t make sense to do so.
Of course you can. Anyone who talks about any sort of ‘multiverse’ - or even causally disconnected regions of ‘our own universe’ - is doing precisely this, whether they realize it or not.
No. Think about what sort of conclusions an AI in a game we make would come to about reality. Pretty twisted, right?
It sounds like you expect it to be obvious, but nothing springs to mind. Perhaps you should actually describe the insane reasoning or conclusion that you believe follows from the premise.
We could have random number generators that choose the geometry an agent in our simulation finds itself in every time it steps into a new room. We could make the agent believe that when you put two things together and group them, you get three things. We could add random bits to an agent’s memory.
There is no limit to how perverted a view of the world a simulated agent could have.
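To make that concrete, here is a minimal sketch of such a rigged environment; the class and its rules are purely my own invention for illustration:

```python
# A deliberately "perverted" simulated environment, invented for
# illustration: geometry, arithmetic and memory are all rigged.
import random

class WeirdSimulation:
    GEOMETRIES = ["euclidean", "hyperbolic", "spherical", "escher"]

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.agent_memory = [0, 1, 0, 1, 1, 0, 1, 0]

    def enter_room(self):
        # A fresh random geometry every time the agent steps through a door.
        return self.rng.choice(self.GEOMETRIES)

    def group(self, a, b):
        # Arithmetic is rigged: putting two things together yields three.
        return a + b + 1

    def tick(self):
        # Flip a random bit of the agent's memory each timestep.
        i = self.rng.randrange(len(self.agent_memory))
        self.agent_memory[i] ^= 1

sim = WeirdSimulation(seed=42)
print(sim.enter_room(), sim.enter_room())  # geometry varies room to room
print(sim.group(1, 1))                     # the agent's "1 + 1" is 3
sim.tick()                                 # memory silently corrupted
```

An agent raised on this diet could not recover our arithmetic or our physics from inside the box, which is the point of the objection.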
Hm. Let me try to restate that to make sure I follow you.
Consider three categories of environments: (Er) real environments, (Esa) simulated environments that closely resemble Er, aka “ancestral simulations”, and (Esw) simulated environments that don’t closely resemble Er, aka “weird simulations.”
The question is, is my current environment E in Er or not?
Bostrom’s argument as I understand it is that if post-human civilizations exist and create many Esa-type environments, then for most E, (E in Esa) and not (E in Er). Therefore, given that premise I should assume (E in Esa).
Your counterargument as I understand it is that if (E in Esw), then I can draw no sensible conclusions about Er or Esa, because the logic I use might not apply to those domains, so given that premise I should assume nothing.
Have I understood you?