Most people disagree with the premise “Being in a simulation is/can be made to be indistinguishable from reality from the point of view of the simulee.”
I am surprised to hear this. What is your basis for claiming that this is the premise most people object to?
Also, if you are aware of or familiar with this objection, would you mind answering the following questions I have regarding it?
What reason is there to suspect that a simulated me would have a different/distinguishable experience from real me?
What reason is there to suspect that if there were differences between simulated and real life, that a simulated life would be aware of those differences? That is, even if it is distinguishable—I have only experienced one kind of life and can’t say if my totally distinguishable experience of life is that of a simulated life or a real one.
A magic supercomputer from the future will be able to simulate one atom with arbitrary accuracy, right? A super-enough computer will be able to simulate many atoms interacting with arbitrary accuracy. Suppose this supercomputer is precisely simulating all the atoms of an empty room containing a single human being (brain included). If this simulation is happening, how could the simulated being possibly have a different experience than its real counterpart in an empty room? Atomically speaking, everything is identical.
Maybe questions 1 and 3 are similar, but I’d appreciate it if you (or someone else) could enlighten me regarding these issues.
What reason is there to suspect that a simulated me would have a different/distinguishable experience from real me?
As someone who has written lots of simulations, there are a few reasons.
1) The simulation deliberately simplifies or changes some things from reality. At minimum, when “noise” is required, an algorithm is used to generate numbers which have many of the properties of random numbers but
a) are not in fact random,
b) are usually much more accurately described by a particular mathematical distribution than any measurements of the actual noise in the system would be.
(See the sketch after this list.)
2) The simulation accidentally simplifies/changes LOTS of things from reality. A brain simulation at the neuron level is likely to simulate observed variations using a noise generator, when these variations actually arise from a) a ream of detailed motions of individual ions and b) quantum interactions. The claim is generally made that one can simulate at a more and more detailed level AND GET TO THE ENDPOINT where the simulation is “perfect.” The getting-to-the-endpoint claim is not only unproven, but highly suspect. At every level of physics we have investigated so far, we have always found a deeper level. Further, the equations of motion at these deepest layers are not known in complete detail. So even if we can get to an endpoint, we have no reason to believe we have gotten to the endpoint in any given simulation. At some point, we are no longer compute bound; we are knowledge bound.
3) There is a great insight in software that “if it isn’t tested, it’s broken.” How do you even test a supremely deep simulation of yourself? If there are features of yourself you are still learning, you can’t test for them. Until you comprehensively comprehend yourself, you can never know that a simulation was comprehensively similar.
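To make points 1 and 2 concrete, here is a minimal sketch in Python (my own illustration; the Gaussian model, the seed, and the sample size are assumptions, not anything from the comment above) of how a simulation typically stands in for physical noise, whether in a measurement channel or in a neuron model:

```python
import numpy as np

# A minimal sketch: a simulated "noisy" channel. The physical fluctuations
# (detailed ion motions, thermal effects, quantum interactions) are replaced
# by a pseudorandom draw from a fitted Gaussian. The N(0, sigma) model and
# the seed are modeling assumptions, not facts about the system.

rng = np.random.default_rng(seed=42)  # rerun the program and you get the
                                      # exact same "noise": it is not random

def simulated_noise(n, sigma=1.0):
    """Stand-in for physical noise: exactly Gaussian by construction."""
    return rng.normal(0.0, sigma, size=n)

samples = simulated_noise(1_000_000)
print(samples.mean(), samples.std())  # hugs N(0, 1) far more cleanly than
                                      # any finite physical measurement would
```

A real recording of the same system would show drift, correlations, and heavy tails that this model silently omits, which is exactly the combination of deliberate and accidental simplification that points 1 and 2 describe.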
Even something as simple as a coin toss simulation is likely to be “wrong” in detail. Perhaps you know the coin toss you are actually simulating has .500 or even .500000000 probability of giving heads (where the number of zeros represents the accuracy to which you know it). But what is your confidence that a true expectation of 0.5 with a googolplex zeros following (or 3^^3 zeros, to pretend to try to fit in here) is the experimental fact? Even 64 zeros would be a bitch to prove. And what are the chances that your simulation gets a “true expectation” of 0.5 with even 64 zeros after it? With the coin toss, the variance might SEEM trivial, but consider the same uncertainty in a human. You would need to predict my next post keystroke for keystroke, which necessarily includes a prediction of whether I will eat an egg for breakfast or a bowl of cereal, because the posts I read while eating depend on that. And so on and so on.
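To put a number on how hard those extra zeros are, here is a rough back-of-envelope sketch (my own illustration; the three-sigma threshold is an assumed convention) of how many flips it takes to empirically resolve a bias of a given size:

```python
# To distinguish a coin with p = 0.5 + eps from a fair coin, the standard
# error of the sample mean, about 0.5/sqrt(n), must shrink below eps, so
# you need on the order of n ~ 1/eps^2 flips.

def flips_needed(eps, sigmas=3):
    """Approximate flips to resolve a bias of size eps at `sigmas` confidence."""
    return (sigmas * 0.5 / eps) ** 2

for zeros in (3, 9, 64):
    eps = 10.0 ** -zeros
    print(f"bias 1e-{zeros}: ~{flips_needed(eps):.1e} flips")

# A bias at the 64th decimal place needs ~1e128 flips, vastly more tosses
# than there are atoms in the observable universe (~1e80), so the simulated
# coin's "true expectation" can never be checked to that depth.
```

Under these assumptions, even the nine-zero case already demands on the order of 10^18 flips, which is why the endpoint claim is knowledge bound rather than merely compute bound.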
My claim is that the existence of an endpoint in finally getting the simulation complete is at best vastly beyond our knowledge (and not in a compute-bound way) and at worst simply unknowable for a ream of good reasons. My estimate of the probability that a simulation will ever be reliably known to be complete is < 0.01%.
Now we may get to a much easier place: good enough to convince others. That someone can write a simulation of me that cannot be distinguished from me by people who know me is a much lower bar than that the simulation feels the same as me to itself. To convince others, the simulation may not even have to be conscious, for example. But even to clear that lower bar, you are going to have to build your simulation into a fat human body good enough to fool my wife, and give it a variety of nervous and personality disorders that cause it to come up with digs that are deeply disturbing to her.
At some point, the comprehensive difficulty of a problem has to open the question: is it reasonable to sweep this under the rug by appealing to an unknown future of much greater capability than we have now, or is doing that a human bias we may need to avoid?
I think enough people are non-reductionist/materialist to have doubts about whether a simulation can be said to have experiences. And we don’t exactly have a demonstration of this at this time, do we? I mean, in the example cited, the Civilization PC games, there aren’t individuals there to have experiences (unless one counts the AI which is running the entire faction); rather, there are some blips in databases incrementing the number of units here or there, or raising the population from an abstract 6 to 7. I don’t think people will be able to take simulation theory seriously until they have personal interaction with a convincing AI.
That’s probably as much of an answer as I can give for any of the questions, other than that I don’t see why we can assume that magic supercomputers are plausible. Relatedly, I don’t know if I trust my intuition or reasoning as to whether an infinite simulation would resemble reality in every way (assuming the supercomputer is running equations and databases, etc., rather than actually reconstructing a universe atom by atom or something).
It feels like you’re asking me to believe that a map is the same as the territory if it is a good enough map. I know that’s just an analogy, but I have a hard time comprehending the claim that “reality is the solution to equations and nothing more” (as opposed to even “reality is predictable by equations”).
This is probably not the LW approved answer, but then, I did say most people and not most LW-ers.
I don’t understand subjective experience very well, so I don’t know if a simulation would have it. I know that an adult human brain does, and I’m pretty sure a rock doesn’t, but there are other cases I’m much less certain about. Mice, for example.