Crazy hypothesis:
If Omega runs a simulation of intelligent agents, presumably Omega is interested in finding out, with sufficient accuracy, what those agents would do if they were in the real situation. But once we assign a nonzero chance that we're being simulated, and incorporate that possibility into our decision theories, we've corrupted the experiment because we're metagaming: we're no longer behaving as if we were in the real situation. Once we suspect we're being simulated, we're no longer useful as simulated subjects, which might entail that every simulated civilization that develops simulation theories runs the risk of having its simulation shut down.
I suppose the best thing to do is to tell you to shut up now, right?
This (your hypothesis) appears wrong, however. Assuming the simulation is accurate, the fact that we can think about the simulation hypothesis means that whatever is being simulated would also think about it. And if the simulation is inaccurate, the inaccuracy is no more likely to show up around the simulation hypothesis than anywhere else.
Although that depends on how we come by the hypothesis. If we come by it the way our world did, i.e. philosophers and other people making arguments without any evidence, then there's no special reason for us to diverge from whatever is being simulated. But if we had evidence (like the kind proposed in http://arxiv.org/abs/1210.1847 or similar proposals), then we would have a reason to believe that we weren't an exact simulation. In that case, we'd also have evidence of the simulation and would not have been shut down, so we'd know that your theory is wrong. OTOH, if you're correct, we shouldn't try to test the simulation hypothesis experimentally.
PSA: Thinking a thought that might cause you to have never existed, might cause you to have never existed. You might think that you are thinking that thought, but that's just what the logically impossible hypothetical of thinking it feels like from the inside. Think twice before you hypothetically think it.
(P.S. Noticing that you are certain to be right to worry about it seems to be an example of such a thought, for our world. Like correctly believing anything else that’s false in a suitable sense. As far as I know.)
How would you act differently even if we assume that your whole life merely exists inside a simulation? You still have to live the life you've been given—it's not like you can break out of the simulation and go take your real life back. Your actions in the simulation still have their usual effects on the life in the simulation. The only case where it matters is if the simulator wants you to behave in certain ways and will reward you accordingly (either rewarding real-you or moving you to a nicer simulation), but that's just a different way to talk about religion.
Imagine that you learn tomorrow that we're in a simulation, because scientists did a test and found a bug in the program. Perhaps you would act differently? Maybe email all your friends about it, head over to LessWrong to discuss it, whatever. These things wouldn't happen in the original.
The main distinction is the way you’d learn about the simulation, like I said in my response.
Please define the difference between “bug in the simulation” and “previously unknown law of physics”.
That said, I do agree in principle. However, simulation theories are sufficiently obvious (at least to creatures that dream, build computers, etc.) that they can't count as corruption—it'd be weirder for a simulated civilization to not have them.
Plausible tests have been proposed that would seem to produce Bayesian evidence of simulation. To give an analogy, if tomorrow you heard a loud voice coming from Mount Sinai reciting the Ten Commandments, more of your probability would go to the theory "The Bible is more-or-less true and God's coming back to prove it" than to "there's a law of physics that makes sounds like this one happen at random times". In the same way, there are observations that are strictly more likely to occur if we're in a simulation than if not. Some are proposed in http://arxiv.org/abs/1210.1847 , and in other places as well.
This is not true in general. This is true for some particular kinds of simulations (e.g. your link says “we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization”), but not all of them.
Let’s rephrase: our expectations are different conditioning on simulation than on ~simulation.
The probability distribution of observations over possible simulation types is different from the probability distribution of observations over possible physics laws. If you disagree, then you need to hold that exactly the right kinds of simulations (with opposite effects) have exactly the right kind of probability to cancel out the effects of “particular kinds of simulations”. That seems a very strong claim which needs defending. Otherwise, there do exist possible observations which would be Bayesian evidence for simulation.
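To put a number on that cancellation claim, here is a minimal sketch (the sub-hypotheses, weights, and likelihoods are all invented for illustration, not taken from any paper): unless the weighted likelihoods across simulation sub-types average out to exactly the likelihood under ordinary physics, some observation carries Bayesian evidence one way or the other.

```python
# Illustrative only: made-up weights and likelihoods, not real physics.
# Sub-hypotheses within "simulation": some predict a detectable artifact
# (e.g. a lattice cutoff), others predict nothing we could notice.
sim_types = {
    "lattice_with_artifact": {"weight": 0.2, "p_obs": 0.50},
    "lattice_no_artifact":   {"weight": 0.3, "p_obs": 0.01},
    "non_lattice":           {"weight": 0.5, "p_obs": 0.01},
}

p_obs_given_sim = sum(t["weight"] * t["p_obs"] for t in sim_types.values())
p_obs_given_not_sim = 0.01  # chance of the same artifact under ordinary physics

print(p_obs_given_sim)                        # 0.108
print(p_obs_given_sim / p_obs_given_not_sim)  # likelihood ratio ~10.8
# The ratio equals 1 only if the weights are fine-tuned to cancel exactly;
# otherwise observing (or not observing) the artifact shifts P(simulation).
```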
I don't think my expectations are any different conditioning on simulation than on ~simulation.
That is a content-free statement. You have no idea about either of the distributions, about what “possible simulation types” there might be, or what “possible physics laws” might be.
Well, barring things which actually break the simulation (e.g. an alien teenager appearing in the sky and saying that his parents are making him shut off this sim, so goodbye all y’all), can you give me an example?
Any of the things proposed in papers with the same aims as the one I linked above. The reason I'm not giving specifics is that I don't know the technical points well enough to discuss them properly.
I wouldn’t be the one making the observations, physicists would, so my observation is “physicists announce a test which shows that we are likely to be living in a simulation” and it gets vetted by people with technical knowledge, replicated with better p-values, all the recent Nobel Physics prize winners look over it and confirm, etc. (Note: I’m explicitly outlawing something which uses philosophy/anthropics/”thinking about physics”. Only actual experiments. Although I’d expect only good ones to get past the bar I set, anyway, so that may not be needed.) I couldn’t judge myself whether the results mean anything, so I’d rely on consensus of physicists.
Using that observation: are you really telling me that your P(physicists announce finding evidence of simulation| simulation) == P(physicists announce finding evidence of simulation| ~simulation)?
Ugh, so all you have is an argument to authority? A few centuries ago scientists had a consensus that God exists. And?
No, I’m telling you that “evidence of simulation” is an expression which doesn’t mean anything to me.
To go back to Alsadius’ point, how are you going to distinguish between “this is a feature of the simulation” and “this is how the physical world works”?
I gave my observation, which is basically deferring to physicists.
“evidence of simulation” may not mean anything to you, but surely “physicists announce finding evidence of simulation” means something to you? Could you give an example of something that could happen where you wouldn’t be sure whether it counted as “physicists announce finding evidence of simulation”?
Right now, as I’m not trained in physics, I’d defer to the consensus of experts. I expect someone who wrote those kinds of papers would have a better answer for you.
Or is your problem of defining “evidence of simulation” something you’d complain about even if real experts used that in a paper?
Yes, "physicists announce finding evidence of simulation" means "somebody wanted publicity" (I don't think it would get as far as grants).
Yes, of course. I do not subscribe to the esoteric-knowledge-available-only-to-high-priests view of science.
Which is why I laid out a bunch of additional steps needed above: vetting by people with technical knowledge, replication with better p-values, confirmation by the recent Nobel Prize winners, etc.
You seem to be taking parts of my argument out of context.
Me neither, but I'm trying to use a hypothetical paper as a proxy because I'm not well versed enough to talk about specifics. On some level you have to accept arguments from authority. (Or do you either reject quantum mechanics, or have you seen the evidence yourself?) Imagine that simulation was as well established in physics as quantum mechanics is now. I find it very hard to say that that occurrence would be completely orthogonal to the truth of simulation.
The problem is that you offer nothing but an argument from authority.
Well, of course I have seen evidence myself. The computer I use to type these words relies on QM to work, the dual wave-particle nature of light is quite apparent in digital photography, NMR machines in hospitals do work, etc.
In any case, let me express my position clearly.
I do not believe it possible to prove we’re NOT living in a simulation.
The question of whether it's possible to prove we ARE living in a simulation is complex. Part of the complexity involves the meaning of "simulation" in this context. For example, if we assume that there is an omnipotent Creator of the universe, can we call this universe "a simulation"? It might be possible to test whether we are in a specific kind of simulation (see the paper you linked to), but I don't think it's possible to test whether we are in some unspecified, unknown simulation.
My position is that it is possible for us to get both Bayesian evidence for and against simulation. I was not talking at all about “proof” in the sense you seem to use it.
If it’s possible to get evidence for a “specific kind of simulation”, then lacking that evidence is weak evidence against simulation. If we test many different possible simulation hypotheses and don’t find anything, that’s slightly stronger evidence. It’s inconsistent to say that we can’t prove ~simulation but can prove simulation.
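As a minimal sketch of that "absence of evidence" point (the prior and the per-test likelihoods below are invented for illustration), each null result from a test of a specific simulation hypothesis nudges P(simulation) down a little:

```python
# Illustrative only: invented prior and likelihoods.
p_sim = 0.5  # prior probability of "we are in some simulation"

# Each test: (P(positive result | simulation), P(positive result | ~simulation)).
# The first number averages over how likely the various simulation sub-types
# are to produce the artifact the test looks for.
tests = [(0.10, 0.01), (0.05, 0.01), (0.20, 0.02)]

for p_pos_sim, p_pos_not in tests:
    # Bayes update on a NULL result (the test finds nothing).
    num = (1 - p_pos_sim) * p_sim
    p_sim = num / (num + (1 - p_pos_not) * (1 - p_sim))
    print(round(p_sim, 3))
# Prints roughly 0.476, 0.466, 0.416: weak evidence against simulation
# accumulates with each null result, but never gets anywhere near zero.
```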
I'm curious if you understand QM well enough to say that computers wouldn't work without it. Is there no possible design for computers in classical physics that we would recognize as a computer? Couldn't QM be false and all these things work differently, and you'd have no way of knowing? Whatever you say, I doubt there are no areas in your life where you just rely on authority without understanding the subject. If not physics, then medicine, or something else.
Of course there is—from Babbage to the mechanical calculators of the mid-XX century. But I didn't mean computers in general—I meant the specific computer that I'm typing these words on, the computer that relies on semiconductor microchips.
How do you know your computer relies on semiconductor microchips? Could you explain to me why semiconductor microchips require QM to work?
I looked :-)
See e.g. this.
Although I can’t think of any way that I personally would behave differently based on a belief that I exist in a simulation, Nick Bostrom suggests a pretty interesting reason why an AI might, in chapter 9 of Superintelligence (in Box 8). Specifically, an AI that assigns a non-zero probability to the belief that it might exist in a simulated universe might choose not to “escape from the box” out of a concern that whoever is running the simulation might shut down the simulation if an AI within the simulation escapes from the box or otherwise exhibits undesirable behavior. He suggests that the threat of a possibly non-existent simulator could be effectively exploited to keep an AI “inside of the box”.
Unless there’s a flow of information from outside the simulation to inside of it, you have zero evidence of what would cause the simulators to shut down the machine. Trying to guess is futile.
Bostrom suggested that a simulation containing an AI that is expanding throughout (and beyond) the galaxy and utilizing resources at a galactic level would be more expensive from a computational standpoint than a simulation that did not contain such an AI. Presumably this would be the case because a simulator would take computational shortcuts and simulate regions of the universe that are not being observed at a much coarser granularity than those parts that are being observed. So, the AI might reason that the simulation in which it lives would grow too expensive computationally for the simulator to continue to run. And, since having the simulation shut down would presumably interfere with the AI achieving its goals, the AI would seek to avoid that possibility.
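As a toy illustration of that kind of shortcut (the cost numbers and the "regions" model are invented for the example, not taken from Bostrom), simulating only the observed regions in full detail keeps the cost manageable until observers spread everywhere:

```python
# Toy cost model, invented for illustration: a universe of many regions,
# each simulated at fine granularity only while something is observing it.
FINE_COST = 1000   # cost units per region simulated in full detail
COARSE_COST = 1    # cost units per region simulated approximately

def simulation_cost(total_regions, observed_regions):
    unobserved = total_regions - observed_regions
    return observed_regions * FINE_COST + unobserved * COARSE_COST

# A civilization confined to one region vs. an AI observing every region:
print(simulation_cost(total_regions=10**6, observed_regions=1))      # ~1e6
print(simulation_cost(total_regions=10**6, observed_regions=10**6))  # 1e9
# A galaxy-colonizing AI forces almost everything to be rendered in full
# detail, which is the cost blow-up the boxed AI might worry about.
```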
Observed by what? For this to make sense there’d need to be no life anywhere in the universe but here that could be relevant to the simulation.
Actually, all it requires is that the universe is somewhat sparsely populated—there is no requirement that there must be no life anywhere but here.
Furthermore, for all we know, maybe there is no life in the universe anywhere but here.
There's no reason to limit simulation to one level, nor to privilege "real" as any special thing. All reality is emergent from a set of (highly complex, or maybe not) rules. This is true of n=0 ("reality", or "the natural simulation"), as well as every n+1 (where a level-n entity simulates something).
It’s turtles all the way up.
Put another way, the entities running a simulation wonder whether they themselves are being simulated, so it's exactly proper for the entities they simulate to wonder too, for exactly the same reasons. I suspect that in every universe, thinking processes that can consider simulation will consider that they might be simulated.
I don’t know if they’ll reach the conclusion that it doesn’t matter—finding the boundaries of the simulation is exactly identical to finding the boundaries of a “natural” universe, and we’re gonna try to do so.
However, see my point about how the method of learning about the simulation matters for an imperfect-fidelity simulation.
Any being that does not at some point consider the possibility that it is inside a simulation, is not worth simulating.