Based on my opinions about LW topics, I closed with the lemma:
"Arguments for the singularity are also (weak) arguments for theism."
I’d like to know whether there is anything wrong with the following reasoning:
If the singularity is likely, then it is (somewhat less) likely that it will be used to run what Bostrom calls an ancestor simulation. Since we cannot tell the difference, it follows that it is likely that we are already in a simulation.
If we are in a simulation, then the physical parameters don’t necessarily follow simple rules (Occam’s razor) but may be altered in complex ways to suit the ends of the actor running the simulation. The alteration may take many forms, but one form is to allow an interaction of the actor with the simulation, somewhat like in a computer game but ‘infinitely’ more ‘real’.
The actor could choose to be a god in the simulation. Whether the actor chooses to do so in any given simulation is another question, which of course reduces the likelihood for our universe. He could, for example, play for a bit and then lose interest, or look at the results to compare them to models of religion (or provide hell/heaven if he so chooses).
The key point is that under the simulation argument, Occam’s razor no longer applies to gods and supernatural effects. In the end, increasing the likelihood of, or finding arguments for, the singularity carries over to support theism, albeit with the caveat that it presumes an embedding universe, though this is not really new.
It is not strictly meaningful to ask “are we in a simulation” since there are copies of us both inside simulations and outside of them. However, if it is possible to demonstrate decision problems in which the optimal decision depends on whether the problem is nested in a simulation, then it is meaningful to ask how to make the decision.
If all the copies of you that exist in a simulation either exist in complex universes (compared to the universe in which you are not in a simulation) or very late in time (so that they are strongly affected by the temporal discount in the utility function), you should behave as if you are not in a simulation.
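The role of the temporal discount here can be made concrete with a toy calculation. This is only a sketch; the discount rate and time horizon below are illustrative assumptions, not figures from the thread:

```python
# Toy illustration of why a temporal discount can make far-future
# simulated copies negligible in an expected-utility calculation.
# The discount rate and time horizon are made-up numbers.

def discounted_weight(years_in_future: float, annual_discount: float = 0.99) -> float:
    """Exponential temporal discount applied to a copy's utility weight."""
    return annual_discount ** years_in_future

# A copy outside a simulation, located "now":
w_now = discounted_weight(0)

# A simulated copy run by a post-human civilization, say 10,000 years out:
w_future = discounted_weight(10_000)

print(w_now, w_future)  # the future copy's weight is astronomically small
```

Under any exponential discount, copies far enough in the future contribute essentially nothing, which is why they can "usually" be disregarded.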
Take my usage to mean: “is our outermost copy in a simulation?”
The outermost copy is never in a simulation, since the content of any simulation exists in the Tegmark IV multiverse as a universe in itself.
I’m not sure whether you are intentionally misunderstanding me (possibly to put me on a more abstract track) or whether you see that I’m discussing the issue at the same level as the simulation argument does: namely, in a scenario where we have nested structures (simulations, simulated ‘sub-universes’, all of course just part of some large mathematical structure) and certain probability relations hold between these nestings.
I’m saying that the simulation argument is wrong because it follows from a mistaken epistemic framework (SIA, the Self-Indication Assumption). Once you switch to the correct epistemic framework (UDT, Updateless Decision Theory), the argument dissolves.
You might have indicated that you wanted to apply a different framework from the one implied by my reference to the simulation argument.
I might agree with your reasoning, but I need more input on this:
Can you give me a ref for this? I don’t see how it obviously follows.
But going back one step: Would you agree that my argument is valid in the ‘wrong’ framework I used?
The best ref I could find is this
Roughly speaking, UDT says you should make decisions as if you decide for all of your copies. So, if there are copies of you inside and outside simulations, you should take all of them into account. Now, if all the copies inside simulations are located in the far future with respect to the copy outside simulations (e.g. because those copies were created by a post-human civilization), you can usually disregard them because of the temporal discount in the utility function. On the other hand, you can consider the possibility that all copies are inside a simulation. But once you go to the Tegmark IV multiverse, this is not a real possibility, since you can always imagine a universe in which you are not inside a simulation. The only question is the relative weight (“magic reality fluid”) of this universe. Since the weight is 2^{-Kolmogorov complexity}, if the simplest hypothesis explaining your universe doesn’t involve embedding it in a simulation, you should act as if you’re not in a simulation. If the simplest hypothesis explaining your universe does involve embedding it in a simulation (e.g. because the Creator just spoke to you yesterday), you should behave as if you’re in a simulation. So Egan’s law is intact.
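The weighting rule above (weight = 2^{-Kolmogorov complexity}) can be sketched as a toy prior. Kolmogorov complexity is uncomputable, so the description lengths below are hypothetical stand-ins chosen purely for illustration:

```python
# Toy sketch of a complexity-weighted prior over hypotheses.
# Kolmogorov complexity is uncomputable; the bit counts below are
# hypothetical stand-ins, not real measurements.

hypotheses = {
    # hypothesis: assumed description length in bits
    "simple physics, no simulation": 100,
    "same physics embedded in a simulation": 150,  # the embedding costs extra bits
}

def weight(complexity_bits: int) -> float:
    """Relative weight 2^(-K) of a hypothesis with description length K."""
    return 2.0 ** -complexity_bits

total = sum(weight(k) for k in hypotheses.values())
posterior = {h: weight(k) / total for h, k in hypotheses.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.3g}")
```

Unless the simulation hypothesis actually compresses your observations (e.g. observed miracles), the extra bits spent on the embedding make it lose by an exponential factor, which is the point about acting as if you’re not in a simulation.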
I think Occam’s razor still applies even if we are in a simulation. It’s just more difficult to apply. It would probably involve something like trying to guess the motivation of the Creators and update on that.
That thread is inconclusive. It basically calls for an explanatory post too. But thanks for providing it.
As that builds on the conclusion, I take it to mean that you basically agree.
What follows from this result for, e.g., how to act depends on many factors that I don’t want to discuss further in this thread.
Tag out.
Occam still applies to the parent universe (I think). And predictions about the parent universe imply predictions about its child simulations.
So a variant of Occam (or at least, a prior over universes) still applies to the simulation. There are 2^100 times as many possible universes of description length 200 as of description length 100, so each 100-length universe is 2^100 times as probable as each 200-length universe, if the simulators are equally likely to simulate each length of universe. This fails if e.g. the simulators run every possible universe of length <300. It also fails if they try to mess with us somehow, e.g. by only picking universes that superficially look like much simpler universes.
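The counting argument above can be checked with a little arithmetic, under the stated assumption that the simulators spread equal total probability over each description length:

```python
# There are 2^n binary descriptions of length exactly n.
def num_universes(length_bits: int) -> int:
    return 2 ** length_bits

# If the simulators assign equal total probability to each description
# length, the probability of any single universe of length n is:
def per_universe_prob(length_bits: int, prob_per_length: float = 1.0) -> float:
    return prob_per_length / num_universes(length_bits)

ratio = per_universe_prob(100) / per_universe_prob(200)
print(ratio == 2 ** 100)  # each 100-bit universe is 2^100 times as probable
```

This is exactly the assumption the paragraph flags: the conclusion collapses if the simulators weight lengths differently, e.g. by running every universe below some length cutoff.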
Many would claim that these types of god and the supernatural don’t count—a lot of definitions of “supernatural” preclude things that can be reduced and follow natural laws even if those aren’t our natural laws.
If you cannot observably distinguish between these, then the difference in how you represent them is kind of academic, isn’t it?