I’m saying that the simulation argument is wrong because it follows from a mistaken epistemic framework (SIA). Once you switch to the correct epistemic framework (UDT), the argument dissolves.
Roughly speaking, UDT says you should make decisions as if you decide for all of your copies. So, if there are copies of you inside and outside simulations, you should take all of them into account. Now, if all the copies inside simulations are located in the far future with respect to the copy outside simulations (e.g. because those copies were created by a post-human civilization), you can usually disregard them because of the temporal discount in the utility function. On the other hand, you can consider the possibility that all copies are inside simulations. But once you go to the Tegmark IV multiverse, that is not a real possibility, since you can always imagine a universe in which you are not inside a simulation. The only question is the relative weight (“magic reality fluid”) of this universe. Since weight is 2^{-Kolmogorov complexity}, if the simplest hypothesis explaining your universe doesn’t involve embedding it in a simulation, you should act as if you’re not in a simulation. If the simplest hypothesis explaining your universe does involve embedding it in a simulation (e.g. because the Creator just spoke to you yesterday), you should behave as if you’re in a simulation. So Egan’s law is intact.
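To make the 2^{-Kolmogorov complexity} weighting concrete, here is a toy sketch (my own illustration, not from the thread, with made-up description lengths): two hypotheses about your universe, one where the simplest program outputs it directly and one where it is embedded in a simulator's universe, compared by their relative weight.

```python
# Toy illustration of the 2^(-K) "reality fluid" weighting.
# The description lengths below are invented for the example;
# real Kolmogorov complexities are uncomputable in general.

def weight(k_bits: float) -> float:
    """Relative weight of a hypothesis with description length k_bits."""
    return 2.0 ** (-k_bits)

K_BASE = 100       # assumed: simplest program producing your universe directly
K_SIMULATED = 120  # assumed: same universe, embedded in a simulator's universe

w_base = weight(K_BASE)
w_sim = weight(K_SIMULATED)

# Odds-style comparison: how much of the total weight sits in the
# simulation hypothesis. With a 20-bit complexity gap this is ~2^-20,
# so you should act as if you are not in a simulation.
p_sim = w_sim / (w_base + w_sim)
```

The point of the sketch is only that the decision flips with the sign of the complexity gap: if the simulation hypothesis were the shorter description (e.g. the Creator spoke to you yesterday), `p_sim` would be near 1 instead.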
But going back one step: Would you agree that my argument is valid in the ‘wrong’ framework I used?
I think Occam’s razor still applies even if we are in a simulation. It’s just more difficult to apply. It would probably involve something like trying to guess the motivations of the Creators and updating on that.
> I’m saying that the simulation argument is wrong because it follows from a mistaken epistemic framework (SIA). Once you switch to the correct epistemic framework (UDT) the argument dissolves.
You might have indicated that you wanted to apply a different framework than the one implied by my reference to the simulation argument.
I might agree with your reasoning, but I need more input on this:
Can you give me a ref for this? I don’t see how it obviously follows.
> But going back one step: Would you agree that my argument is valid in the ‘wrong’ framework I used?
The best ref I could find is this.
> Roughly speaking, UDT says you should make decisions as if you decide for all of your copies. So, if there are copies of you inside and outside simulations, you should take all of them into account. Now, if all the copies inside simulations are located in the far future with respect to the copy outside simulations (e.g. because those copies were created by a post-human civilization), you can usually disregard them because of the temporal discount in the utility function. On the other hand, you can consider the possibility that all copies are inside simulations. But once you go to the Tegmark IV multiverse, that is not a real possibility, since you can always imagine a universe in which you are not inside a simulation. The only question is the relative weight (“magic reality fluid”) of this universe. Since weight is 2^{-Kolmogorov complexity}, if the simplest hypothesis explaining your universe doesn’t involve embedding it in a simulation, you should act as if you’re not in a simulation. If the simplest hypothesis explaining your universe does involve embedding it in a simulation (e.g. because the Creator just spoke to you yesterday), you should behave as if you’re in a simulation. So Egan’s law is intact.
> I think Occam’s razor still applies even if we are in a simulation. It’s just more difficult to apply. It would probably involve something like trying to guess the motivation of the Creators and update on that.
That thread is inconclusive. It basically calls for an explanatory post too. But thanks for giving it.
As that builds on the conclusion, I take it to mean that you basically agree.
What follows from this result for e.g. acting depends on lots of factors I don’t want to discuss further in this thread.
Tag out.