If the same computation is being run in so-called ‘basement reality’ and run on a simulator’s computer, you’re in both places; it’s meaningless to talk about the probability of being in one or the other.
Why meaningless? It seems I can talk about one copy of me being here, now, and one copy of myself being off in the future in a simulation. Perhaps I do not know which one I am, but I don’t think I am saying something meaningless to assert that I (this copy of me that you hear speaking) am the one in basement reality, and hence that no one in any reality knows in advance that I am about to close this sentence with a hash mark#
I’m not asking you to bear the burden of proving that non-basement versions are numerous. I’m asking you to justify your claim that when I use the word “I” in this universe, it is meaningless to say that I’m not talking about the fellow saying “I” in a simulation and that he is not talking (in part) about me. Surely “I” can be interpreted to mean the local instance.
Both copies will do exactly the same thing, right down to their thoughts, right? So to them, what does it matter which one they are? It isn't just that they have no way to test and so will never know; it's more fundamental than that. It's kinda like how if there's an invisible, immaterial dragon in your garage, there might as well not be a dragon there at all, right? If there's no way, even in principle, to tell the difference between the two states, there might as well not be any difference at all.
I must be missing a subtlety here. I began by asking “Is saying X different from saying Y?” I seem to be getting the answer “Yes, they are different. X is meaningless because it can’t be distinguished from Y.”
Ah, I think I see your problem. You insist on seeing the universe from the perspective of the computer running the program—and in this case, we can say “yes, in memory position #31415926 there’s a human in basement reality and in memory position #2718281828 there’s an identical human in a deeper simulation”. However, those humans can’t tell that. They have no way of determining which is true of them, even if they know that there is a computer that could point to them in its memory, because they are identical. You are every (sufficiently) identical copy of yourself.
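The point about the outside view versus the inside view can be sketched in code. This is a toy illustration of my own (the memory positions and the agent function are made up, echoing the numbers above), not anything anyone in the thread wrote: the simulator can label the copies by address, but no question a copy can ask from the inside distinguishes it from the other.

```python
def agent(question):
    """A deterministic agent: same question, same answer, every run."""
    return f"My answer to {question!r} is fixed by my program."

# The outside view: the computer running the program can point to each copy
# by its (hypothetical) memory position and say which is "basement reality".
copies = {31415926: agent, 2718281828: agent}

# The inside view: every introspective probe returns identical results,
# so the copies cannot tell themselves apart.
answers = {pos: fn("Which copy am I?") for pos, fn in copies.items()}
assert len(set(answers.values())) == 1
```

The labels exist only in the simulator's bookkeeping; nothing inside the computation carries them, which is the sense in which "you are every sufficiently identical copy of yourself."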
No, you don’t see the problem. The problem is that Will_Newsome began by stating:
We are living in a simulation… Almost certain. >99.5%.
Which is fine. But now I am being told that my counterclaim "I am not living in a simulation" is meaningless. Meaningless because I can't prove my statement empirically.
What we seem to have here is very similar to Gödel's version of St. Anselm's "ontological" proof of the existence of a simulation (i.e. God).
Oh. Did you see my comment asking him to tell whether he meant "some of our measure is in a simulation" or "this particular me is in a simulation"? The first question is asking whether or not we believe that the computer exists (i.e., if we were looking at the computer-that-runs-reality, could we notice that some copies of us are in simulations or not) and the second is the one I have been arguing is meaningless (kinda).
Right; I thought the intuitive gap here was only about ensemble universes, but it also seems that there's an intuitive gap that needs to be filled with UDT-like reasoning, where all of your decisions are also decisions for agents sufficiently like you in the relevant sense (which differs for every decision).