How is that different from “I believe that I am in a simulation with non-negligible probability”?
If the same computation is being run in so-called ‘basement reality’ and run on a simulator’s computer, you’re in both places; it’s meaningless to talk about the probability of being in one or the other. But you can talk about the relative number of computations of you that are in ‘basement reality’ versus on simulators’ computers.
This also breaks down when you start reasoning decision theoretically, but most LW people don’t do that, so I’m not too worried about it.
In a dovetailed ensemble universe, it doesn’t even really make sense to talk about any ‘basement’ reality, since the UTM computing the ensemble eventually computes itself, ad infinitum. So instead you start reasoning about ‘basement’ as computations that are the product of e.g. cosmological/natural selection-type optimization processes versus the product of agent-type optimization processes (like humans or AGIs).
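(For concreteness, here’s a minimal sketch of what ‘dovetailing’ means operationally. The names dovetail, programs, init, and step are hypothetical placeholders for an enumeration of programs and a single-step interpreter, not anything specified in this thread; the only point is that interleaving gives every program, including one that computes the dovetailer itself, unboundedly many steps.)

    from itertools import count

    def dovetail(programs, init, step):
        # programs:   an infinite iterator of program descriptions
        # init(p):    returns program p's initial state
        # step(p, s): advances program p by one step from state s, returning the new state
        running = []                          # (program, state) pairs started so far
        progs = iter(programs)
        for n in count(1):
            p = next(progs)                   # stage n: start the n-th program...
            running.append((p, init(p)))
            for i, (q, s) in enumerate(running):
                running[i] = (q, step(q, s))  # ...and advance every started program one step
            # never returns: every started program gets arbitrarily many further steps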
The only reason you’d expect there to be humans in the first place is if they appeared in ‘basement’-level reality, and in a universal dovetailer that weights computations by complexity, there’s then a strong burden of proof on those who wish to postulate the extra complexity of all those non-basement, agent-optimized Earths. Nonetheless I feel like I can bear that burden of proof quite well if I throw a few other disjunctions in. (As stated, it’s meaningless decision-theoretically, but meaningful if we’re just talking about the structure of the ensemble from a naive human perspective.)
If the same computation is being run in so-called ‘basement reality’ and run on a simulator’s computer, you’re in both places; it’s meaningless to talk about the probability of being in one or the other.
Why meaningless? It seems I can talk about one copy of me being here, now, and one copy of myself being off in the future in a simulation. Perhaps I do not know which one I am, but I don’t think I am saying something meaningless to assert that I (this copy of me that you hear speaking) am the one in basement reality, and hence that no one in any reality knows in advance that I am about to close this sentence with a hash mark#
I’m not asking you to bear the burden of proving that non-basement versions are numerous. I’m asking you to justify your claim that when I use the word “I” in this universe, it is meaningless to say that I’m not talking about the fellow saying “I” in a simulation and that he is not talking (in part) about me. Surely “I” can be interpreted to mean the local instance.
Both copies will do exactly the same thing, right down to their thoughts, right? So to them, what does it matter which one they are? It isn’t just that they have no way to test it and so will never know; it’s more fundamental than that. It’s kinda like how if there’s an invisible, immaterial dragon in your garage, there might as well not be a dragon there at all, right? If there’s no way, even in principle, to tell the difference between the two states, there might as well not be any difference at all.
I must be missing a subtlety here. I began by asking “Is saying X different from saying Y?” I seem to be getting the answer “Yes, they are different. X is meaningless because it can’t be distinguished from Y.”
Ah, I think I see your problem. You insist on seeing the universe from the perspective of the computer running the program—and in this case, we can say “yes, in memory position #31415926 there’s a human in basement reality and in memory position #2718281828 there’s an identical human in a deeper simulation”. However, those humans can’t tell that. They have no way of determining which is true of them, even if they know that there is a computer that could point to them in its memory, because they are identical. You are every (sufficiently) identical copy of yourself.
No, you don’t see the problem. The problem is that Will_Newsome began by stating:
We are living in a simulation… Almost certain. >99.5%.
Which is fine. But now I am being told that my counterclaim “I am not living in a simulation” is meaningless. Meaningless, apparently, because I can’t prove my statement empirically.
What we seem to have here is something very similar to Gödel’s version of St. Anselm’s “ontological” proof of the existence of a simulation (i.e. God).
Oh. Did you see my comment asking him to say whether he meant “some of our measure is in a simulation” or “this particular me is in a simulation”? The first question is asking whether or not we believe that the computer exists (i.e., if we were looking at the computer-that-runs-reality, could we notice that some copies of us are in simulations or not), and the second is the one I have been arguing is meaningless (kinda).
Right; I thought the intuitive gap here was only about ensemble universes, but it also seems that there’s an intuitive gap that needs to be filled with UDT-like reasoning, where all of your decisions are also decisions for agents sufficiently like you in the relevant sense (which differs for every decision).
In a dovetailed ensemble universe, it doesn’t even really make sense to talk about any ‘basement’ reality, since the UTM computing the ensemble eventually computes itself, ad infinitum.
I don’t get this. Consider the following ordering of programs: T’ < T iff T can simulate T’. More precisely:
T’ < T iff for each x’ there exists an x such that T’(x’) = T(x)
It’s not immediately clear to me that this ordering shouldn’t have any least elements. If it did, such elements could be thought of as basements. I don’t have any idea about whether or not we could be part of such a basement computation.
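(To pin down the notation, here is the relation above and what a least element would be, written out; this adds nothing beyond the parent definition, and whether ‘least’ or merely ‘minimal’ is the right notion of a basement is left open:)

\[
  T' \preceq T \;\iff\; \forall x'\, \exists x \;\bigl( T'(x') = T(x) \bigr),
  \qquad
  B \text{ is least} \;\iff\; \forall T \; B \preceq T .
\]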
I still think your distinction between products of cosmological-type optimization processes and agent-type optimization processes is important though.