“Are the subsequent experiences of the copies “mine” relative to this self? If so, then it is certain that “I” will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here.”
No. Assume you have already been copied and you know you are one of the software versions. (Some proof of this has been provided). What you don’t know is whether you are in a red ball simulation or a blue ball simulation. You do know that there are a lot of (identical—in the digital sense) red ball simulations and one blue ball simulation. My view on this is that you should presume yourself more likely to be in the red ball simulation.
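To put rough numbers on the copy-counting intuition (this is just an illustrative sketch of my own, with made-up copy counts, not part of the scenario itself):

```python
# Copy-weighted self-location: if you weight "which simulation am I in?"
# by the number of identical instantiations, your credence of being in a
# red ball simulation is simply the fraction of copies that are red ball
# simulations.
def credence_red(n_red_copies: int, n_blue_copies: int) -> float:
    return n_red_copies / (n_red_copies + n_blue_copies)

print(credence_red(999, 1))  # 0.999 under copy-counting
# The "copies don't count" view discussed below would instead assign 0.5
# to each program-type, however many copies of each are running.
```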
Some people say that the probability is 50⁄50 because copies don’t count. I would make these points:
- sensitivity, which you clearly know about.
- it is hard to say where each program starts and ends. For example, we could say that the room with each red ball simulation computer in it is a simulation of a room with a red ball simulation computer in it—in other words, the physical environment around the computer could validly be considered part of the program. It is trivial to argue that a physical system is a valid simulation of itself. As each computer is going to be in a slightly different physical environment, it could be argued that this means that all the programs are different, even if the digital representation put into the box by the humans is the same. The natural tendency of humans is just to focus on the 1s and 0s—which is just a preferred interpretation.
- Humans may say that each program is “digitally” the same, but the underlying physical data could be interpreted slightly differently. For example, one program run may have a voltage of 11.964V in a certain switch at a certain time, while another run may have a voltage of 11.985V representing the same binary value. It could be argued that this makes them different programs, each of which is simulating a computer with an uploaded mind on it with different voltages in the switches (again, using the idea that a thing is also a computer simulation of that thing, if we are going to start counting simulations).
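To illustrate the voltage point, here is a toy sketch of my own (the voltage values and the threshold are invented for illustration, not anything from the discussion):

```python
# Two physically distinct runs (different switch voltages) only become
# the "same" program once a digital interpretation is chosen: here, a
# threshold that maps each voltage to a bit.
run_a = [11.964, 0.021, 11.971, 0.013]   # volts at some switches, run 1
run_b = [11.985, 0.034, 11.990, 0.008]   # a physically distinct run 2

def digitize(voltages, threshold=6.0):
    return [1 if v > threshold else 0 for v in voltages]

print(digitize(run_a) == digitize(run_b))  # True: identical "in the digital sense"
print(run_a == run_b)                      # False: different physical systems
```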
I just think that when we try to go for 50⁄50 (copies don’t count), we can get into a huge mess that a lot of people miss. While I don’t think you agree with me, I think maybe you can see this mess.
“While this is also a valid and interesting scenario to consider, I don’t think it “deals with the objection”. The idea that “which computer am I running on?” is a meaningful question for someone whose experiences have multiple encodings in an environment seems pretty central to the discussion.”
I think the suggested scenario makes it meaningful. There is also the issue of turning off some of the machines. If you know you are running on a billion identical machines, and that 90% of them are about to be turned off, it could become an important issue for you. It would make things very similar to what is regarded as “quantum suicide”.
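As a rough way of putting numbers on this (my own illustrative figures, and only under the copy-counting view—not a claim the discussion settles):

```python
# Under copy-counting, the credence that the instantiation you are "in"
# right now is one of the machines about to be switched off is just the
# fraction of machines being switched off.
machines = 1_000_000_000
switched_off = int(machines * 0.9)

print(switched_off / machines)  # 0.9
# The "copies don't count" view assigns the shutdown no significance,
# since at least one copy of the single program keeps running.
```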
We can also consider another situation:
You have a number of computers, all running the same program, and something in the external world is going to affect these computers—for example, a visitor from the outside world will “log in” and visit you. We could discuss the probability of meeting the visitor while the simulations are all identical.
“This is why I think I/O is important, because a mind may depend on a subjective environment to function. If this is the case, removal of the environment is basically removal of the mind.”
I don’t know if I fully understood that—are you suggesting that a reclusive AI or uploaded brain simulation would not exist as a conscious entity?
As you asked me about Permutation City (Greg Egan’s novel) before, I will elaborate on that a bit.
The “dust hypothesis” in Permutation City was the idea that all the bits of reality could be stuck together in different ways, to get different universes. The idea here is that every interpretation of an object, or part of an object, that can be made, in principle, by an interpretative algorithm, exists as an object in its own right. Egan’s argument applies this to minds, but I would clearly have to claim it applies to everything to avoid being some kind of weird dualist. It is therefore a somewhat more general view.

Egan’s cosmology requires a universe to exist to get scrambled up in different ways. With a view like this, you don’t need to assume anything exists. While a lot of people would find this counter-intuitive, if you accept that interpretations that produce objects produce real objects, there is nothing stopping you from producing an object by interpreting very little data, or no data at all. In this kind of view, even if you had nothing except logic, interpretation algorithms that could be applied in principle with no input—on nothing at all—would still describe objects, which this kind of cosmology would say would have to exist as abstractions of nothing. Further objects would exist that would be abstractions of these. In other words, if we take the view that every abstraction of any object physically exists as a definition of the idea of physical existence, it makes the existence of a physical reality mandatory.
“Of course, this leads to the problem of interpretation, which suggests to me that “information” and “algorithm” may be ill-defined concepts except in terms of one another. This is why I think I/O is important, because a mind may depend on a subjective environment to function.”
I simply take universal realizability at face value. That is my response to this kind of issue. It frees me totally from any concerns about consistency—and the use of measure even makes things statistically predictable.
“Assume you have already been copied and you know you are one of the software versions. (Some proof of this has been provided). What you don’t know is whether you are in a red ball simulation or a blue ball simulation. You do know that there are a lot of (identical—in the digital sense) red ball simulations and one blue ball simulation. My view on this is that you should presume yourself more likely to be in the red ball simulation.”
Ah, this does more precisely address the issue. However, I don’t think it changes my inconclusive response. As my subjective experiences are still identical up until the ball is drawn, I don’t identify exclusively with either substrate and still anticipate a future where “I” experience both possibilities.
“As each computer is going to be in a slightly different physical environment, it could be argued that this means that all the programs are different, even if the digital representation put into the box by the humans is the same.”
If this is accepted, it seems to rule out the concept of identity altogether, except as excruciatingly defined over specific physical states, with no reliance on a more general principle.
“The natural tendency of humans is just to focus on the 1s and 0s—which is just a preferred interpretation.”
Maybe sometimes, but not always. The digital interpretation can come into the picture if the mind in question is capable of observing a digital interpretation of its own substrate. This relies on the same sort of assumption as my previous example involving self-observability.
“I just think that when we try to go for 50⁄50 (copies don’t count), we can get into a huge mess that a lot of people miss. While I don’t think you agree with me, I think maybe you can see this mess.”
I’m not sure if we’re thinking of the same mess. It seems to me the mess arises from the assumptions necessary to invoke probability, but I’m willing to be convinced of the validity of a probabilistic resolution.
“If you know you are running on a billion identical machines, and that 90% of them are about to be turned off, it could become an important issue for you. It would make things very similar to what is regarded as “quantum suicide”.”
They do seem similar. The major difference I see is that quantum suicide (or its dust analogue, Paul Durham running a lone copy and then shutting it down) produces near-certainty in the existence of an environment you once inhabited, but no longer do. Shutting down extra copies with identical subjective environments produces no similar outcome. The only difference it makes is that you can find fewer encodings of yourself in your environment.
The visitor scenario seems isomorphic to the red ball scenario. Both outcomes are guaranteed to occur.
“I don’t know if I fully understood that—are you suggesting that a reclusive AI or uploaded brain simulation would not exist as a conscious entity?”
No, I was pointing out the only example I could synthesize where substrate dependence made sense to me. A reclusive AI or isolated brain simulation by definition doesn’t have access to the environment containing its substrate, so I can’t see what substrate dependence even means for them.
“In other words, if we take the view that every abstraction of any object physically exists as a definition of the idea of physical existence, it makes the existence of a physical reality mandatory.”
I don’t think I followed this. Doesn’t any definition of the idea of physical existence mandate a physical reality?
“I simply take universal realizability at face value. That is my response to this kind of issue. It frees me totally from any concerns about consistency—and the use of measure even makes things statistically predictable.”
I still don’t see where you get statistics out of universal realizability. It seems to imply that observers require arbitrary information about a system in order to interpret that system as performing a computation, but if the observers themselves are defined to be computations, the “universality” is at least constrained by the requirement for correlation (information) between the two computations. I admit I find this pretty confusing; I’ll read your article on interpretation.