As the author of this article, I will reply, though it is hard to make much of a reply here. (I actually got here out of curiosity when I saw the site logs.) I am, however, always pleased to discuss issues like this with people. One issue with this reply is that it is not just randomness we have to worry about. If we are basing a computational interpretation on randomness, yes, we may need to make the computational interpretation progressively more extreme, but Searle’s famous example of WordStar running in a wall is just one example. We may not even have the computational interpretation based on randomness: it could conceivably be based on structure in something else, even though that structure would not be considered to be running the computer program except under a very forced interpretation. Where would we draw the line? Another point: why should it matter if we use a progressively more extreme interpretation? We might, for example, just want to say that a computation ran for 10 seconds, which relies on a fixed interpretation (if a complex one), and what happens after that may not interest us. Another issue is that the main argument had been about statistical issues with combining computers when considering probability; the whole thing had not been based on Searle, who, by the way, would not take me any more seriously.
We may not even have the computational interpretation based on randomness: it could conceivably be based on structure in something else, even though that structure would not be considered to be running the computer program except under a very forced interpretation. Where would we draw the line?
We would draw the line where our good old friend mutual information comes in. If learning the results of the other phenomenon tells you something about the results of the algorithm you want to run, then there is mutual information, and the phenomenon counts as a (partial) implementation of the algorithm.
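As a toy illustration of this criterion (a sketch only; the data and variable names here are made up, not anything from the article), mutual information between two discrete observation streams can be estimated directly from counts:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired discrete observations."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        p_indep = (px[x] / n) * (py[y] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

# A phenomenon whose readings track the algorithm's outputs carries
# mutual information; an independent pattern carries none.
algo_out  = [0, 1, 1, 0, 1, 0, 0, 1]
tracking  = [0, 1, 1, 0, 1, 0, 0, 1]   # perfectly correlated
unrelated = [0, 0, 0, 0, 1, 1, 1, 1]   # statistically independent

print(mutual_information(algo_out, tracking))   # → 1.0
print(mutual_information(algo_out, unrelated))  # → 0.0
```

On this proposal, the phenomenon counts as a (partial) implementation exactly to the extent that the first quantity is nonzero.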
This is an approach I considered back in the 1990s, and at the time I thought it was correct. I get the idea: we say that the “finding algorithm” somehow detracts from what is running. The problem is that this does not leave a clearly defined algorithm as the one being found. If X is found by F, you might say that all that runs is a “partial version of X” and that X only exists when found by F. This, however, would not just apply to deeply hidden algorithms. I could equally well apply it to your brain. I would have to run some sort of algorithm, F, on your brain to work out that some algorithm corresponding to you, X, is running. Clearly, that would be nothing like as severe as the extreme situations discussed in that article, but what does it mean for your status? Does it mean that the X corresponding to you does not exist? Are you “not all there” in some sense?
Here is a thought experiment:
A mind running in a VR system (suppose the two are one software package to make this easier) gradually encrypts itself. By this I mean that it goes through a series of steps, each intended to make it slightly more difficult to realize that the mind is there. There is no end to this. When does the mind cease to exist? When it is so hard to find that you would need a program as long as the one being hidden to find it? I say that is arbitrary.
You suggest that maybe the program running the mind just exists “partially” in some way, which I fully understand. What would the experience be like for the mind as the encryption gets more and more extreme? I say this causes issues, which are readily resolved if we simply say that the mind’s measure decreases.
I can also add a statistical issue to this, which I have not written up yet. (I have a lot to add on this subject. It may be obvious that I need to argue that this applies to everything, and not just minds, to avoid some weird kind of dualism.)
Suppose we have two simulations of you, running in VRs. One is about to look in a box and see a red ball. The other will see a blue ball. We subject the version that will see the blue ball to some process that makes it slightly harder to find. You don’t know which version you are. How strongly should you expect to see a blue ball when you look in the box? Do you say it is 50⁄50 that you will see a red ball or a blue ball? We keep increasing the “encryption” a bit each time I ask the question. If your idea that the mind is somehow only “partial” because it needs the finding algorithm to find it is right, I suggest we end up with statistical incoherency. We can only say that the probability is 50⁄50 when the situations are exactly the same, but that will never be the case in any real situation. For any situation, one mind will need a bit more finding than the other.
In other words, if you think the length of the finding algorithm makes the algorithm running a mind somehow “partial”, consider a statistical question with two possibilities, one in which your mind is harder to find than in the other, where you don’t know which situation you are in. When would you eliminate the “partial” mind as a possibility? If you say, “Never. As the encryption increases I would just say I am less and less likely to be in that situation,” you have effectively agreed with me by adopting an approach where each mind is as valid as the other (you accept either as a candidate for your situation but treat them differently with regard to statistics, which is what I do). If you say that one mind cannot be a candidate for your situation, then you have the issue of a cut-off point. What cut-off point? When would you say, “This mind is real. That mind is only partial, so it cannot be a candidate for my experience. Therefore, I am the first mind”?
I would point out that I do not ignore these issues. I address them by using measure. I take the view that a mind which takes more finding exists with less measure, because a smaller proportion of the set of all possible algorithms that could be used to find something like it will find something like it.
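The idea that measure is the proportion of finding algorithms that succeed can be put in a crude Monte Carlo sketch. Everything below (the search space, and treating a “finding algorithm” as a random guess at a location) is a stand-in assumption for illustration, not the article’s actual construction:

```python
import random

def fraction_finding(hidden_positions, space_size, trials=20_000):
    """Toy model: a pattern occupies some positions in a space, and a
    random 'finding algorithm' simply guesses one position. The fraction
    of guesses that land on the pattern stands in for the measure."""
    hits = sum(random.randrange(space_size) in hidden_positions
               for _ in range(trials))
    return hits / trials

# Hiding the same pattern in a larger space (i.e., making it harder
# to find) gives it a smaller measure.
easy = fraction_finding({0, 1, 2, 3}, 10)     # ≈ 0.4
hard = fraction_finding({0, 1, 2, 3}, 1000)   # ≈ 0.004
```

The point of the sketch is only the comparison: the better hidden the pattern, the smaller the fraction of finding procedures that locate it.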
Finally, this only deals with one issue. There is also the issue of combining computers in the statistical thought experiments that I mentioned in the first article of that series. My intention in that series is to try to show that these various issues demand that we take a particular view about minds and reality to maintain statistical coherency.
When does the mind cease to exist? [...] I take the view that a mind which takes more finding exists with less measure, because a smaller proportion of the set of all possible algorithms that could be used to find something like it will find something like it.
I’m running into trouble with the concept of “existence” as it’s being applied here. Surely existence of abstract information and processes must be relative to a chosen reference frame? The “possible algorithms” need to be specified relative to a chosen data set and initial condition, like “observable physical properties of Searle’s wall given sufficient locality”. Clearly an observer outside of our light cone couldn’t discern anything about the wall, regardless of algorithm.
An encrypted mind “existing less” doesn’t seem to carry any subjective consequences for the mind itself. What if a mind encrypts itself but shares the key with a few others? Wouldn’t its “existence” depend on whether or not the reference frame has access to the key?
If you’ve read it, I’m curious to know what you think of the “dust hypothesis” from Egan’s Permutation City in this context.
“Less measure” is only meant to be of significance statistically, not subjectively. For example, if you could exist in one of two ways, one with measure X and one with measure of 0.001X, I would say you should think it more likely you are in the first situation. In other words, I am agreeing (if you are arguing for this) that there should be no subjective difference for the mind in the extreme situation. I just think we should think that that situation corresponds to “less” observers in some way.
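The statistical claim here is just normalization over measures. As a minimal sketch (the situation labels are made up), self-locating probabilities come from dividing each measure by the total:

```python
def credence(measures):
    """Turn a dict of situation -> measure into self-locating probabilities."""
    total = sum(measures.values())
    return {situation: m / total for situation, m in measures.items()}

# Measures X and 0.001X: the common factor X cancels, so only the
# ratio of measures matters.
p = credence({"first": 1.0, "second": 0.001})
# p["first"] ≈ 0.999, p["second"] ≈ 0.001
```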
My own argument is actually a justification of something a bit like the dust hypothesis in “Permutation City”. However, there are some significant differences, so the analogy should not be pushed too far. I would say that the characters in Greg Egan’s novel undergo a huge decrease in measure, which could cause philosophical issues, though it would not feel different after it had happened to you.
I think we should consider this in terms of measure because there are “more ways to find you” in some situations than in others. It is almost like you have more minds in one situation than another—though there are no absolute numbers and really it should be considered in terms of density. If you want to see why I think measure is important, this first article may help: http://www.paul-almond.com/Substrate1.htm.
For example, if you could exist in one of two ways, one with measure X and one with measure of 0.001X, I would say you should think it more likely you are in the first situation. [...] I just think we should think that that situation corresponds to “less” observers in some way.
This seems tautological to me. Your measure needs to be defined relative to a given set of observers.
I think we should consider this in terms of measure because there are “more ways to find you” in some situations than in others.
More ways for who to find you?
If you want to see why I think measure is important, this first article may help
Very interesting piece. I’ll be thinking about the Mars colony scenario for a while. I do have a couple of immediate responses.
How likely is it that you are in Computer A, B or C?
As long as the simulations are identical and interact identically (from the simulation’s point of view) with the external world, I don’t think the above question is meaningful. A mind doesn’t have a geographical location, only implementations of it embedded in a coordinate space do. So A, B, and C are not disjoint possibilities, which means probability mass isn’t split between them.
The more redundancy in a particular implementation of a version of you, the more likely it is that that implementation is causing your experiences.
I see this the other way around. The more redundancy in a particular implementation, the more encodings of your own experiences you will expect to find embedded within your accessible reality, assuming you have causal access to the implementation-space. If you are causally disconnected from your implementation (e.g., run on hypothetical tamper-proof hardware without access to I/O), do you exist with measure zero? If you share your virtual environment with millions of other simulated minds with whom you can interact, do they all still exist with measure zero?
“As long as the simulations are identical and interact identically (from the simulation’s point of view) with the external world, I don’t think the above question is meaningful. A mind doesn’t have a geographical location, only implementations of it embedded in a coordinate space do. So A, B, and C are not disjoint possibilities, which means probability mass isn’t split between them.”
I dealt with this objection in the second article of the series. It would be easy to say that there are two simulations, in which slightly different things are going to happen. For example, we could have one simulation in which you are going to see a red ball when you open a box and one where you are going to see a blue ball. We could have lots of computers running the red ball situation and then combine them and discuss how this affects probability (if at all).
“The more redundancy in a particular implementation of a version of you, the more likely it is that that implementation is causing your experiences.”
Does this mean that if we had a billion identical simulations of you in a VR where you were about to see a red ball and one (different) simulation of you in a VR where you are about to see a blue ball, and all these were running on separate computers, and you did not know which situation you were in, you would not think it more likely you were going to see a red ball? (and I know a common answer here is that it is still 50⁄50 - that copies don’t count - which I can answer if you say that and which is addressed in the second article - I am just curious what you would say about that.)
“I see this the other way around. The more redundancy in a particular implementation, the more encodings of your own experiences you will expect to find embedded within your accessible reality, assuming you have causal access to the implementation-space. If you are causally disconnected from your implementation (e.g., run on hypothetical tamper-proof hardware without access to I/O), do you exist with measure zero? If you share your virtual environment with millions of other simulated minds with whom you can interact, do they all still exist with measure zero?”
I am not making any suggestion that there is any connection between measure, redundancy and whether or not you are connected to I/O. Whether you are connected to I/O does not interest me much. However, some particularly low measure situations may be hard to connect to I/O if they are associated with very extreme interpretations.
I dealt with this objection in the second article of the series. It would be easy to say that there are two simulations, in which slightly different things are going to happen.
While this is also a valid and interesting scenario to consider, I don’t think it “deals with the objection”. The idea that “which computer am I running on?” is a meaningful question for someone whose experiences have multiple encodings in an environment seems pretty central to the discussion.
Does this mean that if we had a billion identical simulations of you in a VR where you were about to see a red ball and one (different) simulation of you in a VR where you are about to see a blue ball, and all these were running on separate computers, and you did not know which situation you were in, you would not think it more likely you were going to see a red ball?
I actually don’t have a good answer to this, and the flavor of my confusion leads me to suspect the definitions involved. I think the word “you” in this context denotes something of an unnatural category. To consider the question of anticipating different experiences, I have to assume a specific self exists prior to copying. Are the subsequent experiences of the copies “mine” relative to this self? If so, then it is certain that “I” will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here.
I know a common answer here is that it is still 50⁄50 - that copies don’t count—which I can answer if you say that and which is addressed in the second article
50⁄50 makes sense to me only as far as it represents a default state of belief about a pair of mutually exclusive possibilities in the absence of any relevant information, but the exclusivity troubles me. I read objection 9, and I’m not bothered by the “strange” conclusion of sensitivity to minor alterations (perhaps this leads to contradictions elsewhere that I haven’t perceived?). I agree that counting algorithms is just a dressed-up version of counting machines, because the entire question is predicated on the algorithms being subjectively isomorphic (they’re only different in that some underlying physical or virtual machine is behaving differently to encode the same experience).
Of course, this leads to the problem of interpretation, which suggests to me that “information” and “algorithm” may be ill-defined concepts except in terms of one another. This is why I think I/O is important, because a mind may depend on a subjective environment to function. If this is the case, removal of the environment is basically removal of the mind. A mind of this sort, subjectively dependent on its own substrate, can be “destroyed” relative to observers of the environment, as they now have evidence for the following reasoning:
Mind M cannot logically exist except as self-observably embedded in environment E. So if E lacks such an encoding, M cannot exist.
I have observed E, and have sound reasons (local to E) to doubt the existence of a suitable encoding of M.
Therefore, M does not exist.
So far, this is the only substrate dependence argument I find convincing, but it requires the explicit dependence of M on E, which requires I/O.
“Are the subsequent experiences of the copies “mine” relative to this self? If so, then it is certain that “I” will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here.”
No. Assume you have already been copied and you know you are one of the software versions. (Some proof of this has been provided). What you don’t know is whether you are in a red ball simulation or a blue ball simulation. You do know that there are a lot of (identical—in the digital sense) red ball simulations and one blue ball simulation. My view on this is that you should presume yourself more likely to be in the red ball simulation.
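Under this copy-counting view (as opposed to treating the answer as 50⁄50), the arithmetic is simple; the sketch below assumes each digitally identical copy counts equally, which is exactly the assumption in dispute:

```python
# A billion identical red-ball simulations versus one blue-ball simulation.
n_red, n_blue = 10**9, 1

# If each running copy counts equally toward self-locating probability:
p_red = n_red / (n_red + n_blue)    # ≈ 0.999999999
p_blue = n_blue / (n_red + n_blue)  # ≈ 1e-9
```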
Some people say that the probability is 50⁄50 because copies don’t count. I would make these points:
Sensitivity, which you clearly know about.
It is hard to say where each program starts and ends. For example, we could say that the room with each red ball simulation computer in it is a simulation of a room with a red ball simulation computer in it; in other words, the physical environment around the computer could validly be considered part of the program. It is trivial to argue that a physical system is a valid simulation of itself. As each computer is going to be in a slightly different physical environment, it could be argued that this means that all the programs are different, even if the digital representation put into the box by the humans is the same. The natural tendency of humans is just to focus on the 1s and 0s, which is just a preferred interpretation.
Humans may say that each program is “digitally” the same, but we might interpret the data slightly differently. For example, one program run may have a voltage of 11.964V in a certain switch at a certain time. Another program run may have a voltage of 11.985V to represent the same binary value. It could be argued that this makes them different programs, each of which is simulating a computer with an uploaded mind on it with different voltages in the switches (again, using the idea that a thing is also a computer simulation of that thing if we are going to start counting simulations).
I just think that when we try to go for 50⁄50 (copies don’t count) we can get into a huge mess that a lot of people can miss. While I don’t think you agree with me, I think maybe you can see this mess.
“While this is also a valid and interesting scenario to consider, I don’t think it “deals with the objection”. The idea that “which computer am I running on?” is a meaningful question for someone whose experiences have multiple encodings in an environment seems pretty central to the discussion.”
I think the suggested scenario makes it meaningful. There is also the issue of turning off some of the machines. If you know you are running on a billion identical machines, and that 90% of them are about to be turned off, it could become an important issue for you. It would make things very similar to what is regarded as “quantum suicide”.
We can also consider another situation:
You have a number of computers, all running the same program, and something in the external world is going to affect these computers, for example a visitor from the outside world will “login” and visit you—we could discuss the probability of meeting the visitor while the simulations are all identical.
“This is why I think I/O is important, because a mind may depend on a subjective environment to function. If this is the case, removal of the environment is basically removal of the mind.”
I don’t know if I fully understood that—are you suggesting that a reclusive AI or uploaded brain simulation would not exist as a conscious entity?
As you asked me about Permutation City (Greg Egan’s novel) before, I will elaborate on that a bit.
The “dust hypothesis” in Permutation City was the idea that all the bits of reality could be stuck together in different ways, to get different universes. The idea here is that every interpretation of an object, or part of an object, that can be made, in principle, by an interpretative algorithm, exists as an object in its own right. This argument applies it to minds, but I would clearly have to claim it applies to everything to avoid being some kind of weird dualist. It is therefore a somewhat more general view. Egan’s cosmology requires a universe to exist, which then gets scrambled up in different ways. With a view like this, you don’t need to assume anything exists. While a lot of people would find this counter-intuitive, if you accept that interpretations that produce objects produce real objects, there is nothing stopping you producing an object by interpreting very little data, or no data at all. In this kind of view, even if you had nothing except logic, interpretation algorithms that could be applied in principle with no input—on nothing at all—would still describe objects, which this kind of cosmology would say would have to exist as abstractions of nothing. Further objects would exist that would be abstractions of these. In other words, if we take the view that every abstraction of any object physically exists as a definition of the idea of physical existence, it makes the existence of a physical reality mandatory.
“Of course, this leads to the problem of interpretation, which suggests to me that “information” and “algorithm” may be ill-defined concepts except in terms of one another. This is why I think I/O is important, because a mind may depend on a subjective environment to function.”
I simply take universal realizability at face value. That is my response to this kind of issue. It frees me totally from any concerns about consistency—and the use of measure even makes things statistically predictable.
Assume you have already been copied and you know you are one of the software versions. (Some proof of this has been provided). What you don’t know is whether you are in a red ball simulation or a blue ball simulation. You do know that there are a lot of (identical—in the digital sense) red ball simulations and one blue ball simulation. My view on this is that you should presume yourself more likely to be in the red ball simulation.
Ah, this does more precisely address the issue. However, I don’t think it changes my inconclusive response. As my subjective experiences are still identical up until the ball is drawn, I don’t identify exclusively with either substrate and still anticipate a future where “I” experience both possibilities.
As each computer is going to be in a slightly different physical environment, it could be argued that this means that all the programs are different, even if the digital representation put into the box by the humans is the same.
If this is accepted, it seems to rule out the concept of identity altogether, except as excruciatingly defined over specific physical states, with no reliance on a more general principle.
The natural tendency of humans is just to focus on the 1s and 0s, which is just a preferred interpretation.
Maybe sometimes, but not always. The digital interpretation can come into the picture if the mind in question is capable of observing a digital interpretation of its own substrate. This relies on the same sort of assumption as my previous example involving self-observability.
I just think that when we try to go for 50⁄50 (copies don’t count) we can get into a huge mess that a lot of people can miss. While I don’t think you agree with me, I think maybe you can see this mess.
I’m not sure if we’re thinking of the same mess. It seems to me the mess arises from the assumptions necessary to invoke probability, but I’m willing to be convinced of the validity of a probabilistic resolution.
If you know you are running on a billion identical machines, and that 90% of them are about to be turned off, it could become an important issue for you. It would make things very similar to what is regarded as “quantum suicide”.
They do seem similar. The major difference I see is that quantum suicide (or its dust analogue, Paul Durham running a lone copy and then shutting it down) produces near-certainty in the existence of an environment you once inhabited, but no longer do. Shutting down extra copies with identical subjective environments produces no similar outcome. The only difference it makes is that you can find fewer encodings of yourself in your environment.
The visitor scenario seems isomorphic to the red ball scenario. Both outcomes are guaranteed to occur.
I don’t know if I fully understood that—are you suggesting that a reclusive AI or uploaded brain simulation would not exist as a conscious entity?
No, I was pointing out the only example I could synthesize where substrate dependence made sense to me. A reclusive AI or isolated brain simulation by definition doesn’t have access to the environment containing its substrate, so I can’t see what substrate dependence even means for them.
In other words, if we take the view that every abstraction of any object physically exists as a definition of the idea of physical existence, it makes the existence of a physical reality mandatory.
I don’t think I followed this. Doesn’t any definition of the idea of physical existence mandate a physical reality?
I simply take universal realizability at face value. That is my response to this kind of issue. It frees me totally from any concerns about consistency—and the use of measure even makes things statistically predictable.
I still don’t see where you get statistics out of universal realizability. It seems to imply that observers require arbitrary information about a system in order to interpret that system as performing a computation, but if the observers themselves are defined to be computations, the “universality” is at least constrained by the requirement for correlation (information) between the two computations. I admit I find this pretty confusing; I’ll read your article on interpretation.
As the author of this article, I will reply to this, though it is hard to make much of a reply here, though. (I actually got here our of curiosity when I saw the site logs). I am, however, always pleased to discuss issues like this with people. One issue with this reply is that it is not just randomness we have to worry about. If we are basing a computational interpretation on randomness, yes, we may need to make the computational interpretation progressively more extreme, but Searle’s famous WordStar running in a wall example is just one example. We may not even have the computational interpretation based on randomness: it could conceivably be based on structure in something else, even though that structure would not be considered to be running the computer program except under a very forced interpretation. Where would we draw the line? Another point - why should it matter if we use a progressively more extreme interpretation? We might, for example, just want to say that a computation ran for 10 seconds, which relies on a fixed intertreptation (if a complex one), and what happens after that may not interest us. Where would we draw the line? Another issue is that the main argument had been about statistical issues with combining computers when considering probability issues—the whole thing had not been based on Searle—who would not take me any more seriously by the way.
We would draw the line where our good old friend mutual information comes in. If learning the results of the other phenomenon tells you something about the results of the algorithm you want to run, then there is mutual information, and the phenomenon counts as a (partial) implementation of the algorithm.
This is an approach I considered back in 1990something actually, and at the time I actually considered it correct. I get the idea. We say that the “finding algorithm” somehow detracts from what is running. The problem is, this does not leave a clearly defined algorithm as the one being found. if X is found by F, you might say that all that runs is a “partial version of X” and that X only exists when found by F. This, however, would not just apply to deeply hidden algorithms. I could equally well apply it your brain. I would have to run some sort of algorithm, F, on your brain to work out that some algorithm corresponding to you, X, is running. Clearly, that would be nothing like as severe as the extreme situations discussed in that article, but what does it mean for your status? Does it mean that the X corresponding to you does not exist? Are you “not all there” in some sense?
Here is a thought experiment:
A mind running in a VR system (suppose the two are one software package to make this easier) gradually encrypts itself. By this I mean that it goes through a series of steps, each intended to make it slightly more difficult to realize that the mind is there. There is no end to this. When does the mind cease to exist? When it is so hard to find that you would need a program as long as the one being hidden to find it? I say that is arbitrary.
You suggest that maybe the program running the mind just exists “partially” in some way, which I fully understand. What would the experience be like for the mind as the encryption gets more and more extreme? I say this causes issues, which are readily resolved if we simply say that the mind’s measure decreases.
I can also add a statistical issue to this, which I have not written up yet. (I have a lot to add on this subject. It may be obvious that I need to argue that this applies to everything, and not just minds, to avoid some weird kind of dualism.).
Suppose we have two simulations of you, running in VRs. One is about to look in a box and see a red ball. The other will see a blue ball. We subject the version that will see the blue ball to some process that makes it slightly harder to find. You don’t know which version you are. How much will you expect to see a blue ball when you look in the box? Do you say it is 50⁄50 that you will see a red ball or a blue ball? We keep increasing the “encryption” a bit each time I ask the question. If your idea that somehow the mind is only “partial” by needing the finding algorithm to find it is right, I suggest we end up with statistical incoherency. We can only say that the probability is 50⁄50 when the situations are exactly the same, but that will never be the case in any real situation. For any situation, one mind will need a bit more finding than the other.
In other words, if you think the length of the finding algorithm makes the algorithm running a mind somehow “partial”, in a statistical question in which you had two possibilities, one in which your mind was harder to find than the other, and you don’t know which situation you are in, when would you eliminate the “partial” mind as a possibility? If you say, “Never. As the encryption increases I would just say I am less and less likely to be in that situation” you have effectively agreed with me by adopting an approach where each mind is as valid as the other (you accept either as a candidate for your situation but treat them differently with regard to statistics—which is what I do). If you say that one mind cannot be a candidate for your situation then you have the issue of cut-off point. What cut-off point? When would you say, “This mind is real. This mind is only partial so cannot be a candidate for my experience. Therefore, I am the first mind?”
I would point out that I do not ignore these issues. I address them by using measure. I take the view that a mind which takes more finding exists with less measure, because a smaller proportion of the set of all possible algorithms that could be used to find something like it will find something like it.
Finally, this only deals with one issue. There is also the issue of combining computers in the statistical thought experiments that I mentioned in the first article of that series. My intention in that series is to try to show that these various issues demand that we take a particular view about minds and reality to maintain statistical coherency.
I’m running into trouble with the concept of “existence” as it’s being applied here. Surely existence of abstract information and processes must be relative to a chosen reference frame? The “possible algorithms” need to be specified relative to a chosen data set and initial condition, like “observable physical properties of Searle’s wall given sufficient locality”. Clearly an observer outside of our light cone couldn’t discern anything about the wall, regardless of algorithm.
An encrypted mind “existing less” doesn’t seem to carry any subjective consequences for the mind itself. What if a mind encrypts itself but shares the key with a few others? Wouldn’t its “existence” depend on whether or not the reference frame has access to the key?
If you’ve read it, I’m curious to know what you think of the “dust hypothesis” from Egan’s Permutation City in this context.
“Less measure” is only meant to be significant statistically, not subjectively. For example, if you could exist in one of two ways, one with measure X and one with measure 0.001X, I would say you should think it more likely that you are in the first situation. In other words, I am agreeing (if you are arguing for this) that there should be no subjective difference for the mind in the extreme situation. I just think we should regard that situation as corresponding to “fewer” observers in some way.
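To make the statistical claim concrete, here is a minimal sketch (my own illustration, not anything from the original exchange; the measure values are hypothetical) of how measure-weighted credence would work for the X versus 0.001X case:

```python
# Hypothetical sketch: turning measures into credences by normalization.
# The numbers are illustrative, not taken from the article.

def measure_credence(measures):
    """Normalize a list of measures into credences (probabilities summing to 1)."""
    total = sum(measures)
    return [m / total for m in measures]

X = 1.0
credences = measure_credence([X, 0.001 * X])
# The higher-measure situation absorbs almost all of the credence:
# credences[0] is approximately 0.999.
```

On this picture the lower-measure mind is never eliminated as a possibility; it just receives a vanishingly small share of the credence as the gap in measure grows.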
My own argument is actually a justification of something a bit like the dust hypothesis in “Permutation City”. However, there are some significant differences, so the analogy should not be taken too far. I would say that the characters in Greg Egan’s novel undergo a huge decrease in measure, which could raise philosophical issues—though it would not feel any different after it had happened to you.
I think we should consider this in terms of measure because there are “more ways to find you” in some situations than in others. It is almost like you have more minds in one situation than another—though there are no absolute numbers and really it should be considered in terms of density. If you want to see why I think measure is important, this first article may help: http://www.paul-almond.com/Substrate1.htm.
This seems tautological to me. Your measure needs to be defined relative to a given set of observers.
More ways for who to find you?
Very interesting piece. I’ll be thinking about the Mars colony scenario for a while. I do have a couple of immediate responses.
As long as the simulations are identical and interact identically (from the simulation’s point of view) with the external world, I don’t think the above question is meaningful. A mind doesn’t have a geographical location, only implementations of it embedded in a coordinate space do. So A, B, and C are not disjoint possibilities, which means probability mass isn’t split between them.
I see this the other way around. The more redundancy in a particular implementation, the more encodings of your own experiences you will expect to find embedded within your accessible reality, assuming you have causal access to the implementation-space. If you are causally disconnected from your implementation (e.g., run on hypothetical tamper-proof hardware without access to I/O), do you exist with measure zero? If you share your virtual environment with millions of other simulated minds with whom you can interact, do they all still exist with measure zero?
“As long as the simulations are identical and interact identically (from the simulation’s point of view) with the external world, I don’t think the above question is meaningful. A mind doesn’t have a geographical location, only implementations of it embedded in a coordinate space do. So A, B, and C are not disjoint possibilities, which means probability mass isn’t split between them.”
I dealt with this objection in the second article of the series. It would be easy to say that there are two simulations, in which slightly different things are going to happen. For example, we could have one simulation in which you are going to see a red ball when you open a box and one where you are going to see a blue ball. We could have lots of computers running the red ball situation and then combine them and discuss how this affects probability (if at all).
“The more redundancy in a particular implementation of a version of you, the more likely it is that that implementation is causing your experiences.”
Does this mean that if we had a billion identical simulations of you in a VR where you were about to see a red ball, and one (different) simulation of you in a VR where you were about to see a blue ball, all running on separate computers, and you did not know which situation you were in, you would not think it more likely you were going to see a red ball? (I know a common answer here is that it is still 50⁄50—that copies don’t count—which I can answer if you say that, and which is addressed in the second article. I am just curious what you would say about that.)
“I see this the other way around. The more redundancy in a particular implementation, the more encodings of your own experiences you will expect to find embedded within your accessible reality, assuming you have causal access to the implementation-space. If you are causally disconnected from your implementation (e.g., run on hypothetical tamper-proof hardware without access to I/O), do you exist with measure zero? If you share your virtual environment with millions of other simulated minds with whom you can interact, do they all still exist with measure zero?”
I am not making any suggestion that there is any connection between measure, redundancy and whether or not you are connected to I/O. Whether you are connected to I/O does not interest me much. However, some particularly low measure situations may be hard to connect to I/O if they are associated with very extreme interpretations.
While this is also a valid and interesting scenario to consider, I don’t think it “deals with the objection”. The idea that “which computer am I running on?” is a meaningful question for someone whose experiences have multiple encodings in an environment seems pretty central to the discussion.
I actually don’t have a good answer to this, and the flavor of my confusion leads me to suspect the definitions involved. I think the word “you” in this context denotes something of an unnatural category. To consider the question of anticipating different experiences, I have to assume a specific self exists prior to copying. Are the subsequent experiences of the copies “mine” relative to this self? If so, then it is certain that “I” will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here.
50⁄50 makes sense to me only as far as it represents a default state of belief about a pair of mutually exclusive possibilities in the absence of any relevant information, but the exclusivity troubles me. I read objection 9, and I’m not bothered by the “strange” conclusion of sensitivity to minor alterations (perhaps this leads to contradictions elsewhere that I haven’t perceived?). I agree that counting algorithms is just a dressed-up version of counting machines, because the entire question is predicated on the algorithms being subjectively isomorphic (they’re only different in that some underlying physical or virtual machine is behaving differently to encode the same experience).
Of course, this leads to the problem of interpretation, which suggests to me that “information” and “algorithm” may be ill-defined concepts except in terms of one another. This is why I think I/O is important, because a mind may depend on a subjective environment to function. If this is the case, removal of the environment is basically removal of the mind. A mind of this sort, subjectively dependent on its own substrate, can be “destroyed” relative to observers of the environment, as they now have evidence for the following reasoning:
Mind M cannot logically exist except as self-observably embedded in environment E. So if E lacks such an encoding, M cannot exist.
I have observed E, and have sound reasons (local to E) to doubt the existence of a suitable encoding of M.
Therefore, M does not exist.
So far, this is the only substrate dependence argument I find convincing, but it requires the explicit dependence of M on E, which requires I/O.
“Are the subsequent experiences of the copies “mine” relative to this self? If so, then it is certain that “I” will experience both drawing a red ball and drawing a blue ball, and the question seems meaningless. I feel that I may be missing a simple counter-example here.”
No. Assume you have already been copied and that you know you are one of the software versions. (Some proof of this has been provided.) What you don’t know is whether you are in a red ball simulation or a blue ball simulation. You do know that there are a lot of (identical, in the digital sense) red ball simulations and one blue ball simulation. My view on this is that you should presume yourself more likely to be in the red ball simulation.
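As a sketch of the arithmetic behind this view (my own illustration; the copy counts are hypothetical), counting each identical running copy equally gives:

```python
# Hypothetical copy-counting sketch: each identical running copy
# contributes equally to the probability of being in that situation.

n_red = 10**9   # identical red ball simulations (illustrative number)
n_blue = 1      # the single blue ball simulation

p_red = n_red / (n_red + n_blue)
p_blue = n_blue / (n_red + n_blue)
# On this view p_red is overwhelmingly close to 1, whereas the
# "copies don't count" view would instead assign 0.5 to each outcome.
```

The disagreement in the thread is precisely over whether this equal-weight-per-copy assumption is legitimate, or whether digitally identical copies should collapse into a single possibility.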
Some people say that the probability is 50⁄50 because copies don’t count. I would make these points:
There is the issue of sensitivity to minor alterations, which you clearly know about.
It is hard to say where each program starts and ends. For example, we could say that the room containing each red ball simulation computer is itself a simulation of a room with a red ball simulation computer in it—in other words, the physical environment around the computer could validly be considered part of the program. It is trivial to argue that a physical system is a valid simulation of itself. As each computer is going to be in a slightly different physical environment, it could be argued that this means that all the programs are different, even if the digital representation put into the box by the humans is the same. The natural human tendency is just to focus on the 1s and 0s—which is just a preferred interpretation.
Humans may say that each program is “digitally” the same, but we might interpret the data slightly differently. For example, one program run may have a voltage of 11.964V in a certain switch at a certain time, while another program run may have a voltage of 11.985V representing the same binary value. It could be argued that this makes them different programs, each of which is simulating a computer with an uploaded mind on it with different voltages in the switches (again, using the idea that a thing is also a computer simulation of that thing, if we are going to start counting simulations).
I just think that when we try to go for 50⁄50 (copies don’t count) we can get into a huge mess that a lot of people can miss. While I don’t think you agree with me, I think maybe you can see this mess.
“While this is also a valid and interesting scenario to consider, I don’t think it “deals with the objection”. The idea that “which computer am I running on?” is a meaningful question for someone whose experiences have multiple encodings in an environment seems pretty central to the discussion.”
I think the suggested scenario makes it meaningful. There is also the issue of turning off some of the machines. If you know you are running on a billion identical machines, and that 90% of them are about to be turned off, then it could become an important issue for you. It would make things very similar to what is regarded as “quantum suicide”.
We can also consider another situation:
You have a number of computers, all running the same program, and something in the external world is going to affect these computers—for example, a visitor from the outside world will “log in” and visit you. We could discuss the probability of meeting the visitor while the simulations are all identical.
“This is why I think I/O is important, because a mind may depend on a subjective environment to function. If this is the case, removal of the environment is basically removal of the mind.”
I don’t know if I fully understood that—are you suggesting that a reclusive AI or uploaded brain simulation would not exist as a conscious entity?
As you asked me about Permutation City (Greg Egan’s novel) before, I will elaborate on that a bit.
The “dust hypothesis” in Permutation City was the idea that all the bits of reality could be stuck together in different ways to get different universes. The idea here is that every interpretation of an object, or part of an object, that can be made, in principle, by an interpretative algorithm exists as an object in its own right. This argument applies it to minds, but I would clearly have to claim it applies to everything to avoid being some kind of weird dualist. It is therefore a somewhat more general view. Egan’s cosmology requires a universe to exist to get scrambled up in different ways. With a view like this, you don’t need to assume anything exists. While a lot of people would find this counter-intuitive, if you accept that interpretations that produce objects produce real objects, there is nothing stopping you producing an object by interpreting very little data, or no data at all. In this kind of view, even if you had nothing except logic, interpretation algorithms that could be applied in principle with no input—on nothing at all—would still describe objects, which this kind of cosmology would say would have to exist as abstractions of nothing. Further objects would exist that would be abstractions of these. In other words, if we take the view that every abstraction of any object physically exists as a definition of the idea of physical existence, it makes the existence of a physical reality mandatory.
“Of course, this leads to the problem of interpretation, which suggests to me that “information” and “algorithm” may be ill-defined concepts except in terms of one another. This is why I think I/O is important, because a mind may depend on a subjective environment to function.”
I simply take universal realizability at face value. That is my response to this kind of issue. It frees me totally from any concerns about consistency—and the use of measure even makes things statistically predictable.
Ah, this does more precisely address the issue. However, I don’t think it changes my inconclusive response. As my subjective experiences are still identical up until the ball is drawn, I don’t identify exclusively with either substrate and still anticipate a future where “I” experience both possibilities.
If this is accepted, it seems to rule out the concept of identity altogether, except as excruciatingly defined over specific physical states, with no reliance on a more general principle.
Maybe sometimes, but not always. The digital interpretation can come into the picture if the mind in question is capable of observing a digital interpretation of its own substrate. This relies on the same sort of assumption as my previous example involving self-observability.
I’m not sure if we’re thinking of the same mess. It seems to me the mess arises from the assumptions necessary to invoke probability, but I’m willing to be convinced of the validity of a probabilistic resolution.
They do seem similar. The major difference I see is that quantum suicide (or its dust analogue, Paul Durham running a lone copy and then shutting it down) produces near-certainty in the existence of an environment you once inhabited, but no longer do. Shutting down extra copies with identical subjective environments produces no similar outcome. The only difference it makes is that you can find fewer encodings of yourself in your environment.
The visitor scenario seems isomorphic to the red ball scenario. Both outcomes are guaranteed to occur.
No, I was pointing out the only example I could synthesize where substrate dependence made sense to me. A reclusive AI or isolated brain simulation by definition doesn’t have access to the environment containing its substrate, so I can’t see what substrate dependence even means for them.
I don’t think I followed this. Doesn’t any definition of the idea of physical existence mandate a physical reality?
I still don’t see where you get statistics out of universal realizability. It seems to imply that observers require arbitrary information about a system in order to interpret that system as performing a computation, but if the observers themselves are defined to be computations, the “universality” is at least constrained by the requirement for correlation (information) between the two computations. I admit I find this pretty confusing; I’ll read your article on interpretation.