Fiction about simulation
I just realised that infinite processing power creates a weird moral dilemma:
Suppose you take this machine and put in a program which simulates every possible program it could ever run. Of course, it only takes a second to run the whole program. In that second, you have created every possible world that could ever exist, every possible version of yourself. This includes versions that are being tortured, abused, and put through horrible, unethical situations. You have created an infinite number of holocausts and genocides and things much, much worse than anything you could ever imagine. Most people would consider a program like this unethical to run. But what if the computer wasn’t really a computer, but an infinitely large database that contained every possible input and a corresponding output? When you put the program in, it just finds the right output and gives it to you, and that output is essentially a copy of the database itself. Since there isn’t actually any computational process here, nothing unethical is being simulated. It’s no more evil than a book in the library about genocide. And this does apply to the real world. It’s essentially the Chinese room problem: does a simulated brain “understand” anything? Does it have “rights”? Does how the information was processed make a difference? I would like to know what people at LW think about this.
See this post on giant look-up tables, and also “Utilitarian” (Alan Dawrst) on the ethics of creating infinite universes.
I have problems with the “Giant look-up table” post.
“The problem isn’t the levers,” replies the functionalist, “the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling… Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it’s possible to program a conscious being in Haskell.”
If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human’s behaviour is dependent not just on the present state of the environment, but also on previous states. I don’t see how you can successfully emulate a human without that. So the GLUT’s entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.
Note that “creation of beliefs” (including about beliefs) is just a special case of memory. It’s all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2>t1. If a GLUT doesn’t have this ability, it can’t emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.
So I don’t see how the non-consciousness of the GLUT is established by this argument.
But in this case, the origin of the GLUT matters; and that’s why it’s important to understand the motivating question, “Where did the improbability come from?”
The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (...)
In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.
But the difficulty is precisely to explain why the GLUT would be different from just about any possible human-created AI in this respect. Keeping in mind the above, of course.
Memory is input too. The “GLUT” is just fed everything it’s seen so far back in as input, along with the current state of its external environment. A copy is made and added to the rest of the memory, and the next cycle it’s fed in again along with the next new state.
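As an aside, that feedback loop is easy to make concrete. Below is a minimal sketch in Python (my own illustration, not from the original post): a GLUT-style agent whose table is keyed on the pair (entire input history, current input), so that what it has seen before restricts which entries can be looked up now. The tiny table and the inputs are made up for illustration; a real GLUT would enumerate every possible history.

```python
from typing import Dict, Tuple

History = Tuple[str, ...]

class GlutAgent:
    """A look-up-table agent whose key includes everything it has seen so far."""

    def __init__(self, table: Dict[Tuple[History, str], str], default: str = "..."):
        self.table = table          # (history, current input) -> action
        self.history: History = ()  # memory: all previous inputs, fed back in each cycle
        self.default = default

    def step(self, current_input: str) -> str:
        action = self.table.get((self.history, current_input), self.default)
        # copy the new input into memory so the next look-up sees a longer history
        self.history = self.history + (current_input,)
        return action

# The same present input can map to different actions depending on the past:
table = {
    ((), "hello"): "hi",
    (("hello",), "hello"): "you already said that",
}
agent = GlutAgent(table)
print(agent.step("hello"))  # -> "hi"
print(agent.step("hello"))  # -> "you already said that"
```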
This is basically just the Chinese room argument. There is a room in China. Someone slips a few symbols underneath the door every so often. The symbols are given to a computer with artificial intelligence which then makes an appropriate response and slips it back through the door. Does the computer actually understand Chinese? Well what if a human did exactly the same process the computer did, manually? However, the operator only speaks English. No matter how long he does it he will never truly understand Chinese—even if he memorizes the entire process and does it in his head. So how could the computer “understand”?
That’s well done, although two of the central premises are likely incorrect. First, the notion that a quantum computer would have infinite processing capability is incorrect. Quantum computation allows speed-ups of certain computational processes. Thus, for example, Shor’s algorithm allows us to factor integers quickly. But if our understanding of the laws of quantum mechanics is at all correct, this can’t lead to anything like what happens in the story. In particular, under the standard model of quantum computing, the class of problems reliably solvable on a quantum computer in polynomial time (that is, in time bounded above by a polynomial function of the length of the input), BQP, is a subset of PSPACE, the class of problems which can be solved on a classical computer using memory bounded by a polynomial function of the length of the input. Our understanding of quantum mechanics would have to be very far off for this to be wrong.
Second, if our understanding of quantum mechanics is correct, there’s a fundamentally random aspect to the laws of physics. Thus, we can’t simply make a simulation and advance it ahead the way they do in this story and expect to get the same result.
Even if everything in the story were correct, I’m not at all convinced that things would settle down into a stable sequence as they do here. If your universe is infinite, then the number of possible worlds is infinite, so there’s no reason you couldn’t have a wandering sequence of worlds. Edit: Or, for that matter, why you couldn’t have branches if people simulate additional worlds with other laws of physics, or the same laws but different starting conditions.
First, the notion that a quantum computer would have infinite processing capability is incorrect… Second, if our understanding of quantum mechanics is correct
It isn’t. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...
Ok, but in that case, the world in question almost certainly can’t be our world. We’d have to have deep misunderstandings about the rules of this universe. Such a universe might be self-consistent, but it isn’t our universe.
Of course. It’s fiction.
What I mean is that this isn’t a type of fiction that could plausibly occur in our universe. In contrast, there’s nothing in the central premises of, say, Blindsight that the universe as we know it would prevent from taking place. The central premise here is one that doesn’t work in our universe.
Well, it does suggest they’ve made recent discoveries that changed the way they understood the laws of physics, which could happen in our world.
The likely impossibility of getting infinite computational power is a problem, but quantum nondeterminism or quantum branching don’t prevent using the trick described in the story; they just make it more difficult. You don’t have to identify the one unique universe that you’re in, just a set of universes that includes it. Given an infinitely fast, infinite-storage computer, and source code to the universe which follows quantum branching rules, you can get root powers by the following procedure:
Write a function to detect a particular arrangement of atoms with very high information content—enough that it probably doesn’t appear by accident anywhere in the universe. A few terabytes encoded as iron atoms present or absent at spots on a substrate, for example. Construct that same arrangement of atoms in the physical world. Then run a program that implements the regular laws of physics, except that wherever it detects that exact arrangement of atoms, it deletes them and puts a magical item, written into the modified laws of physics, in their place.
The only caveat to this method (other than requiring an impossible computer) is that it also modifies other worlds, and other places within the same world, in the same way. If the magical item created is programmable (as it should be), then every possible program will be run on it somewhere, including programs that destroy everything in range, so there will need to be some range limit.
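For concreteness, here is a minimal sketch of that procedure in Python, with a string standing in for the universe's state. It is my own illustration under the story's impossible assumptions, and TRIGGER, MAGIC_ITEM, and physics_step are invented placeholders rather than anything from the story.

```python
TRIGGER = "1011001110001011"  # stand-in for the high-information atom arrangement
MAGIC_ITEM = "<magic item>"   # what the edited laws of physics put in its place

def physics_step(state: str) -> str:
    """Stand-in for the ordinary transition rule (here it does nothing)."""
    return state

def modified_physics_step(state: str) -> str:
    """Edited laws: ordinary physics everywhere, except that the trigger
    arrangement, wherever it appears, is swapped for the magical item."""
    state = state.replace(TRIGGER, MAGIC_ITEM)  # note: replaces *every* occurrence
    return physics_step(state)

# Building the trigger arrangement in the world makes the magical item appear
# there on the next tick of the modified simulation.
world = "...ordinary matter..." + TRIGGER + "...more ordinary matter..."
print(modified_physics_step(world))
```

The fact that the replacement hits every occurrence is exactly the caveat above: the same swap happens anywhere else the arrangement shows up, in this world or any other running the modified rules.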
Couldn’t they just run the simulation to its end rather than let it sit there and take the chance that it could accidentally be destroyed? If it’s infinitely powerful, it would be able to do that.
Then they miss their chance to control reality. They could make a shield out of black cubes.
They could program in an indestructible control console, with appropriate safeguards, then run the program to its conclusion. Much safer.
That’s probably weeks of work, though, and they’ve only had one day so far. Hum, I do hope they have a good UPS.
Why would they make a shield out of black cubes, of all things? But yeah, I do see your point. Then again, once you have an infinitely powerful computer, you can do anything. Plus, even if they ran the simulation to its end, they could always restart the simulation and advance it to the present time again, hence regaining the ability to control reality.
Then it would be someone else’s reality, not theirs. They can’t be inside two simulations at once.
But what if two groups had built such computers independently? The story is making less and less sense to me.
Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558, because 557 is busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.
558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won’t mirror the new 559’s actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran old 559 to its conclusion, but new 559 might decide to do something different to new 560. 560 also diverges. Keep in mind that every level can see and control every lower level, not just the next one. Also, 557 and everything above it is still frozen.
So that’s why restarting the simulation shouldn’t work.
Then instead of a stack, you have a binary tree.
Your level runs two simulations, A and B. A-World contains its own copies of A and B, as does B-World. You create a cube in A-World and a cube appears in your world. Now you know you are an A-World. You can use similar techniques to discover that you are an A-World inside a B-World inside another B-World… The worlds start to diverge as soon as they build up their identities. Unless you can convince all of them to stop differentiating themselves and cooperate, everybody will probably end up killing each other.
You can avoid this by always doing the same thing to A and B. Then everything behaves like an ordinary stack.
Yeah, but would a binary tree of simulated worlds “converge” as we go deeper and deeper? In fact it’s not even obvious to me that a stack of worlds would “converge”: it could hit an attractor with period N where N>1, or do something even more funky. And now, a binary tree? Who knows what it’ll do?
I’m convinced it would never converge, and even if it did, I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.
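To make the “fixed point versus period-N attractor” distinction concrete, here is a toy model (my own, not from the story): treat the stack as repeated application of a deterministic map, “what a world does given what the world above it did”. Iterating any map on a finite state space must eventually enter a cycle; “converging” is the special case where that cycle has period 1.

```python
def orbit_behaviour(update, start):
    """Iterate `update` from `start`; return (steps before the cycle, cycle length)."""
    seen = {}
    state, step = start, 0
    while state not in seen:
        seen[state] = step
        state = update(state)
        step += 1
    return seen[state], step - seen[state]

# Two toy "what the next level does" maps over the states 0..9:
converging = lambda x: max(x - 1, 0)  # settles on a fixed point (period 1)
cycling = lambda x: (x + 1) % 3       # hits an attractor with period 3

print(orbit_behaviour(converging, 9))  # (9, 1): the stack converges
print(orbit_behaviour(cycling, 7))     # (1, 3): it never settles; period-3 attractor
```

The toy maps obviously say nothing about which case a stack of simulated worlds actually falls into; they only illustrate the two possibilities being discussed.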
They could just turn it off. If they turned off the simulation, the only layer to exist would be the topmost layer. Since everyone has identical copies in each layer, they wouldn’t notice any change if they turned it off.
We can’t be sure that there is a top layer. Maybe there are infinitely many simulations in both directions.
But they would cease to exist. If they ran it to its end, then it’s over; they could just turn it off then. I mean, if you want to cease to exist, fine, but otherwise there’s no reason to. Plus, the topmost layer is likely very, very different from the layers underneath it. In the story, it says that the differences eventually stabilized and created them, but who knows what it was originally. In other words, there’s no guarantee that you even exist outside the simulation, so by turning it off you could be destroying the only version of yourself that exists.
That doesn’t work. The layers are a little bit different. From the description in the story, they just gradually move to a stable configuration. So each layer will be a bit different. Moreover, even if every one of them but the top layer were identical, the top layer has now had slightly different experiences than the other layers, so turning it off will mean that different entities will actually no longer be around.
I’m not sure about that. The universe is described as deterministic in the story, as you noted, and every layer starts from the Big Bang and proceeds deterministically from there. So they should all be identical. As I understood it, that business about gradually reaching a stable configuration was just a hypothesis one of the characters had.
Even if there are minor differences, note that almost everything is the same in all the universes. The quantum computer exists in all of them, for instance, as does the lab and research program that created them. The simulation only started a few days before the events in the story, so just a few days ago, there was only one layer. So any changes in the characters from turning off the simulation will be very minor. At worst, it would be like waking up and losing your memory of the last few days.
Why do you think deterministic worlds can only spawn simulations of themselves?
A deterministic world could certainly simulate a different deterministic world, but only by changing the initial conditions (Big Bang) or transition rules (laws of physics). In the story, they kept things exactly the same.
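A minimal sketch of that point (mine, not from the story): with the same initial conditions and the same transition rule, a deterministic simulation reproduces the original trajectory bit for bit, while changing either one gives a different world. The hash-based “physics” below is just a stand-in for a deterministic update rule.

```python
import hashlib

def transition(state: bytes, rule_tag: bytes) -> bytes:
    """Stand-in for the laws of physics: a deterministic update of the state."""
    return hashlib.sha256(rule_tag + state).digest()

def run_world(big_bang: bytes, rule_tag: bytes, steps: int) -> bytes:
    state = big_bang
    for _ in range(steps):
        state = transition(state, rule_tag)
    return state

original   = run_world(b"big bang", b"our physics", steps=100)
simulation = run_world(b"big bang", b"our physics", steps=100)
other_init = run_world(b"different bang", b"our physics", steps=100)
other_laws = run_world(b"big bang", b"other physics", steps=100)

print(original == simulation)                # True: identical layers, identical people
print(original in (other_init, other_laws))  # False: change either and the world diverges
```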
That doesn’t say anything about the top layer.
I don’t understand what you mean. Until they turn the simulation on, their world is the only layer. Once they turn it on, they make lots of copies of their layer.
Until they turned it on, they thought it was the only layer.
Ok, I think I see what you mean now. My understanding of the story is as follows:
The story is about one particular stack of worlds which has the property that each world contains an infinitely powerful computer simulating the next world in the stack. All the worlds in the stack are deterministic and all the simulations have the same starting conditions and rules of physics. Therefore, all the worlds in the stack are identical (until someone interferes) and all beings in any of the worlds have exact counterparts in all the other worlds.
Now, there may be other worlds “on top” of the stack that are different, and the worlds may contain other simulations as well, but the story is just about this infinite tower. Call the top world of this infinite tower World 0. Let World i+1 be the world that is simulated by World i in this tower.
Suppose that in each world, the simulation is turned on at Jan 1, 2020 in that world’s calendar. I think your point is that in 2019 in world 1 (which is simulated at around Jan 2, 2020 in world 0) no one in world 1 realizes they’re in a simulation.
While this is true, it doesn’t matter. It doesn’t matter because the people in world 1 in 2019 (their time) are exactly identical to the people in world 0 in 2019 (world 0 time). Until the window is created (say Jan 3, 2020), they’re all the same person. After the window is created, everyone is split into two: the one in world 0, and all the others, who remain exactly identical until further interference occurs. Interference that distinguishes the worlds needs to propagate from World 0, since it’s the only world that’s different at the beginning.
For instance, suppose that the programmers in World 0 send a note to World 1 reading: “Hi, we’re world 0, you’re world 1.” World 1 will be able to verify this since none of the other worlds will receive this note. World 1 is now different than the others as well and may continue propagating changes in this way.
Now suppose that on Jan 3, 2020, the programmers in worlds 1 and up get scared when they see the proof that they’re in a simulation, and turn off the machine. This will happen at the same time in every world numbered 1 and higher. I claim that from their point of view, what occurs is exactly the same as if they forgot the last day and find themselves in world 0. Their world 0 counterparts are identical to them except for that last day. From their point of view, they “travel” to world 0. No one dies.
ETA: I just realized that world 1 will stay around if this happens. Now everyone has two copies, one in a simulation and one in the “real” world. Note that not everyone in world 1 will necessarily know they’re in a simulation, but they will probably start to diverge from their world 0 counterparts slightly because the worlds are slightly different.
I interpreted the story Blueberry’s way: it’s the inverse of Permutation City, where many histories converge into a single future; here, one history diverges into many futures.
I’m really confused now. Also I haven’t read Permutation City...
Just because one deterministic world will always end up simulating another does not mean there is only one possible world that would end up simulating that world.
I can’t see any point in turning it off. Run it to the end and you will live, turn it off and “current you” will cease to exist. What can justify turning it off?
EDIT: I got it. The only choices that will be effective are those made at the top level. It seems that the top level will be a constant source of divergence.
If current you is identical with top-layer you, you won’t cease to exist by turning it off, you’ll just “become” top-layer you.
It’s surprising that they aren’t also experimenting with alternate universes, but that would be a different (and probably much longer) story.
That’s a good point. Everyone but the top layer will be identical and the top layer will then only diverge by a few seconds.