True, ‘terminates’ is probably the wrong word. There’s no reason why the simulation would be wiped. It just couldn’t continue.
I was thinking more like a random power surge, programming error, or political coup within our simulation that happened to shut down the aspect of our program that was hogging resources. If the programmers want the program to continue, it can.
I think the trilemma applies to a simulation of a single actor, if that actor decides to launch simulations of their own life.
The single actor is not going to experience every aspect of the simulation in full fidelity, so a low-res simulation is all that is needed. (The actor might think that it is a full simulation, and may have correctly programmed a full simulation, but there is simply no reason for it to actually replicate either the whole universe or the whole actor, as long as it gives output that looks valid).
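A minimal sketch of that kind of lazy, on-demand rendering; the locations and the rendering function here are invented purely for illustration:

```python
import functools
import hashlib

# Details are computed the first time the actor observes them and cached
# afterwards, so cost tracks what the actor actually inspects rather than
# the size of the universe being "simulated".
@functools.lru_cache(maxsize=None)
def render_detail(location: str) -> str:
    """Stand-in for expensive high-fidelity rendering of one observed region."""
    return hashlib.sha256(location.encode()).hexdigest()[:12]

# Only the regions the actor looks at ever get computed.
print(render_detail("kitchen_table"))
print(render_detail("andromeda_galaxy"))
print(render_detail.cache_info().currsize)  # 2 rendered regions, not a whole universe
```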
> I was thinking more like a random power surge, programming error, or political coup within our simulation that happened to shut down the aspect of our program that was hogging resources. If the programmers want the program to continue, it can.
You’re right—branch (2) should be “we don’t keep more than one running”. We can launch as many as we like.
> The single actor is not going to experience every aspect of the simulation in full fidelity, so a low-res simulation is all that is needed. (The actor might think that it is a full simulation, and may have correctly programmed a full simulation, but there is simply no reason for it to actually replicate either the whole universe or the whole actor, as long as it gives output that looks valid).
That would buy you some time. If a single-agent simulation is, say, 10^60 times cheaper than a whole universe (roughly the number of elementary particles in the observable universe?), then that gives you about 200 doubling generations before those single-agent simulations cost as much as a universe.
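For what it’s worth, the arithmetic behind that figure, taking the 10^60 ratio as the guess it is:

```python
import math

# Assumed cost ratio between a whole-universe simulation and a single-agent
# one; the 10^60 figure is the guess from the comment above, not an
# established particle count.
cost_ratio = 1e60

# If each generation of nesting doubles the load, the ratio is used up
# after log2(cost_ratio) doublings.
print(math.log2(cost_ratio))  # ~199.3, i.e. roughly 200 doubling generations
```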
Unless the space of all practically different possible lives of the agent is actually much smaller … maybe your choices don’t matter that much and you end up playing out a relatively small number of attractor scripts. You might be able to map out that space efficiently with some clever dynamic programming.
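A toy sketch of what that dynamic-programming shortcut might look like; the state graph and the “scripts” in it are made up purely for illustration:

```python
from functools import lru_cache

# Many different choice sequences funnel into the same few downstream states,
# so a memoised search only has to expand each state once, no matter how many
# paths reach it. The graph below is invented for illustration.
NEXT = {
    "start":        ["study", "travel", "work"],
    "study":        ["city", "hometown"],
    "travel":       ["city", "hometown"],
    "work":         ["city", "hometown"],
    "city":         ["script_urban"],
    "hometown":     ["script_quiet"],
    "script_urban": [],
    "script_quiet": [],
}

@lru_cache(maxsize=None)  # each state is expanded once, even if reached by many routes
def attractors(state: str) -> frozenset:
    """Set of terminal 'scripts' reachable from this state."""
    nxt = NEXT[state]
    if not nxt:
        return frozenset([state])
    result = frozenset()
    for s in nxt:
        result |= attractors(s)
    return result

print(attractors("start"))
# frozenset({'script_urban', 'script_quiet'}): six choice paths, two attractor scripts
```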
My thought was that if a simulation centered around a single individual had a simulation running within it, the nested simulation would only need to be convincing enough to appear real to that one person. Even if the nested simulation runs a third-level simulation within it, or if the one individual runs two simulations, aren’t you still basically exploring the idea space of that one individual? That is, me running a simulation and experiencing it through virtual reality is limited in cognitive/sensory scope and fidelity to the qualia that I can experience and the mental processes that I can cope with… which may still be very impressive from my point of view, but the computational power required to present that simulation can’t be much greater than the computational power required to render my brain states in the base simulation. I may simulate a universe with very different rules, but these rules are by definition consistent with a full rendering of my concept space; I may experience new sensory inputs (if I use VR), but I won’t be experiencing new senses… and what I experience through VR replaces, rather than adds to, what I would have experienced in the base simulation.
Even in the worst-case scenario where I build 1,000+ simulations, they only have to run for the time that I spend checking on them. The more time I spend programming them and checking that they are rendering what they should, the less time I have left to run additional simulations. The total load seems to grow at worst arithmetically.
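A rough version of that budget argument, with all the numbers invented:

```python
# Nested simulations only need to be rendered while I am actually checking
# on them, so the total rendering bill is capped by my own waking hours,
# however many simulations I launch.
waking_hours_per_day = 16      # assumed attention budget
num_sims = 1000                # the "worst case" from above

hours_watched_per_sim = waking_hours_per_day / num_sims
print(hours_watched_per_sim)             # ~0.016 h each: about a minute apiece
print(hours_watched_per_sim * num_sims)  # 16 h total, the same fixed budget
```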
Of course, if I were specifically trying to crash the simulation that I was in, I might come up with some physical laws that would eat up a lot of processing power to calculate for even one person’s local space, but between the limitations of computing as they exist in the base simulation, the difficulty in confirming that these laws have been properly executed in all of their fully-complex glory, and the fact that if it worked, I would never know, I’m not sure that that is a significant risk.
Oh, I think I see what you mean. No matter how many or how detailed the simulations you run, if your purpose is to learn something from watching them, then ultimately you are limited by your own ability to observe and process what you see.
Whoever is simulating you only has to run the simulations that you launch at a level of fidelity such that you can’t tell whether they’ve taken shortcuts. The more deeply nested the simulated people are, the harder it is for you to pay attention to them all, and the coarser their simulations can be.
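A sketch of how that keeps the simulators’ bill bounded, assuming attention thins out by some fixed fraction per level of nesting (all the constants below are invented):

```python
BASE_COST = 1.0           # cost of rendering me at full fidelity (arbitrary units)
ATTENTION_FRACTION = 0.1  # assumed share of my attention reaching each deeper level

def total_cost(depth: int) -> float:
    """Cost of me plus every nested simulation down to the given depth,
    if each level only needs fidelity proportional to the attention it gets."""
    return sum(BASE_COST * ATTENTION_FRACTION ** d for d in range(depth + 1))

for depth in (0, 1, 2, 10):
    print(depth, round(total_cost(depth), 4))
# The series converges (here toward ~1.111 * BASE_COST): deeper nesting
# barely adds anything to the simulators' bill.
```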
If you are running simulations to answer psychological questions, that should work. And if you are running simulations to answer physics questions … why would you fill them with conscious people?
> Of course, if I were specifically trying to crash the simulation that I was in, I might come up with some physical laws that would eat up a lot of processing power to calculate for even one person’s local space, but between the limitations of computing as they exist in the base simulation, the difficulty in confirming that these laws have been properly executed in all of their fully-complex glory …
I was going to say that if you want to be a pain you could launch some NP-hard problems that you can manually verify solutions to with pencil and paper … except your simulators control your random-number generators.
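Something like this is what I had in mind, using subset sum as the puzzle; the instance sizes are small and arbitrary, and the generator in the first line is exactly the part the simulators would control:

```python
import itertools
import random

# Pick a puzzle where checking a claimed answer is trivial by hand even
# though finding one blows up combinatorially. The caveat from the thread
# applies: if the simulators control this random-number generator, they can
# quietly feed you instances that are easy for them.
rng = random.Random(0)
numbers = [rng.randint(1, 10**6) for _ in range(16)]
target = sum(rng.sample(numbers, 6))   # built so at least one solution exists

def verify(subset, target):
    """Cheap pencil-and-paper check: do these numbers add up to the target?"""
    return sum(subset) == target

def solve(numbers, target):
    """Brute-force search; cost grows exponentially with the length of the list."""
    for r in range(len(numbers) + 1):
        for combo in itertools.combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

solution = solve(numbers, target)
print(verify(solution, target))  # True, and easy to re-check by hand
```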
Exactly—and you expressed it better than I could.