If one can pause a simulation and run it backwards, or make multiple copies of a simulation, then from our perspective the simulators will, for many purposes, be omniscient. There might still be some limits in that regard (for example, if they are bound to only do computable operations, then they will be limited in what math they can do).
Also, if a simulator wants a specific outcome and there’s some random aspect in the simulation (such as from quantum mechanical effects), they could run the simulation multiple times until they got a result they wanted (a toy version of this rerun-and-select loop is sketched below).
Unless the superordinate reality is different in some very fundamental ways, it’s impossible to predict what happens in chaotic systems in our universe in advance with precision, without actually running the simulation…
This isn’t quite true. As I understand it, there are very few results asserting minimal computational complexity of chaotic systems. The primary problem with chaotic systems is that predicting their behavior becomes very difficult if one has anything less than perfect accuracy, because very similar initial conditions can diverge in long-term behavior. That doesn’t say much about how hard things are to compute if you have perfect information.
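To make the rerun-until-you-get-what-you-want idea concrete, here is a toy sketch. Everything in it is invented for illustration: the dynamics, the seeds, and the success test are stand-ins, not a model of anything real.

```python
import random

def run_world(seed, steps=1000):
    """Toy stand-in for a simulation with a random (quantum-like) component."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        state += rng.gauss(0.0, 1.0)  # each step gets an unpredictable kick
    return state

def rerun_until(desired, max_tries=100_000):
    """Rerun the same simulation with fresh randomness until the outcome is acceptable."""
    for seed in range(max_tries):
        outcome = run_world(seed)
        if desired(outcome):
            return seed, outcome  # keep this history; earlier runs were generated and discarded
    return None

print(rerun_until(lambda outcome: outcome > 50.0))
```

What to make of all the runs that get generated and thrown away along the way is exactly the point of contention below.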
But running the simulation is running our reality. If they run multiple simulations with slight alterations to get the outcome they want, then for every reality that achieves the result they want, many realities actually occur that don’t. Likewise, rewinding the simulation may allow them to achieve the results they want, but it doesn’t prevent the events they don’t want from happening to us. Besides, there’s no evidence that our universe is being guided according to any agent’s utility function, and if it is, it’s certainly not much like ours.
This isn’t quite true. As I understand it, there are very few results asserting minimal computational complexity of chaotic systems. The primary problem with chaotic systems is that predicting their behavior becomes very difficult if one has anything less than perfect accuracy, because very similar initial conditions can diverge in long-term behavior. That doesn’t say much about how hard things are to compute if you have perfect information.
Chaotic systems are hard to project because small differences between the information in the system and the information in the model propagate to create large differences between the system and the model over time. To make the model perfectly accurate, it must follow all the same rules and contain all the same information. Projecting the simulation with perfect accuracy is equivalent to running the simulation.
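For concreteness, here is a minimal illustration of that error-propagation point, using the logistic map at r = 4 as a stand-in chaotic system (the map and the starting values are arbitrary choices): a difference of one part in a billion in the initial condition grows to an order-one difference within a few dozen steps.

```python
def logistic(x):
    """One step of the logistic map x -> 4x(1 - x), a standard chaotic system."""
    return 4.0 * x * (1.0 - x)

a, b = 0.300000000, 0.300000001  # two models of the "same" system, differing by one part in a billion
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.3e}")
```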
Besides, there’s no evidence that our universe is being guided according to any agent’s utility function, and if it is, it’s certainly not much like ours.
The SA mechanism places many constraints on the creator. They exist in a universe like ours, they are similar to our future descendants, they created us for a reason, and their utility function, morality, what have you, all evolved from a universe like ours.
Monte Carlo simulation. You don’t run one simulation, you run many. There is no one single correct answer that the simulation is attempting to compute. It is a landscape, a multiverse, from which you sample.
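A minimal sketch of that sampling picture, with invented dynamics (a noisy growth process standing in for whatever the simulators actually care about): rather than computing one correct trajectory, you run many seeds and look at the distribution of outcomes.

```python
import random
from collections import Counter

def one_branch(seed):
    """One run of an invented noisy process; each seed is one sampled 'branch'."""
    rng = random.Random(seed)
    level = 1.0
    for _ in range(200):
        level *= rng.uniform(0.9, 1.12)  # random good and bad years
        if level < 0.2:
            return "collapses"
    return "takes off" if level > 5.0 else "muddles along"

# Monte Carlo: the answer is a distribution over branches, not a single number.
tally = Counter(one_branch(seed) for seed in range(10_000))
for outcome, count in tally.most_common():
    print(f"{outcome}: {count / 10_000:.1%}")
```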
But running the simulation is running our reality. If they run multiple simulations with slight alterations to get the outcome they want, then for every reality that achieves the result they want, many realities actually occur that don’t.
Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping, there’s only one universe: the one where the simulators got what they wanted.
Besides, there’s no evidence that our universe is being guided according to any agent’s utility function, and if it is, it’s certainly not much like ours.
Yes, you’ve made that point before. I don’t disagree with it. I’m not sure why you are bringing it up again.
Chaotic systems are hard to project because small differences between the information in the system and the information in the model propagate to create large differences between the system and the model over time. To make the model perfectly accurate, it must follow all the same rules and contain all the same information.
It must contain the same information. It doesn’t need to contain the same rules.
Projecting the simulation with perfect accuracy is equivalent to running the simulation.
This isn’t true. For example, the doubling map is chaotic, yet many points can have their orbits calculated without doing that step-by-step work. In particular, if the starting point is rational, we can always give an exact value for any number of iterations with far less computational effort than simply iterating the function. There are some complicating factors to this sort of analysis: if the universe is essentially discrete, then what we mean when we talk about chaos becomes subtle, and if the universe isn’t discrete, then what we mean when we discuss computational complexity becomes subtle (we need to use Blum-Shub-Smale machines or something similar rather than Turing machines). But the upshot is that chaotic behavior is not equivalent to being computationally complex.
There have been some papers trying to map out connections between the two (and I don’t know that literature at all), and superficially there are some similarities, but if someone could show deep, broad connections of the sort you seem to think are already known, that would be the sort of thing that could lead to a Turing Award or a Fields Medal.
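Here is a concrete version of the doubling-map point (the starting fraction and iteration counts are arbitrary). The map sends x to 2x mod 1; for a rational starting point p/q, the nth iterate is simply (2^n · p mod q)/q, and modular exponentiation delivers that in about log2(n) multiplications rather than n iterations, so a model with different rules can reproduce the same information with far less work.

```python
from fractions import Fraction

def iterate_doubling(x, n):
    """Direct simulation: apply x -> 2x mod 1, one step at a time."""
    for _ in range(n):
        x = (2 * x) % 1
    return x

def jump_doubling(p, q, n):
    """Shortcut for a rational start p/q: the nth iterate is (2**n * p mod q) / q.
    pow(2, n, q) needs only about log2(n) multiplications, not n of them."""
    return Fraction(pow(2, n, q) * p % q, q)

p, q = 3, 7
assert jump_doubling(p, q, 10_000) == iterate_doubling(Fraction(p, q), 10_000)
print(jump_doubling(p, q, 10**18))  # exact, even where stepping through 10**18 iterations is infeasible
```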
Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping, there’s only one universe: the one where the simulators got what they wanted.
But at any given time you may be in a branch that’s going to be deleted or rewound because it doesn’t lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don’t want. So not only do we have no reason to suppose it’s happening, it wouldn’t be particularly useful to us even if we supposed that the branch the simulators want is better for us than the ones they don’t.
I concede that my understanding of the requirements to project a simulation of our universe may have been mistaken, but the conclusions jacob cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.
Which are the ‘extraneous additions’? Omniscience and omnipotence have already been discussed at length. The SA does not imply perfection in either category on the part of the creator, but that is a distinction without practical force: for all intents and purposes, the creator would have the potential for absolute control over the simulation. It is of course much more of an open question whether the creator would ever intervene in any fashion.
(I discussed that at length elsewhere, but basically I think future posthumans would be less likely to intervene in our history, while aliens would be more likely.)
Also, my points about the connectedness between the morality and utility functions of creator and creation still stand. The SA requires that the creator made the simulation for a purpose in its universe, and that the creator’s utility function or morality evolved from something like our descendants.
But at any given time you may be in a branch that’s going to be deleted or rewound because it doesn’t lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don’t want.
Not necessarily. It would depend on how narrow they wanted things and how often they intervened in this fashion. If such interventions are not very common, then the majority of experience will be in universes which are very close to that desired by the simulators (a rough version of this trade-off is sketched below).
but the conclusions jacob cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.
No disagreement there.
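As a back-of-the-envelope version of that last disagreement about how much experience ends up in discarded branches (all the numbers here are invented): if rewinds are rare and shallow, almost all experience sits in the kept line; if they are frequent and deep, most of it is in lines that get thrown away.

```python
def discarded_fraction(rewinds_per_millennium, years_lost_per_rewind, kept_years=1000.0):
    """Fraction of all lived experience that falls in branches the simulators discard."""
    discarded = rewinds_per_millennium * years_lost_per_rewind
    return discarded / (discarded + kept_years)

for rate, depth in [(0.01, 10), (1, 10), (1, 500), (10, 500)]:
    share = discarded_fraction(rate, depth)
    print(f"{rate} rewinds per 1000 yr, {depth} yr lost each -> {share:.1%} of experience discarded")
```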