The SA posits an external universe above ours which, although it likely operates according to physics identical or very similar to ours, is not at all constrained by our physics. Thus the creator in the SA is quite possibly supernaturally omniscient and omnipotent.
Also, whatever utility function/morality we have in our universe, the SA indicates and requires that it was purposefully created to some end in the parent universe and may eventually be evaluated according to some external utility function.
EDIT: Removed bit about ‘new theism’ - it has the wrong connotations. This set of conjectures is very similar, but distinct from, traditional theism. Perhaps it needs a new word, but it is a valid domain of knowledge.
The simulators, should they exist, do not appear to reward belief or worship. We have no reason to regard them as moral authorities, and they do not intervene, with or without appeals. Plus, while the simulators can presumably access all of the data in the simulation, that doesn't mean that they would be able to keep track of it, or predict the results should they interfere in a chaotic system, so there's no reason to suppose that they're functionally omniscient. Unless the superordinate reality is different in some very fundamental ways, it's impossible to predict what happens in chaotic systems in our universe in advance with precision, without actually running the simulation.
It does not in any way follow from the simulation argument that our morality was purposefully created by the simulators; by all appearances the simulation, should it happen to be one, is untampered with, and our utility functions evolved.
You can build up a religious edifice around simulationism, but like supernatural theism, it requires the acceptance of completely unevidenced assertions.
If one can pause a simulation and run it backwards or make multiple copies of a simulation, then from our perspective, for many purposes, the simulators will be omniscient. There might still be some limits in that regard (for example, if they are bound to only do computable operations then they will be limited in what math they can do).
Also, if a simulator wants a specific outcome, and there’s some random aspect in the simulation (such as from quantum mechanical effects) they could run the simulation multiple times until they got a result they wanted.
Unless the superordinate reality is different in some very fundamental ways, it's impossible to predict what happens in chaotic systems in our universe in advance with precision, without actually running the simulation.
This isn't quite true. As I understand it, there are very few results asserting minimal computational complexity of chaotic systems. The primary problem with chaotic systems is that predicting their behavior becomes very difficult if one has anything less than perfect accuracy, because very similar initial conditions can diverge in long-term behavior. That doesn't say much about how hard things are to compute if you have perfect information.
But running the simulation is running our reality. If they run multiple simulations with slight alterations to get the outcome they want, that’s many realities that actually occur which don’t achieve the results they want for every one that does. Likewise, rewinding the simulation may allow them to achieve the results they want, but it doesn’t prevent the events they don’t want from happening to us. Besides, there’s no evidence that our universe is being guided according to any agent’s utility function, and if it is, it’s certainly not much like ours.
This isn't quite true. As I understand it, there are very few results asserting minimal computational complexity of chaotic systems. The primary problem with chaotic systems is that predicting their behavior becomes very difficult if one has anything less than perfect accuracy, because very similar initial conditions can diverge in long-term behavior. That doesn't say much about how hard things are to compute if you have perfect information.
Chaotic systems are hard to project because small differences between the information in the system and the information in the model propagate to create large differences between the system and the model over time. To make the model perfectly accurate, it must follow all the same rules and contain all the same information. Projecting the simulation with perfect accuracy is equivalent to running the simulation.
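To make the divergence concrete, here is a minimal Python sketch (a toy chaotic map, not a model of our physics): two trajectories that start almost identically become completely uncorrelated after a few dozen steps, which is why any imperfect copy of the system's state quickly stops tracking it.

```python
# Toy illustration of sensitive dependence on initial conditions,
# using the logistic map x -> 4x(1-x), a standard chaotic system.
def logistic(x):
    return 4.0 * x * (1.0 - x)

a = 0.123456789          # "the system"
b = a + 1e-12            # "the model": differs by one part in a trillion
for step in range(60):
    a, b = logistic(a), logistic(b)
print(abs(a - b))        # after ~60 steps the two trajectories differ by order 1
```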
Besides, there’s no evidence that our universe is being guided according to any agent’s utility function, and if it is, it’s certainly not much like ours.
The SA mechanism places many constraints on the creator. They exist in a universe like ours, they are similar to our future descendants, they created us for a reason, and their utility function, morality, what have you, all evolved from a universe like ours.
Monte Carlo simulation: you don't run one simulation, you run many. There is no one single correct answer that the simulation is attempting to compute. It is a landscape, a multiverse, from which you sample.
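A minimal sketch of what "run many and sample" could look like, with an entirely hypothetical toy "simulation" (a seeded random walk standing in for a simulated history): the simulator looks at the whole distribution of outcomes, or keeps only the runs satisfying some condition it cares about.

```python
import random

def run_simulation(seed, steps=1000):
    """Toy stand-in for one simulated history: a random walk driven by a seed."""
    rng = random.Random(seed)
    x = 0
    for _ in range(steps):
        x += rng.choice((-1, 1))
    return x

# Monte Carlo: sample many histories, then study the landscape of outcomes
# or keep only the runs that hit some target the simulator wants.
outcomes = [run_simulation(seed) for seed in range(10_000)]
wanted = [o for o in outcomes if o > 50]      # hypothetical "desired" endpoint
print(len(wanted), "of", len(outcomes), "runs reached the target")
```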
But running the simulation is running our reality. If they run multiple simulations with slight alterations to get the outcome they want, that’s many realities that actually occur which don’t achieve the results they want for every one that does.
Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping there’s only one universe, the one where the simulators got what they wanted.
Besides, there’s no evidence that our universe is being guided according to any agent’s utility function, and if it is, it’s certainly not much like ours.
Yes, you’ve made that point before. I don’t disagree with it. I’m not sure why you are bringing it up again.
Chaotic systems are hard to project because small differences between the information in the system and the information in the model propagate to create large differences between the system and the model over time. To make the model perfectly accurate, it must follow all the same rules and contain all the same information.
It must contain the same information. It doesn’t need to contain the same rules.
Projecting the simulation with perfect accuracy is equivalent to running the simulation.
This isn't true. For example, the doubling map is chaotic. Despite that, many points can have their orbits calculated without such work. For example, if the starting point is rational, we can always give an exact value for any number of iterations with less computational effort than simply iterating the function. There are some complicating factors to this sort of analysis; in particular, if the universe is essentially discrete, then what we mean when we talk about chaos becomes subtle, and if the universe isn't discrete, then what we mean when we discuss computational complexity becomes subtle (we need to use Blum-Shub-Smale machines or something similar rather than Turing machines). But the upshot is that chaotic behavior is not equivalent to being computationally complex.
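As a concrete sketch of that claim (the doubling map is x ↦ 2x mod 1): for a rational starting point p/q, the n-th iterate is just (2^n · p mod q)/q, which modular exponentiation gives in O(log n) arithmetic steps, far cheaper than iterating the chaotic map n times.

```python
from fractions import Fraction

def doubling_iterate(x, n):
    """n-th iterate of the doubling map x -> 2x mod 1, one step at a time (O(n) work)."""
    for _ in range(n):
        x = (2 * x) % 1
    return x

def doubling_closed_form(p, q, n):
    """Same value for a rational start p/q via modular exponentiation (O(log n) work)."""
    return Fraction(pow(2, n, q) * p % q, q)

print(doubling_iterate(Fraction(3, 7), 20))   # 5/7
print(doubling_closed_form(3, 7, 20))         # 5/7, without iterating the map
```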
There have been some papers trying to map out connections between the two (and I don't know that literature at all), and superficially there are some similarities, but if someone could show deep, broad connections of the sort you seem to think are already known, that would be the sort of thing that could lead to a Turing Award or a Fields Medal.
Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping there’s only one universe, the one where the simulators got what they wanted.
But at any given time you may be in a branch that's going to be deleted or rewound because it doesn't lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don't want. So not only do we have no reason to suppose it's happening, it also wouldn't be particularly useful to us to suppose that the branch the simulators want is better for us than the ones they don't.
I concede that my understanding of the requirements to project a simulation of our universe may have been mistaken, but the conclusions jacob cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.
Which are the 'extraneous additions'?

Omniscience and omnipotence have already been discussed at length—the SA does not imply perfection in either category on the part of the creator, but this is a meaningless distinction. For all intents and purposes the creator would have the potential for absolute control over the simulation. It is of course much more of an open question whether the creator would ever intervene in any fashion.
(I discussed that at length elsewhere, but basically I think future posthumans would be less likely to intervene in our history while aliens would be more likely.)
Also, my points about the connectedness between morality and utility functions of creator and creation still stand. The SA requires that the creator made the simulation for a purpose in its universe, and the utility function or morality of the creator evolved from something like our descendants.
But at any given time you may be in a branch that’s going to be deleted or rewound because it doesn’t lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don’t want.
Not necessarily. It would depend on how narrow they wanted things and how often they intervened in this fashion. If such interventions are not very common then the majority of experience will be in universes which are very close to that desired by the simulators.

No disagreement there.
but the conclusions jacob cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.
it's impossible to predict what happens in chaotic systems in our universe in advance with precision, without actually running the simulation.
Yes, this precisely is the primary utility for the creator.
But humans do this too, for intelligence is all about simulation. We created computers to further amplify our simulation/intelligence.
I agree mostly with what you're saying, but let me clarify. I am fully aware of the practical limitations; by "functionally omniscient" I meant that they can analyze and observe any aspect of the simulation from a variety of perspectives, using senses far beyond what we can imagine, and that the flow of time itself need not be linear or continuous. This doesn't mean they are concerned with every little detail all of the time, but I find it difficult to believe that anything important, from their perspective, would be missed.
And yes, of course our morality appears to have evolved through natural genetic/memetic evolution, but the SA chains that morality to the creator's morality in several fashions. First, as we are close to the historical ancestors of the creator, our morality is also their historical morality. And second, to the extent we can predict and model the future evolution of our own descendants' morality, we are predicting the creator's morality. You know: "As man is, god was; as god is, man shall become."
I'm not sure what you mean by a 'religious edifice', or which assertions are unevidenced.
And yes, of course our morality appears to have evolved through natural genetic/memetic evolution, but the SA chains that morality to the creator's morality in several fashions. First, as we are close to the historical ancestors of the creator, our morality is also their historical morality. And second, to the extent we can predict and model the future evolution of our own descendants' morality, we are predicting the creator's morality.
This only makes sense in the very narrow version of the simulation hypothesis under which the simulators are in some way descended from humans or products of human intervention. That’s not necessarily the case.
That's true, but I'm not sure the "very narrow" qualifier is accurate. The creator candidates are: future humans, future aliens, ancient aliens. I think utility functions for any simulator civilization will be structurally similar, as they stem from universal physics, but perhaps that of future humans will be the most connected to our current one.
No. You are assuming that the simulators are evolved entities. They could also be AIs, for example. Moreover, there's no very good reason to assume that the moral systems would be similar. For example, consider if we had the ability to make very rough simulations and things about as intelligent as insects evolved in the simulation. Would we care? No. Nor would our moral sense in any way match theirs. So now suppose, for example, something vastly smarter than humans that lives in some strange 5-dimensional space. It is wondering whether star formation can occur in 3 dimensions and, if so, how it behaves. The fact that there's something resembling fairly stupid life that has shown up on some parts of its system isn't going to matter to it, unless some of it does something that interferes with what the entity is trying to learn (say the humans decide to start making Dyson spheres or engage in star lifting).
Incidentally, even this one could pattern-match to some forms of theism (For God's ways are not our ways...), which leads to a more general problem with this discussion. The apologetics and theology of most major religions have managed to say so many contradictory things (in this case the dueling claims are that we can't comprehend God's mysterious, ineffable plans, and that God has a moral system that matches ours) that it isn't hard to find something that pattern-matches any given claim.
The primary strong reason not to care about simulationism has nothing to do with whether or not it resembles theism; it is simply that it doesn't predict anything useful. There's no evidence of intervention, and we have no idea what probabilities to assign to different types of simulators. So the hypothesis can't pay rent.
No. You are assuming that the simulators are evolved entities. They could also be AIs for example
AIs don't just magically pop out of nothing. Like anything else under the sun, they systemically evolve from existing patterns. They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).
I would be surprised if future posthumans, or equivalent Singularity-tech aliens, would have moral systems just like ours.
On the other hand, moral or goal systems are not random, and are subject to evolutionary pressure just as much as anything else. So as we come to understand our goal systems or morality and develop more of a science of it, we can describe it in objective terms, see how it is likely to evolve, and learn the shape of the likely future goal systems of superintelligences in this universe.
Your insect example is not quite accurate. There are people right now who are simulating the evolution of early insects. Yes the number of researchers is small and they are currently just doing very rough weak simulation using their biological brains, but nonetheless. Also, our current time period does not appear to be a random sample in terms of historical importance. In fact, we happen to live in a moment which is probably of extremely high future historical importance. This is loosely predicted by the SA.
We do have a methodology for assigning probabilities to different types of simulators. First you start with a model of our universe and fill in the important gaps concerning the unobservables—both in the present, in terms of potential alien civilizations, and in the future, in terms of the shape of our future. Of this set of Singularity-level civilizations, we can expect them to run simulations of our current slice of space-time in proportion to its utility versus the expected utility of simulating other slices of space-time.
They could also run, and are likely to run, simulations of space-time pockets in other universes unlike ours, fictional universes, etc. However, a general rule applies—the more dissimilar the simulated universe is to the parent universe, the vaster the space of configurations becomes and the less utility the simulation has. So we can expect that the parent universe is roughly similar to ours.
The question of evidence for intervention depends on the quality of the evidence itself and the prior. The SA helps us to understand the prior.
Before the SA there was no mechanism for a creator, and so the prior for intervention was zero regardless of the evidence. That is no longer the case. (Nor is it yet a case for intervention)
AIs don't just magically pop out of nothing. Like anything else under the sun, they systemically evolve from existing patterns. They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).
Again, you are assuming that the entities arise from human intervention. The Simulation Hypothesis does not require that.
Your insect example is not quite accurate. There are people right now who are simulating the evolution of early insects.
How is it not accurate? I fail to see how the presence of such research makes my point invalid.
However a general rule applies—the more dissimilar the simulated universe is to the parent universe, the vaster the space of configurations becomes and the less utility the simulation has. So we can expect that the parent universe is roughly similar to ours.
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility. For example, universes that work off of cellular automata would be really interesting despite the fact that our universe doesn't seem to operate in that fashion.
Before the SA there was no mechanism for a creator, and so the prior for intervention was zero regardless of the evidence. That is no longer the case. (Nor is it yet a case for intervention)
This confuses me. Generally, the problem with assigning a prior of zero to a claim is just what you’ve said here, that it is stuck at zero no matter how much you update with evidence. This is bad. But, you then seem to be asserting that an update did occur due to the simulation hypothesis. This leaves me confused.
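The zero-prior point can be made explicit with Bayes' rule (a minimal sketch; the likelihood numbers are arbitrary): the posterior is proportional to prior times likelihood, so a prior of exactly zero stays zero no matter how strong the evidence, while any nonzero prior can be updated.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule for a binary hypothesis H after observing evidence E."""
    p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    return prior * p_evidence_if_true / p_evidence

print(posterior(0.0, 0.99, 0.01))    # 0.0 -- a prior of exactly zero cannot move
print(posterior(1e-9, 0.99, 0.01))   # ~1e-7 -- a tiny nonzero prior can move
```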
No. You are assuming that the simulators are evolved entities. They could also be AIs for example
They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).
Again, you are assuming that the [simulator] entities arise from human intervention. The Simulation Hypothesis does not require that.
Sure, but the SH requires some connection between the simulated universe and the simulator universe.
If you think of the entire ensemble of possible universes as a landscape, it is true that any point-universe in that landscape can be simulated by any other (of great enough complexity). However, that doesn’t mean the probability distribution is flat across the landscape.
The farther away the simulated universe is from the parent universe in this landscape, the less correlated, relevant, and useful its simulation is to the parent universe. In addition, the farther away you go in this landscape from the parent universe, the set of possible universes one could simulate expands … at least exponentially.
The consequence of all this is that the probability distribution across potential universes that could be simulating us is tightly clustered around universes similar to ours—different sample points in the multiverse described by our same physics.
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility.
Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence.
This has been mathematically formalized in AI theory and AIXI:
Intelligence is simulation-driven search through the landscape of potential realizable futures for the path that maximizes future utility.
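For what it's worth, here is a toy sketch in that spirit (not AIXI itself, which is an uncomputable idealization, but a crude finite-horizon Monte Carlo planner): it simulates possible futures under each action and picks the action whose sampled futures score best. The example world and all names are hypothetical.

```python
import random

def plan(state, simulate, actions, utility, depth=3, samples=30, rng=random.Random(0)):
    """Pick the action whose simulated futures have the highest average utility."""
    if depth == 0:
        return None, utility(state)
    best_action, best_value = None, float("-inf")
    for action in actions:
        total = 0.0
        for _ in range(samples):
            future = simulate(state, action, rng)                  # sample one future
            _, value = plan(future, simulate, actions, utility,
                            depth - 1, max(1, samples // 3), rng)  # search deeper
            total += value
        if total / samples > best_value:
            best_action, best_value = action, total / samples
    return best_action, best_value

# Hypothetical world: a walk on a line with noisy moves; utility = closeness to +10.
def noisy_step(state, action, rng):
    return state + action + rng.choice((-1, 0, 1))

print(plan(0, noisy_step, actions=(-1, 0, 1), utility=lambda s: -abs(s - 10)))
```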
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility.
Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence.
No. See my earlier example with cellular automata. Our universe isn’t based on cellular automata but we’d still be interested in running simulations of large universes with such a base just because they are interesting. The fact that our universe has very little similarity to those universes doesn’t reduce my utility in running such simulations.
That said, I agree that there should be a rough correlation where we’d expect universes to be more likely to simulate universes similar to them. I don’t think this necessarily has anything to do with utility though, more that entities are more likely to monkey around with the laws of their own universes and see what happens. Due to something like an anchoring effect, entities should be more likely to imagine universes that are in some way closer to their own universe compared to the massive landscape of possible universes.
But, that similarity could be so weak as to have little or no connection to whether the simulators care about the simulated universe.
How so?