it’s impossible to predict what happens in chaotic systems in our universe in advance with precision, without actually running the simulation…
Yes, this is precisely the primary utility for the creator.
But humans do this too, for intelligence is all about simulation. We created computers to further amplify our simulation/intelligence.
I agree mostly with what you’re saying, but let me clarify. I am fully aware of the practical limitations. By ‘functionally omniscient’ I meant that they can analyze and observe any aspect of the simulation from a variety of perspectives, using senses far beyond what we can imagine, and that the flow of time itself need not be linear or continuous. This doesn’t mean they are concerned with every little detail all of the time, but I find it difficult to believe that anything important, from their perspective, would be missed.
And yes, of course our morality appears to have evolved through natural genetic/memetic evolution, but the SA ties that morality to the creator’s morality in several ways. First, as we are close to the historical ancestors of the creator, our morality is also their historical morality. And second, to the extent we can predict and model the future evolution of our own descendants’ morality, we are predicting the creator’s morality. You know: “As man is, god was; as god is, man shall become.”
I’m not sure what you mean by your ‘religious edifice’, or which assertions are unevidenced.
And yes, of course our morality appears to have evolved through natural genetic/memetic evolution, but the SA ties that morality to the creator’s morality in several ways. First, as we are close to the historical ancestors of the creator, our morality is also their historical morality. And second, to the extent we can predict and model the future evolution of our own descendants’ morality, we are predicting the creator’s morality.
This only makes sense in the very narrow version of the simulation hypothesis under which the simulators are in some way descended from humans or products of human intervention. That’s not necessarily the case.
That’s true, but I’m not sure if the “very narrow” qualifier is accurate. The creator candidates are: future humans, future aliens, ancient aliens. I think utility functions for any simulator civilizations will be structurally similar, as they stem from universal physics, but perhaps that of future humans will be the most connected to our current one.
No. You are assuming that the simulators are evolved entities. They could also be AIs, for example. Moreover, there’s no very good reason to assume that the moral systems would be similar. For example, consider if we had the ability to make very rough simulations, and things about as intelligent as insects evolved in the simulation. Would we care? No. Nor would our moral sense in any way match theirs. Now suppose, for example, there is something vastly smarter than humans that lives in some strange 5-dimensional space, and it is wondering whether star formation can occur in 3 dimensions and, if so, how it behaves. The fact that something resembling fairly stupid life has shown up on some parts of its system isn’t going to matter to it, unless some of that life does something that interferes with what the entity is trying to learn (say, the humans decide to start making Dyson spheres or engage in star lifting).
Incidentally, even this one could pattern match to some forms of theism (“For God’s ways are not our ways...”), which leads to a more general problem with this discussion. The apologetics and theology of most major religions have managed to say so many contradictory things (in this case the dueling claims are that we can’t comprehend God’s mysterious, ineffable plans, and that God has a moral system that matches ours) that it isn’t hard to find something that pattern matches with any given claim.
The primary strong reason not to care about simulationism has nothing to do with whether or not it has a resemblance to theism; it’s the simple reason that it doesn’t predict anything useful. There’s no evidence of intervention, and we have no idea what probabilities to assign to different types of simulators. So the hypothesis can’t pay rent.
No. You are assuming that the simulators are evolved entities. They could also be AIs, for example.
AIs don’t just magically pop out of nothing. Like anything else under the sun, they systemically evolve from existing patterns. They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).
I would be surprised if future posthumans, or equivalent Singularity-tech aliens, would have moral systems just like ours.
On the other hand, moral or goal systems are not random; they are subject to evolutionary pressure just as much as anything else. So as we come to understand our goal systems and morality and develop more of a science of them, we can describe them in objective terms, see how they are likely to evolve, and learn the shape of the likely future goal systems of superintelligences in this universe.
Your insect example is not quite accurate. There are people right now who are simulating the evolution of early insects. Yes, the number of researchers is small, and they are currently just doing very rough, weak simulation using their biological brains, but it is happening nonetheless. Also, our current time period does not appear to be a random sample in terms of historical importance. In fact, we happen to live in a moment which is probably of extremely high future historical importance. This is loosely predicted by the SA.
We do have a methodology for assigning probabilities to different types of simulators. First you start with a model of our universe and fill in the important gaps concerning the unobservables—both in the present, in terms of potential alien civilizations, and in the future, in terms of the shape of our future. Of this set of Singularity-level civilizations, we can expect them to run simulations of our current slice of space-time in proportion to its utility vs the expected utility of simulating other slices of space-time.
They could also run and are likely to run simulations of space-time pockets in other universes unlike ours, fictional universes, etc. However a general rule applies—the more dissimilar the simulated universe is to the parent universe, the vaster the space of configurations becomes and the less utility the simulation has. So we can expect that the parent universe is roughly similar to ours.
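One way to write that weighting out explicitly (my own notation, just to make the structure of the claim visible, not a derivation): let s range over candidate Singularity-level civilizations and x over the slices of space-time (or whole universes) they might simulate. Then, roughly,

\[
\Pr(\text{our simulator} = s) \;\propto\; \Pr(s \text{ exists}) \cdot R_s \cdot \frac{u_s(\text{our slice})}{\sum_x u_s(x)},
\]

where R_s is the total simulation capacity of s and u_s(x) is the utility s assigns to simulating slice x. The claim in the preceding paragraphs is just that u_s falls off sharply for slices very unlike the simulator’s own history.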
The question of evidence for intervention depends on the quality of the evidence itself and the prior. The SA helps us to understand the prior.
Before the SA there was no mechanism for a creator, and so the prior for intervention was zero regardless of the evidence. That is no longer the case. (Nor is it yet a case for intervention)
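To spell out the Bayesian point behind that (standard probability, nothing specific to the SA): a prior of exactly zero is stuck at zero no matter what evidence arrives, since

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;=\; 0 \quad \text{whenever } P(H) = 0 \text{ and } P(E) > 0,
\]

which is why, pre-SA, intervention was a non-starter regardless of any apparent evidence.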
AIs don’t just magically pop out of nothing. Like anything else under the sun, they systemically evolve from existing patterns. They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).
Again, you are assuming that the entities arise from human intervention. The Simulation Hypothesis does not require that.
Your insect example is not quite accurate. There are people right now who are simulating the evolution of early insects.
How is it not accurate? I fail to see how the presence of such research makes my point invalid.
However a general rule applies—the more dissimilar the simulated universe is to the parent universe, the vaster the space of configurations becomes and the less utility the simulation has. So we can expect that the parent universe is roughly similar to ours.
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility. For example, universes that work off of cellular automata would be really interesting despite the fact that our universe doesn’t seem to operate in that fashion.
Before the SA there was no mechanism for a creator, and so the prior for intervention was zero regardless of the evidence. That is no longer the case. (Nor is it yet a case for intervention)
This confuses me. Generally, the problem with assigning a prior of zero to a claim is just what you’ve said here, that it is stuck at zero no matter how much you update with evidence. This is bad. But, you then seem to be asserting that an update did occur due to the simulation hypothesis. This leaves me confused.
No. You are assuming that the simulators are evolved entities. They could also be AIs, for example.
They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).
Again, you are assuming that the [simulator] entities arise from human intervention. The Simulation Hypothesis does not require that.
Sure, but the SH requires some connection between the simulated universe and the simulator universe.
If you think of the entire ensemble of possible universes as a landscape, it is true that any point-universe in that landscape can be simulated by any other (of great enough complexity). However, that doesn’t mean the probability distribution is flat across the landscape.
The farther away the simulated universe is from the parent universe in this landscape, the less correlated, relevant, and useful its simulation is to the parent universe. In addition, the farther away you go in this landscape from the parent universe, the more the set of possible universes one could simulate expands … at least exponentially.
The consequence of all this is that the probability distribution across potential universes that could be simulating us is tightly clustered around universes similar to ours—different sample points in the multiverse described by our same physics.
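A cartoon of that argument (my notation, not a derivation): write d for the “distance” between a candidate parent universe and ours in this landscape, u(d) for the utility that parent gets from simulating a universe like ours, and N(d) for the number of distinct universes competing for its simulation budget at that distance. Then, very roughly,

\[
\Pr(\text{our parent lies at distance } d) \;\propto\; \frac{u(d)}{N(d)}, \qquad u(d) \text{ non-increasing}, \quad N(d) \gtrsim e^{\lambda d},
\]

so unless the population of candidate parents itself grows even faster with d, the probability mass concentrates near d ≈ 0, i.e., on parents whose physics resembles ours.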
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility.
Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence.
This has been mathematically formalized in AI theory and AIXI:
Intelligence is simulation-driven search through the landscape of potential realizable futures for the path that maximizes future utility.
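For reference (my sketch of Hutter’s AIXI formulation, from memory, so take the details with a grain of salt), the agent picks actions by an expectimax over all computable environments consistent with its history, weighted by simplicity:

\[
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[ r_k + \cdots + r_m \bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

The inner sum over programs q amounts to simulating every environment that could have produced the observations so far, which is the “simulation-driven search” above made precise.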
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility.
Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence.
No. See my earlier example with cellular automata. Our universe isn’t based on cellular automata but we’d still be interested in running simulations of large universes with such a base just because they are interesting. The fact that our universe has very little similarity to those universes doesn’t reduce my utility in running such simulations.
That said, I agree that there should be a rough correlation where we’d expect universes to be more likely to simulate universes similar to them. I don’t think this necessarily has anything to do with utility though, more that entities are more likely to monkey around with the laws of their own universes and see what happens. Due to something like an anchoring effect, entities should be more likely to imagine universes that are in some way closer to their own universe compared to the massive landscape of possible universes.
But, that similarity could be so weak as to have little or no connection to whether the simulators care about the simulated universe.