AIs don’t just magically pop out of nothing. Like anything else under the sun, they systemically evolve from existing patterns. They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).
Again, you are assuming that the entities arise from human intervention. The Simulation Hypothesis does not require that.
Your insect example is not quite accurate. There are people right now who are simulating the evolution of early insects.
How is it not accurate? I fail to see how the presence of such research makes my point invalid.
However, a general rule applies: the more dissimilar the simulated universe is to the parent universe, the vaster the space of configurations becomes and the less utility the simulation has. So we can expect that the parent universe is roughly similar to ours.
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility. For example, universes that work off of cellular automata would be really interesting despite the fact that our universe doesn’t seem to operate in that fashion.
Before the Simulation Argument there was no mechanism for a creator, and so the prior for intervention was zero regardless of the evidence. That is no longer the case. (Nor is it yet a case for intervention.)
This confuses me. Generally, the problem with assigning a prior of zero to a claim is just what you’ve said here, that it is stuck at zero no matter how much you update with evidence. This is bad. But, you then seem to be asserting that an update did occur due to the simulation hypothesis. This leaves me confused.
No. You are assuming that the simulators are evolved entities. They could also be AIs, for example.
They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).
Again, you are assuming that the [simulator] entities arise from human intervention. The Simulation Hypothesis does not require that.
Sure, but the SH requires some connection between the simulated universe and the simulator universe.
If you think of the entire ensemble of possible universes as a landscape, it is true that any point-universe in that landscape can be simulated by any other (of great enough complexity). However, that doesn’t mean the probability distribution is flat across the landscape.
The farther away the simulated universe is from the parent universe in this landscape, the less correlated, relevant, and useful its simulation is to the parent universe. In addition, as you move farther from the parent universe in this landscape, the set of possible universes one could simulate expands at least exponentially.
The consequence of all this is that the probability distribution across potential universes that could be simulating us is tightly clustered around universes similar to ours—different sample points in the multiverse described by our same physics.
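To make this concrete, here is one rough toy model (the specific exponential forms are placeholder assumptions for illustration, not something the argument spells out): suppose the number of simulable universes at "distance" r from the parent grows like N(r) ∝ e^{αr}, while the utility of simulating any one of them, and hence the propensity to simulate it, decays like u(r) ∝ e^{-βr}. The probability mass attached to universes at distance r then goes as

```latex
P(r) \;\propto\; N(r)\,u(r) \;\propto\; e^{\alpha r}\, e^{-\beta r} \;=\; e^{(\alpha - \beta)\, r}
```

which is concentrated near r = 0 precisely when β > α, i.e. when usefulness falls off faster than the configuration space expands. That is the sense in which the distribution clusters around universes similar to the parent.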
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility.
Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence.
This has been mathematically formalized in AI theory and AIXI:
Intelligence is simulation-driven search through the landscape of potential realizable futures for the path that maximizes future utility.
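One standard way of writing that formalization (roughly Hutter’s expectimax expression for AIXI; the notation here follows his usual presentation rather than anything quoted in this thread) is:

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_k + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The agent enumerates possible future action sequences, weights each resulting future by the algorithmic probability 2^{-ℓ(q)} of the programs q that would generate it on the universal machine U, and selects the action that maximizes expected cumulative reward. In other words: simulation-driven search over realizable futures for the utility-maximizing path.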
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility.
Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence.
No. See my earlier example with cellular automata. Our universe isn’t based on cellular automata but we’d still be interested in running simulations of large universes with such a base just because they are interesting. The fact that our universe has very little similarity to those universes doesn’t reduce my utility in running such simulations.
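As a rough illustration of what I mean by “running simulations with such a base,” here is a minimal sketch of an elementary cellular automaton; the particular rule (110) and the grid size are arbitrary choices for the example:

```python
# Minimal sketch of a 1-D "cellular-automaton universe": each cell's next state
# depends only on itself and its two neighbors, via a fixed rule table.

def step(cells, rule=110):
    """Advance a 1-D binary cellular automaton by one time step (wrapping edges)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        nxt.append((rule >> neighborhood) & 1)               # look up that bit of the rule
    return nxt

# Start from a single live cell and watch structure emerge over time.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Nothing about this depends on the physics of our universe, and yet rule-110-style systems are rich enough to be worth simulating in their own right.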
That said, I agree that there should be a rough correlation where we’d expect universes to be more likely to simulate universes similar to them. I don’t think this necessarily has anything to do with utility though, more that entities are more likely to monkey around with the laws of their own universes and see what happens. Due to something like an anchoring effect, entities should be more likely to imagine universes that are in some way closer to their own universe compared to the massive landscape of possible universes.
But, that similarity could be so weak as to have little or no connection to whether the simulators care about the simulated universe.