I think that a correct metaphor for computer-simulating another universe is not that we create it, but that we look at it.
“Correct” is too strong. It might be a useful metaphor for showing which way the information flows, but it doesn’t address the question of the moral worth of running a simulation. Certain computations must have moral worth: consider, for example, running an uploaded person in a similar setup (so that they can’t observe the outside world and can only use whatever was pre-packaged with them, but can be observed by the simulators). Running this computation appears to be morally relevant: it’s either better to run it or to avoid running it. Similarly with simulating a world: it’s either better to run it or not.
Whether it’s better to simulate a world appears to depend on what’s going on inside it. Any decision made within a world affects the value of each particular simulation of that world, and the more simulations there are, the greater the decision’s impact, because it influences the moral value of more simulations. Thus, by deciding to run a simulation, you amplify the moral value of the world you are simulating and of the decisions that take place in it, which can be interpreted as equivalent to increasing its probability mass.
Just how much additional probability mass a simulation provides is unclear: a second simulation probably adds less than the first, and the first might already matter very little. It probably depends in some way on how a world is defined.
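To make the amplification picture concrete, here is a toy numerical sketch. It assumes, purely for illustration, that the k-th simulation of a world adds geometrically diminishing probability mass and that an in-world decision’s moral impact scales with the world’s total mass; the decay rate and unit decision value are my assumptions, not anything the comment commits to:

```python
# Toy model (illustrative only): each simulation adds probability mass
# with diminishing returns, and the same in-world decision is weighted
# by the world's total mass.

def added_mass(k: int, base: float = 0.5) -> float:
    """Assumed: the k-th simulation (1-indexed) adds mass base**k,
    so each copy adds half as much as the previous one."""
    return base ** k

def total_mass(n_simulations: int) -> float:
    """Total probability mass contributed by n simulations."""
    return sum(added_mass(k) for k in range(1, n_simulations + 1))

def decision_impact(value_of_decision: float, n_simulations: int) -> float:
    """Moral impact of one in-world decision, scaled by total mass:
    more simulations -> more mass -> greater impact."""
    return value_of_decision * total_mass(n_simulations)

if __name__ == "__main__":
    for n in (1, 2, 5, 10):
        print(n, round(total_mass(n), 4), round(decision_impact(1.0, n), 4))
```

Under geometric decay the total mass is bounded (here it approaches 1), so the first simulation dominates and each further copy matters less, matching the intuition above that a second simulation probably adds less than the first.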
Why? Seems like the simulated universe gets at least as much additional reality juice as the simulating universe has.
It’s starting to seem like the concept of “probability mass” is violating the “anti-zombie principle”.
Edit: this is why I don’t believe in the “anti-zombie principle”.