You place too much confidence in a confusing idea. What does it tell you, exactly (in anticipated experience, in moral evaluation of decisions, etc.)? What does the original question about the cause of the world even mean?
When considering the simulation scenario, you are discussing the world in which a simulation is running. In that world, it does matter how many simulations are running and what they are running: the stuff of the world is organized in certain patterns, the patterns that implement the simulations. For any non-trivial preference about states of that world, different ways of organizing its matter will be valued differently, and any (hypothetical) change in the content of the simulations, or in their number, is a change to the content of that world, a change that will be either preferred or not.
Now, with perfectly autonomous simulations, the inhabitants of those simulations have no knowledge of the content of the outside world; they don't even have any knowledge of whether their own simulation exists (they are not able to pick out that particular hypothesis). But this lack of knowledge is a separate issue from the moral weight of possible states of the (outside) world, or from their own decisions possibly affecting the state of the outside world by being observed. A high level of uncertainty doesn't rob specific situations of distinctions in value.
The simulation scenario challenges the definition of the "world" you are considering, as far as decision-making is concerned. If the computation inside a simulation can affect the outside world, and an agent inside a simulation has a preference detailed enough to distinguish between possible states of the outside world, then it will try to act in such a way as to make the outside world better. This is acausal control: the question of which world the agent "really" lives in is meaningless in this context; the agent is controlling all the possibilities defined in terms of the agent and relevant to its preference, including the ones that contain it inside a simulated "apparent" world.
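To make that point slightly more concrete, here is a minimal toy sketch (entirely illustrative; the payoffs and function names are made up, not part of the scenario above). The same decision procedure is instantiated inside the simulation and, through the simulator observing it, bears on the outside world, so the agent evaluates a policy by its consequences in all the worlds where that policy runs, without first settling which world it "really" occupies.

```python
# Toy sketch only: an agent's deterministic policy runs inside a simulation,
# and the outside world's state depends on what the simulator observes.
# All payoffs below are hypothetical placeholders.

def agent_policy(cooperate: bool) -> bool:
    # The same policy is instantiated in every copy of the agent.
    return cooperate

def inside_world_value(choice: bool) -> float:
    # Hypothetical payoff within the simulated (apparent) world.
    return 1.5 if choice else 2.0

def outside_world_value(observed_choice: bool) -> float:
    # Hypothetical payoff the outside world assigns to what it observes
    # happening in the simulation.
    return 10.0 if observed_choice else 1.0

def total_value(cooperate: bool) -> float:
    # The agent cannot locate itself, so it scores a policy by its
    # consequences in every world where that policy is instantiated.
    choice = agent_policy(cooperate)
    return inside_world_value(choice) + outside_world_value(choice)

best = max([True, False], key=total_value)
print(best, total_value(best))  # -> True 11.5
```

The point of the sketch is only that the comparison ranges over both worlds at once; nothing in the decision requires answering "which world am I in?"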
There are other fundamental problems with this stuff, of course, like the inability to say what "all possible mathematical structures" means. You won't find this defined mathematically; it's all confusion and analogy.