I understood the original stipulation that the simulation doesn’t interact with our world to mean that we can’t affect it to rescue the suffering person.
Let’s consider your alternative scenario: the person in the simulation can’t affect our universe usefully (the simulating machine is well-wrapped and looks like a uniform black body from the outside), and we can’t observe it directly, but we know there’s a suffering person inside and we can choose to break in and modify (or stop) the simulation.
In this situation I would indeed choose to intervene and stop the suffering. Your question is a very good one: why do I choose, here, to accept the ‘default’ interpretation which says there is a suffering person inside the simulation?
The simple answer is that I’m human, and I don’t have an explicit or implicit-and-consistent utility function anyway. If people around me tell me there’s a suffering person inside the simulation, I’d be inclined to accept this view.
How much effort or money would I be willing to spend to help that suffering simulated person? Probably zero, or near zero. There are many real people suffering today, and I’ve never done anything to explicitly help any of them anonymously.
In my previous comments I was thinking about utility functions in general—what is possible, self-consistent, and optimizes something—rather than human utility functions or my own. As far as I personally am concerned, I do indeed accept the ‘default’ interpretation of a simulation (when forced to make a judgement) because it’s easiest to operate that way and my main goal (in adjusting my utility function) is to achieve my supergoals smoothly, rather than to achieve some objectively correct super-theory of morals. Thanks for helping me see that.