I’m going to shamelessly quote myself from a previous discussion on waterfall ethics,

I don’t think our intuitions about what “really happens” (versus what is “mathematically well defined”) are useful. I think we have to zoom out at least one level and realize that our moral and ethical intuitions only mean anything within our particular instantiation of our causal framework. We can’t be morally responsible for the notional space of computable torture simulations, because they exist whether or not we “carry them out.” But perhaps we are morally responsible for particular instantiations of those algorithms.
I also want to draw attention to the statement in the original post,
If these simulations are sufficiently precise, then they will be people in and of themselves. The simulations could cause those people to suffer, and will likely kill them by ending the simulation when the prediction or answer is given.
This usage of “killing” is conceptually very distant from the intuitive notion, for the reasons you (Pentashagon) indicate. I don’t feel that the matter of how to handle moral culpability for events occurring in causally disconnected algorithms is sufficiently settled for us to meaningfully have this conversation.