If your consequentialist ethics cares only about suffering sentient beings, then creating suffering beings is the only way there can be computation hazards, unless the simulations can somehow affect the simulating agent and render its actions less optimal.
Agreed. The sentence I quoted seemed to indicate that Alex thought he had a counterexample, but it turns out we were just using different definitions of “computational hazards”.
The only counterexample I can think of is one where the computation invents cures or writes symphonies and then, in the course of the computation, indifferently discards them. This could be considered a large negative consequence of “mere” computation, but yeah, not really.