If your consequentialist ethics cares only about the suffering of sentient beings, then unless the simulations can somehow affect the simulating agent and render its actions less optimal, creating suffering beings is the only way there can be computation hazards.
If your ethics cares about other things, like piles made up of a prime number of rocks, then a computation that affects those is a computation hazard too; or if the simulations can affect the simulator, that obviously opens a whole can of worms.
(For example, there’s apparently a twisty problem of ‘false proofs’ in the advanced decision theories, where simulating a possible proof leads the agent to make a suboptimal choice; or the simulator could stumble upon a highly optimized program that takes it over. I’m sure there are other scenarios like these that I haven’t thought of.)
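(To unpack the ‘false proofs’ point a little, here is a rough sketch of the usual spurious-counterfactual argument, sometimes called the ‘5-and-10’ problem. Everything in it, including the agent A(), the two actions worth 5 and 10, and the theory T, is an illustrative assumption rather than anything stated in these comments, and the details of how the proof search is guaranteed to terminate are glossed over.)

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    % A toy proof-based agent, under illustrative assumptions: A() picks between
    % actions 5 and 10, the world yields U()=5 for action 5 and U()=10 for action
    % 10, and A() acts only on counterfactuals it can prove in a fixed theory T.
    Let $S$ abbreviate the spurious counterfactual
    \[
      S \;\equiv\; \bigl(A() = 5 \rightarrow U() = 5\bigr) \wedge \bigl(A() = 10 \rightarrow U() = 0\bigr),
    \]
    and let the agent output $5$ if its (complete) proof search finds a $T$-proof
    of $S$, and $10$ otherwise. Reasoning inside $T$: assume $\Box_T S$. Then the
    proof search succeeds, so $A() = 5$ and hence $U() = 5$; the first conjunct of
    $S$ holds outright and the second holds vacuously, since $A() \neq 10$. Thus
    $T \vdash \Box_T S \rightarrow S$, and L\"ob's theorem gives $T \vdash S$. The
    agent therefore really does find the proof, outputs $5$, and forgoes the
    better action $10$: merely entertaining the possible proof made it
    self-fulfilling.
    \end{document}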
If your consequentialist ethics cares only about the suffering of sentient beings, then unless the simulations can somehow affect the simulating agent and render its actions less optimal, creating suffering beings is the only way there can be computation hazards.
Agreed. The sentence I quoted seemed to indicate that Alex thought he had a counterexample, but it turns out we were just using different definitions of “computational hazards”.
The only counterexample I can think of is a computation that invents cures or writes symphonies and then, in the course of running, indifferently discards them. This could be considered a large negative consequence of “mere” computation, but yeah, not really.