This is nuking the hypothetical. For any action that someone claims to be a good idea, one can specify a world where taking that action causes some terrible outcome.
If the paperclip maximizer worked by simulating trillions of human-like agents doing fulfilling intellectual tasks, I’d be very sad to press the button.
If you would be sad because, and only because, it was simulating humans (rather than because the paperclipper was conscious), my point goes through.
Ta!