I think I get what you’re saying, but I’m not sure I agree. If the paperclip maximizer worked by simulating trillions of human-like agents doing fulfilling intellectual tasks, I’d be very sad to press the button. If I were convinced that pressing the button would result in less agent-eudaimonia-time over the universe’s course, I wouldn’t press it at all.
...so I’m probably a pretty ideal target audience for your post/sequence. Looking forward to it!
This is nuking the hypothetical. For any action that someone claims to be a good idea, one can specify a world where taking that action causes some terrible outcome.
If the paperclip maximizer worked by simulating trillions of human-like agents doing fulfilling intellectual tasks, I’d be very sad to press the button.
If you would be sad because, and only because, it were simulating humans (rather than because the paperclipper were conscious), my point goes through.
Ta!