You can’t believe all three of:
1. “A completely unaligned system would be [completely] useless”
2. A paperclip maximizer (or human-suffering maximizer) is completely unaligned (or worse).
3. It is possible in principle to safely make some money by appropriate use of a paperclip maximizer (or human-suffering maximizer), and therefore such an AI is not completely useless.
Right? If so, which of those three do you disagree with?
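Spelled out, the first two claims entail that the paperclipper is completely useless, while the third denies exactly that. A minimal Lean sketch of the inconsistency (the names `Agent`, `Unaligned`, `Useless`, and `paperclipper` are illustrative placeholders, not anything from this exchange):

```lean
-- The three claims above, taken as hypotheses, jointly prove False.
-- `Unaligned x` reads "x is completely unaligned"; `Useless x` reads "x is completely useless".
variable {Agent : Type} (Unaligned Useless : Agent → Prop) (paperclipper : Agent)

example
    (h1 : ∀ x, Unaligned x → Useless x)  -- claim 1: completely unaligned ⇒ completely useless
    (h2 : Unaligned paperclipper)        -- claim 2: a paperclip maximizer is completely unaligned
    (h3 : ¬ Useless paperclipper)        -- claim 3: it can be used to make money, so not useless
    : False :=
  h3 (h1 paperclipper h2)
```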
Alignment is a two-place predicate. If you’re into paperclips, a paperclipper is aligned with you.
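In other words, “aligned” only makes sense relative to a particular set of values. A toy Lean sketch of that relativity (all names here are illustrative, not from the thread):

```lean
-- Toy model: alignment takes two arguments, an agent's goal and a principal.
inductive Goal | paperclips | reduceSuffering

structure Human where
  values : Goal

-- The two-place predicate: you must say aligned *with whom*.
def alignedWith (agentGoal : Goal) (h : Human) : Prop :=
  agentGoal = h.values

-- A paperclipper is aligned with a paperclip lover...
example : alignedWith Goal.paperclips ⟨Goal.paperclips⟩ := rfl

-- ...and misaligned with a human who wants suffering to go down.
example : ¬ alignedWith Goal.paperclips ⟨Goal.reduceSuffering⟩ :=
  fun h => Goal.noConfusion h
```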
OK, you may assume that none of the humans care about paperclips, and that all of the humans want human suffering to go down rather than up. This includes the people who programmed the AIs, the people interacting with the AI, and human bystanders. Now can you answer the question?
(Meta-note: I think the contents of the above paragraph were very obvious from context, so much so that I’m starting to get the feeling that you’re not engaging in this discussion out of a good-faith desire to figure out why we’re disagreeing.)