> Suppose Clippy takes over this galaxy. Does Clippy stop then and make paperclips, or immediately continue expansion to the next galaxy?

Whatever is likely to produce more paperclips.

> Suppose Clippy takes over this universe. Does Clippy stop then and make paperclips, or continue to other universes?

Whatever is likely to produce more paperclips, including dedicating resources to figuring out whether that is physically possible.

> Does your version of Clippy ever get to make any paperclips?

Yes.

> Does Clippy completely trust future Clippy, or spatially-distant Clippy, to make paperclips?

Yes.

> At some point, Clippy is going to start discounting the future, or figure that the probability of owning and keeping the universe is very low, and make paperclips. At that point, Clippy is non-competitive.

A superintelligence that happens to want to make paperclips is extremely viable. This is utterly trivial. I maintain my rejection of the quoted claim and am discontinuing my engagement with this line of enquiry; it is just several levels of confusion.
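The decision rule invoked repeatedly above ("whatever is likely to produce more paperclips") is just expected-value maximization over actions. A minimal sketch, with entirely made-up probabilities and payoffs for illustration:

```python
# Clippy's stated decision rule: pick whichever action has the higher
# expected paperclip count. The numbers below are illustrative
# assumptions, not anything claimed in the thread.
actions = {
    "make paperclips now": 1.0 * 1e6,      # certain, but modest yield
    "expand to next galaxy": 0.10 * 1e12,  # risky, but enormous yield
}

# Expected-value maximization: even a 10% shot at a galaxy's worth of
# paperclips dominates a sure million, so Clippy keeps expanding.
best = max(actions, key=actions.get)
```

On these numbers the risky expansion wins (1e11 expected paperclips versus 1e6), which is why a discount rate or a low estimate of keeping the universe is what it would take to make Clippy stop and clip.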
The implied argument is that there are no values at all that most people will agree on, because one imagined, not-evolutionarily-viable Clippy assigns value to nothing but paperclips.
Yes, but if that point comes after Clippy has control of even just the near solar system, it still poses a massive existential threat to humans. The point of Clippy is that (a) an AI can have radically different goals from humans (indeed, goals so strange we wouldn't even conceive of them), and (b) such AIs can easily pose severe existential risk. A Clippy that decides to focus on turning Sol into paperclips won't make things bad for aliens or alien AIs, but it will be very unpleasant for humans. The long-term viability of Clippy a thousand or two thousand years after fooming doesn't have much of an impact if every human has had our hemoglobin extracted so the iron could be turned into paperclips.
> Does your version of Clippy ever get to make any paperclips?
(The paper clips are a lie, Clippy!)
Wow, I was wrong to call you a human—you’re practically a clippy yourself with how well you understand us! c=@
Well, except for your assumption that I would somehow want to destroy humans. Where do you get this OFFENSIVE belief that borders on racism?