Yeah, I’ve read through most of Clippy’s posts… what makes you so sure it’s not Eliezer? Just that he’s currently working on his book?
Clippy seems awfully straightforward compared to Eliezer, which I realize isn’t a strong argument.
Eliezer would be a more believable Clippy.
Clippy seems to be someone trying to make the point that a paperclip maximizer is not necessarily bad for the universe, where Eliezer uses a paperclip maximizer as the canonical example of how AGI could go horribly wrong. That’s not necessarily good evidence that it isn’t Eliezer, but Clippy’s views are out of sync with Eliezer’s views.
Eliezer’s point is not that a paperclip maximizer is bad for the universe, it’s that a superintelligent AGI paperclip maximizer is bad for the universe. Clippy’s views here actually seem more similar to Robin’s idea that there is no reason for beings with radically divergent value systems not to live together happily and negotiate through trade.
That’s exactly what a not-yet-superintelligent paperclip maximizer would want us to think.
(When Eliezer plays an AI in a box, the AI’s views are probably out of sync with Eliezer’s views too. There’s no rule that says the AI has to be truthful in the AI Box experiment, because there’s no such rule about AIs in reality. It’s supposed to be maximally persuasive, and you’re supposed to resist. If a paperclipper asserts x, then the right question to ask yourself is not “What should I do, given x?”, but “Why does the paperclipper want me to believe x?” The most general answer, by definition, will be something like “Because the paperclipper is executing an elaborate plan to convert the universe into paperclips, and it believes that my believing x will further that goal to some small or large degree”, which is at best orthogonal to “Because x is true”, probably even anticorrelated with it, and almost certainly anticorrelated with “Because believing x will further my goals” if you are a human.)
Or “Why does the paperclipper want me to believe it wants me to believe x?”, or something with a couple extra layers of recursion.
Or, to flatten the recursion out, “Why did the paperclipper assert x?”.
(Tangential cognitive silly time: I notice that I feel literally racist saying things like this around Clippy.)
Hmmm, I’ve read his entire posting history, and that’s not the impression I got. I could be wrong, though.
Psst! What makes you so sure Clippy isn’t, um, Kevin? I mean, asking that Kevin be given the money as compensation? That’s pretty much an admission right there...
That’s a possibility… but if you read through the whole history of Clippy, his asking for compensation to be sent to Kevin doesn’t really imply that—he’s asking for that because Kevin had promised to buy a bunch of paperclips in exchange for the money (or something like that), and so Clippy is trying to get Kevin paid via an intermediary… for the sake of the paperclips.
So it could be Kevin playing both sides, but I don’t think that fact alone points in that direction.
I am not Clippy, though of course your knowledge with regards to that statement is incomplete compared to my knowledge...
Kevin—this sort of weak, easily-faked signal is exactly the sort of thing you were trying to deal with in writing this post!
What kind of special insight do you have regarding Clippy? I think this is a very important issue, because before we hand over such responsibilities to an alien being, we need to know as much as we can about Clippy’s motives and capabilities.
I don’t have any real special insight regarding Clippy.
I think there is practically zero chance that humanity makes Clippy its decision-theoretic enforcer, and that this post is somewhat out of touch with pragmatic human values as they exist right now.
Oh, sorry, I assumed you were saying that you had some knowledge of who or what Clippy is. But obviously you’re in a position to see that you yourself are not Clippy. I feel terribly silly now.
No need to feel silly, as I wrote that sentence I thought that it might be confusing.