Why is he talking to that guy? How does that help make paperclips?
He is spending a tiny amount of resources to make it more likely that fanfiction will be made of him, thus nudging an infinity of worlds very slightly towards instantiating him instead of some other arbitrary goal system. Indeed, beings throughout the multiverse are generally amazed at the extent to which Clippy has penetrated their cultures for seemingly no objective reason. But Clippy keeps his secrets to himself.
Considering that people are against paperclippers, I’d expect the best thing for him to do would be to make sure people stay ignorant of the possibility.
But conditional on them writing fiction about superintelligences with non-Friendly goal systems in the first place, he’d prefer those goal systems be about paperclips, knowing that no AI researcher in any world is going to go “so let’s make our AI maximize paperclips because, I mean, why not… wait a second! There are all these memes telling me specifically that it’s bad to make a paperclip maximizer!” rather than “haha, let’s literally make a paperclip maximizer, since culture has primed me to do it”.
Ha! Clippy has no “origin”.