It’s not that paperclip maximizers are unrealistic. It’s that they are not really that bad.
I’ve encountered this view a few times in the futurist crowd, but overall it seems to be pretty rare. Most people seem to think that {universe mostly full of identical paperclips} is worse than {universe full of diverse conscious entities having fun}, but it’s relatively common to think that {universe mostly full of identical paperclips} is not a likely outcome from unaligned AI.
Mostly though this seems to be a quantitative issue: if paperclips are halfway between extinction and flourishing, then paperclipping is nearly as bad and avoiding it is nearly as important.
Most people seem to think that {universe mostly full of identical paperclips} is worse than {universe full of diverse conscious entities having fun}
Yes, I think that too. You’re confusing “I’d be happy with either X or Y” with “I have no preference between X and Y”.
Mostly though this seems to be a quantitative issue: if paperclips are halfway between extinction and flourishing, then paperclipping is nearly as bad and avoiding it is nearly as important.
Most issues are quantitative. And if paperclips are 99% of the way from extinction to flourishing (whatever exactly that means), then paperclipping is pretty good.
Yes, I think that too. You’re confusing “I’d be happy with either X or Y” with “I have no preference between X and Y”
I may have misunderstood. It sounds like your comment probably isn’t relevant to the point of my post, except insofar as I describe a view which isn’t your view. I would also agree that paperclipping is better than extinction.
It sounds like your comment probably isn’t relevant to the point of my post, except insofar as I describe a view which isn’t your view.
Yes, you describe a view that isn't my view, and then use that view to criticize intuitions that are similar to my intuitions. The view you describe is making simple errors that should be easy to correct, and my view isn't. I don't really know how the group of "people who aren't too worried about paperclipping" breaks down numerically between "people who underestimate P(paperclipping)" and "people who think paperclipping is ok, even if suboptimal"; maybe the latter really is rare. But the former group should shrink with some education, and the latter might grow from it.
[Moderator note: I wrote a warning to you on another post a few days ago, so this is your second warning. The next warning will result in a temporary ban.]
Basically everything I said in my last comment still holds:
I've recently found that your comments pretty reliably end up in frustrating conversations for both parties (multiple authors and commenters have sent us PMs complaining about their interactions with you), are often downvoted, and often just feel like they're missing the point of the original article.
You are clearly putting a lot of time into commenting on LW, and I think that’s good, but I think right now it would be a lot better if you would comment less often, and try to increase the average quality of the comments you write. I think right now you are taking up a lot of bandwidth on the site, disproportionate to the quality of your contributions.
Since then, it does not seem like you significantly reduced the volume of comments you’ve been writing, and I have not perceived a significant increase in the amount of thought and effort that goes into every single one of your comments. I continue to think that you could be a great contributor to LessWrong, but also think that for that to happen, it seems necessary that you take on significantly more interpretative labor in your comments, and put more effort into being clear. It still appears that most comment exchanges that involve you cause most readers and co-commenters to feel attacked by you or misunderstand you, and quickly get frustrated.
I think it might be the correct call (though I obviously don’t know your constraints and thought-habits around commenting here) to aim to write one comment per day, instead of an average of three, with that one comment having three times as much thought and care put into it, and with particular attention towards trying to be more collaborative, instead of adversarial.