Can you articulate the nature of your confusion?
I suppose it’s that I naively expect, when opening the list of top LW posts ever, to see ones containing the most impressive or clever insights into rationality.
Not that I don’t think Holden’s post deserves a high score for other reasons. While I am not terribly impressed with his AI-related arguments, the post exemplifies the very highest standards of conduct: it shows how to voice a disagreement politely, going far beyond what is usually called “constructive”.
(nods) Makes sense.
My own primary inference from the popularity of this post is that there’s a lot of uncertainty and disagreement within the community about the idea that creating an AGI without an explicit (and properly tuned) moral structure constitutes a significant existential risk, but that the community’s social dynamics cause most of that disagreement to go unvoiced most of the time.
Of course, there’s lots of other stuff going on as well that has little to do with AGI or existential risk, and a lot to do with the social dynamics of the community itself.
Maybe. I upvoted it because it will have (and has had) the effect of improving SI’s chances.
Some people who upvoted the post may think it is one of the best-written and most important examples of instrumental rationality on this site.