Great to see this studied systematically—it updated me in some ways.
Given that the study measures how likeable, agreeable, and informative people found each article, regardless of the topic, could it be that the study measures something different from “how effective was this article at convincing the reader to take AI risk seriously”? In fact, it seems the contest could have been won by an article that isn’t about AI risk at all. The top-rated article (Steinhardt’s blog series) spends little time explaining AI risk: mostly just (part of) the last of four posts. The main point of the series seems to be that ‘More Is Different for AI’, which is presumably less controversial than focusing on AI risk directly, but not necessarily effective at explaining it.