To me, and I expect to a number of other readers as well, given the upvotes on my comments and those of @subconvergence, this is a direction LessWrong really should not go in.
I think Raemon sees appeals in this post along the lines of:
Boggling at the world is good, and we don’t get as much of that as we’d like.
People reading this post may form their own takeaway, something like “man, we really should be checking more comprehensively whether dogs can talk,” and ideas like that are worth sharing.
I think there are some significant problems with this, however:
This post doesn’t really advocate boggling at the world; instead, it makes specific, strongly worded claims that there is already meaningful evidence of something.
This post does not really advocate for that takeaway, and to the extent someone thinks it does, that takeaway is certainly far from its most prominent message.
If LWers took this to heart, there are likely between 10,000 and 1,000,000 things of similar potential interest and evidentiary strength that could be shared in this way, the vast majority of which are highly unlikely to be proven out.
Furthermore, even if I’ve guessed Raemon’s motivations correctly, I think they rest on flawed reasoning for the above three reasons, and other factors make this post actively damaging:
It makes strongly worded claims that are either untestable (the assumptions about LW readership’s priors in the title) or not nearly supported by the evidence.
It employs a number of techniques that seem effective at actively misleading some readers (see bullets 2 and 3 of my previous comment; I find several more employed in the author’s comments).
This, to me, pattern matches with LW readership being much less critical than I expected, and than is warranted when confronting a post of this type, something I’ve observed in multiple prominent instances over the past ~3 months of newly close engagement. I think continued promotion of poorly calibrated articles, met with unwarranted enthusiasm rather than critical comments, will further degrade LW’s state of discourse. Even if this is the only article of its type, and I’m wrong about some of the others, I think even one instance of promoting something better suited to Buzzfeed is at least somewhat damaging.
I do think there’s a version of this post that fits my guess at its appeal (the two bullets above) while avoiding all the numbered issues I outline afterward, but this post is a far departure from that hypothetical one.