I wish people would stop throwing this term around willy-nilly.
Not only is it not obvious to me that this post is an “info hazard”, but I don’t really know what you even mean by it. Is it the definition used in this recent Less Wrong post[1], or perhaps the one quoted in this Less Wrong Wiki entry[2]?
In any case, the OP seems to be presenting true (as far as I can tell) and useful (potentially life-saving, in fact!) information. If you’re going to casually drop labels like “infohazard” in reference to it, you ought to do a lot better than a justification-free “this is bad”. Civil or not, I’d like to see that critique.
If you think the OP is harmful, by all means do not let civility stop you from posting a comment that may mitigate that harm! If you really believe what you’re saying, that comment may save lives. So let’s have it!
EDIT: Like Zack, I will strong-upvote this extended critique if you post it.
TL;DR: “Infohazard” means any kind of information that could be harmful in some fashion. Let’s use “cognitohazard” to describe information that could specifically harm the person who knows it.
An information hazard is a concept coined by Nick Bostrom in a 2011 paper[1] for Review of Contemporary Philosophy. He defines it as follows: “Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.”
[Not the original poster, but I’ll give it a shot]
This argument seems to hinge mostly on whether the majority of those expected to read this content end up being Less Wrong regulars or not—with the understanding that going viral (e.g., a Reddit hug of death) would drastically shift that distribution.
Even accepting everything in the post as true on its face, it’s unlikely such info would take the CDC out of the top 5 sources of information on this for the average American, but it’s understandable that people would come away with a different conclusion if led here by some sensationalist clickbait headline and primed to do so. That entire line of argument is incredibly speculative, but necessarily so if viral inbound links push readership up two orders of magnitude. Harm and total readership would be very sensitive to the framing and virality of the referrer. It’s maybe relevant to ask whether content on this forum has gone viral previously and, if so, to what degree it was helpful or harmful.
I’m not really decided one way or the other, but that private/members-only post option sounds like a really good idea. It sounds like there’s some substance to this disagreement, but it also has a Pascal’s mugging character to it that makes me very reluctant to endorse the “info hazard” claim. Harm reduction seems like a reasonable middle ground.
I think that the definition is completely clear. “Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked.” This has nothing to do with existential risk.
If lower trust in the CDC will save lives, facts that reduce trust are not an infohazard, and if lower trust in the CDC will lead to more deaths, they are. So, GIVEN THAT THE FACTS ARE TRUE, the dispute seems to be about different predictive models, not confusion about what an infohazard is. Even then, the problem here is that the prediction itself is not sufficiently specific. Lower trust among which group, for example? Most Lesswrongers are unlikely to decide to oppose vaccines, but there are people who read Lesswrong who do.
But again, some of the claims were incorrect, some conflate the CDC with the Trump administration more broadly, and many are unreasonable post-hoc judgments about what the CDC should have done, which I think make the CDC look worse than a reasonable observer would conclude.
+1