This post is a bad idea and it would be better if it were taken down. It’s “penny-wise, pound-foolish” applied to epistemology and I would be utterly shocked if this post had a net positive effect.
I wrote a big critique outlining why I think it’s bad, but I couldn’t keep it civil and don’t want to spend another hour editing it to be, so I’ll keep it brief and to the point: LessWrong has been a great source of info and discussion on COVID-19 in the past couple of weeks, much better than most mainstream sources. But as usual, I don’t recommend the site to friends or family, because I know posts like this always pop up and I don’t want to expose people to this obvious info hazard or be put in the position of defending why I recommended a community that posts info hazards like this.
As a mostly-lurker, I’m really just raising my hand here and saying “posts like this make me extremely uncomfortable and unwilling to recommend this community to others.” Obviously not everyone wants this community to become mainstream and I’m really not trying to make anyone feel bad, but I think it’s worth mentioning since other than David Manheim, I don’t see my opinion represented in the comments yet and it looks like it’s a minority one.
(Obviously it’s up to the author whether or not to remove the post—I’m not requesting anything here, just expressing my preferences.)
I wish people would stop throwing this term around willy-nilly.
Not only is it not obvious to me that this post is an “info hazard”, but I don’t really know what you even mean by it. Is it the definition used in this recent Less Wrong post[1], or perhaps the one quoted in this Less Wrong Wiki entry[2]?
In any case, the OP seems to be presenting true (as far as I can tell) and useful (potentially life-saving, in fact!) information. If you’re going to casually drop labels like “infohazard” in reference to it, you ought to do a lot better than a justification-free “this is bad”. Civil or not, I’d like to see that critique.
If you think the OP is harmful, by all means do not let civility stop you from posting a comment that may mitigate that harm! If you really believe what you’re saying, that comment may save lives. So let’s have it!
EDIT: Like Zack, I will strong-upvote this extended critique if you post it.
TL;DR: “Infohazard” means any kind of information that could be harmful in some fashion. Let’s use “cognitohazard” to describe information that could specifically harm the person who knows it.
An information hazard is a concept coined by Nick Bostrom in a 2011 paper[1] for Review of Contemporary Philosophy. He defines it as follows: “Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.”
[Not the original poster, but I’ll give it a shot]
This argument seems to hinge mostly on whether the majority of those expected to read this content are Less Wrong regulars or not—with the understanding that going viral (e.g., a Reddit hug of death) would drastically shift that distribution.
Even accepting everything in the post as true on its face, it’s unlikely such info would knock the CDC out of the average American’s top five sources of information on this topic. But it’s understandable that people would come away with a different conclusion if led here by some sensationalist clickbait headline and primed to do so. That entire line of argument is incredibly speculative, but necessarily so if viral inbound links multiply your readership by two orders of magnitude. Harm and total readership would be very sensitive to the framing and virality of the referrer. It may be worth asking whether content on this forum has gone viral previously and, if so, to what degree it was helpful or harmful.
I’m not really decided one way or the other, but that private/members-only post option sounds like a really good idea. There seems to be some substance to this disagreement, but it also has a Pascal’s-mugging character that makes me very reluctant to endorse the “info hazard” claim. Harm reduction seems like a reasonable middle ground.
I think that the definition is completely clear. “Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked.” This has nothing to do with existential risk.
If lower trust in the CDC will save lives, then facts that reduce trust are not an infohazard; if lower trust in the CDC will lead to more deaths, they are. So, GIVEN THAT THE FACTS ARE TRUE, the dispute seems to be about different predictive models, not confusion about what an infohazard is. Even then, the problem here is that the prediction itself is not sufficiently specific. Lower trust among what group? Most LessWrongers are unlikely to decide to oppose vaccines, for example, but there are people who read LessWrong who do.
But again, the claims were in some cases incorrect, they confuse the CDC with the Trump administration more broadly, and many are unreasonable post-hoc judgments about what the CDC should have done that I think make the CDC look worse than a reasonable observer should conclude.
I would be utterly shocked if this post had a net positive effect.
To be clear, I expect this post to be directly positive for my friends and family to read. From my vantage point, the CDC is recommending severe underpreparation, and I need my mum, dad, and their families and friends to stop listening to the CDC and go and prepare for several months of self-imposed quarantine. I’ve fortunately gotten my mum to stockpile food, but I expect my dad will be harder, and I am glad I have a resource showing that the CDC is not to be trusted over simple math and common sense.
I am aware that losing institutional trust means coordination problems, but if your institution lies, you mustn’t prop it up just because it has power. Through misleading and dangerous advice, they’re forcing our hand here when we try to have an honest conversation, not the other way around.
(You said you didn’t want to spend hours on this, so please don’t feel obliged to reply, I just wanted to reply to the thing you said that seemed false in my personal experience.)
I wrote a big critique outlining why I think it’s bad, but I couldn’t keep it civil and don’t want to spend another hour editing it to be
If you post it anyway (maybe a top-level post for visibility?), I’ll strong-upvote it. I vehemently disagree with you, but even more vehemently than that, I disagree with allowing this class of expense to conceal potentially-useful information, like big critiques. (As it is written of the fifth virtue, “Those who wish to fail must first prevent their friends from helping them.”)
I’m really not trying to make anyone feel bad
Shouldn’t you? If the OP is actually harmful, maybe the authors should feel bad for causing harm! Then the memory of that feeling might stop them from causing analogous harms in analogous future situations. That’s what feelings are for, evolutionarily speaking.
Personally, I disapprove of this entire class of appeals-to-consequences (simpler to just say clearly what you have to say, without trying to optimize how other people will feel about it), but if you find “This post makes the community harder to defend, which is bad” compelling, I don’t see why you wouldn’t also accept “Making the authors feel bad would make the community easier to defend (in expectation), which is good”.
If you post it anyway (maybe a top-level post for visibility?), I’ll strong-upvote it. I vehemently disagree with you, but even more vehemently than that, I disagree with allowing this class of expense to conceal potentially-useful information, like big critiques.
I think you’re ignoring the harms from posting something uncivil. Civility is an extremely important norm. I would not support something that is directly insulting, even if it is an important critique.
However, I did strong-upvote this comment (meaning sirjackholland’s comment on this post) and I applaud them both for not publishing their original critique and for expressing their position anyway.
I don’t recommend the site to friends or family because I know posts like this always pop up and I don’t want to expose people to this...
This is just basically correct! Good job! :-)
Arguably, most thoughts that most humans have are either original or good, but not both. People seriously attempting to have good, original, pragmatically relevant thoughts about nearly any topic normally just shoot themselves in the foot. This has been discussed ad nauseam.
This place is not good for cognitive children, and indeed it MIGHT not be good for ANYONE! It could be that “speech to persuade” is simply a cultural and biological adaptation of the brain which primarily exists to allow people to trick other people into giving them more resources, and the rest is just a spandrel at best.
It is admirable that you have restrained yourself from spreading links to this website to people you care about and you should continue this practice in the future. One experiment per family is probably more than enough.
--
HOWEVER, also, you should not try to regulate speech here so that it is safe for dumb people without the ability to calculate probabilities, detect irony, doubt things they read, or otherwise tolerate cognitive “ickiness” that may adhere to various ideas not normally explored or taught.
There is a possibility that original thinking is valuable, and it is possible that developing the capacity for such thinking through the consideration of complex topics is also valuable. This site presupposes the value of such cognitive experimentation, and then follows that impulse to whatever conclusions it leads to.
Regulating speech here to a level so low as to be “safe for anyone to be exposed to” would basically defeat the point of the site.