(10-hour time zone lags make conversations like this hard.)
My claim is not that this is certainly bad and should not have been said. My claim is that there is a reasonable chance it could be bad, and that for that reason alone it should have been checked with people and discussed before being posted.
I also claim that the post is incorrect on its merits in several places, as I have responded elsewhere in the thread. BUT, as Bostrom notes in his paper, which people really need to read, infohazards aren't a problem because they are false; they are a problem because they are damaging. So if I thought this post were entirely on point with its criticisms, I would have been far more muted in my response, but I would still have bemoaned the lack of judgement in not bothering to talk to people before posting it. In that case, I might have agreed that while the infohazard concerns were real, they were outweighed by truth-seeking norms on LW. I'm not claiming that we need censorship of claims here, but we do need standards, and those standards should certainly include expecting people to carefully vet potential infohazards and avoid unilateralist's curse issues before posting.
I want to be clear with you about my thoughts on this, David. I've spent many hundreds of hours thinking about information hazards, publication norms, and how to avoid unilateralist action, and I regularly use those principles explicitly in decision-making. I've spent quite some time thinking about how to redesign LessWrong to allow for private discussion and vetting of content that might, for example, share insights that lead to advances in AI capabilities. But given all of that, on reflection, I still completely disagree that this post should be deleted or that the authors were taking worrying unilateralist action, and I am happy to spend 10+ hours conversing with you about this.
Let me give my thoughts on the issue of infohazards.
I am honestly not sure what work you think the term is doing in this situation, so I'll recap what it means for everyone following. Historically, there has been a notion that all science is fundamentally good, that all knowledge is good, and that science need not ask ethical questions of its exploration. Much of Bostrom's career has been spent drawing the boundaries of this idea and showing where it is false. For example, one can build technologies that a civilization is not wise enough to use correctly, technologies that lead to the degradation of society and even extinction (you and I are both building our lives around increasing the wisdom of society so that we don't go extinct). Bostrom's infohazards paper is a philosophical exercise, asking at every level of organisation what kinds of information can hurt you. The paper itself has no conclusion and ends with an exhortation toward freedom of speech; its point is simply to help you conceptualise this kind of thing and notice it in different domains, so that you can see the tradeoff and weigh it properly in your decision-making.
So, calling something an infohazard merely means that it's damaging information. An argument with a false conclusion is an infohazard, because it might cause people to believe that false conclusion. Publishing private information is an infohazard, because it lets adversaries attack you better, yet we still often publish infohazardous private material because it contributes to the common good (e.g. listing your home address on public Facebook events makes it easier for people to burgle your house, but it's worth it to let friends find you). Now, the one kind of infohazard on which there is consensus in the part of the x-risk community that focuses on biosecurity is sharing specific technological designs for pathogens that could kill masses of people, or sharing information about system weaknesses that are presently open to attack by adversaries (for obvious reasons I won't give examples, but Davis Kingsley helpfully published an example that is no longer true in this post, if anyone is interested). I assume this is what you are talking about, as I know of no other infohazard in the bio-x-risk space where there is a consensus that one should take great pains to silence it and punish defectors.
The main reason Bostrom's paper is brought up in biosecurity is in the context of arguing that specific technological designs for various pathogens or damaging systems shouldn't be published or sketched out in great detail. Just as Churchill was shocked by Niels Bohr's plea to share nuclear designs with the Russians on the grounds that it would lead to the end of all war (Churchill said no and wondered whether Bohr was a Russian spy), it may be possible to design buildable pathogens that terrorists or warring states could use to hurt a great many people or potentially cause an existential catastrophe. So it would be wise to (a) have careful publication practices that include the option of not publishing the details of such biological systems and (b) not publicise how to discover such information.
Bostrom has staked a lot of his reputation on this being a worrying problem that you need to understand carefully. If someone on LessWrong were sharing, say, their best guess at how to design and build a pathogen that could kill 1%, 10%, or possibly 100% of the world's population, I would be in quite strong agreement that, as an admin of the site, I should preliminarily move the post back into their drafts, talk with the person, encourage them to think carefully about this, and connect them to people I know who've thought about this. I can imagine that the person might have reasonable disagreements. But if it seemed like the person was actively indifferent to the idea that it might cause damage, then while I can't stop them writing anywhere else on the internet, LessWrong has very good SEO and I don't want that material to be widely accessible, so it could easily be the right call to remove content of this type from LessWrong. This seems sensible for the case of people posting mechanistic discussion of how to build pathogens that could kill 1%+ of the population.
Now, you're asking whether we should treat criticism of governmental institutions during a time of crisis in the same category as someone posting pathogen designs or speculating on how to build pathogens that could kill 100 million people. We are discussing something very different, which comes with a fairly different set of intuitions.
Is there an argument here that is as strong as the argument that sharing pathogen designs can lead to an existential catastrophe? Let me list some reasons why this action is in fact quite useful.
Helping people inform themselves about the virus. As I write this message, I'm in a house meeting attempting to estimate the number of people in my area with the disease, and what levels of quarantine we need to be at and when we need to do other things (e.g. can we go to the grocery store, can we accept Amazon packages, can we use Uber, etc.). We're trying to use various pieces of advice from places like the CDC and the WHO, and it's helpful to know when I can simply trust them to have done their homework versus treating them as helpful but re-doing their thinking with my own first-principles models in some detail. (A rough sketch of the kind of estimate we're attempting follows this list.)
Helping necessary institutional change happen. The coronavirus is not likely to be an existential catastrophe. I expect it will kill over 1 million people, but it is exceedingly unlikely to kill a couple of percent of the population, even given hospital overflow and failures of countries to quarantine. From that perspective this isn't the last hurrah, so a naive maxipok utilitarian calculus would say it is more important to improve the CDC for future existential biorisks than to make sure not to hinder it in any way today. Standard policy advice is that things get done quickly in a crisis, and creating public, common knowledge of the severe inadequacies of our current institutions right now, not ten years later when someone writes a historical analysis, is what makes improvements and changes most likely to happen. I want the CDC to be better than this when it comes to future bio-x-risks, and now is a good time to state very publicly and very clearly what it is failing at.
Protecting open, scientific discourse. I’m always skeptical of advice to not publicly criticise powerful organisations because it might cause them to lose power. I always feel like, if their continued existence and power is threatened by honest and open discourse… then it’s weird to think that it’s me who’s defecting on them when I speak openly and honestly about them. I really don’t know what deal they thought they could make with me where I would silence myself (and every other free-thinking person who notices these things?). I’m afraid that was not a deal that was on offer, and they’re picking the wrong side. Open and honest discourse is always controversial and always necessary for a scientifically healthy culture.
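As promised in the first item above, here is a minimal sketch of the kind of back-of-the-envelope estimate our house meeting was attempting: inferring rough true local infections from confirmed case counts. All parameter values (detection ratio, reporting lag, doubling time) are illustrative assumptions, not epidemiological claims.

```python
# A rough back-of-the-envelope sketch of the estimate described above.
# All parameter values are illustrative assumptions, not epidemiological claims.

def estimate_local_infections(confirmed_cases, detection_ratio=0.1,
                              reporting_lag_days=7, doubling_time_days=5):
    """Estimate current true infections in an area from confirmed case counts.

    confirmed_cases: officially confirmed cases in the area today
    detection_ratio: assumed fraction of true infections that get confirmed
    reporting_lag_days: assumed delay between infection and confirmation
    doubling_time_days: assumed epidemic doubling time
    """
    # Confirmed cases reflect infections from roughly reporting_lag_days ago.
    infections_at_lag = confirmed_cases / detection_ratio
    # Scale forward by the growth that occurred during the reporting lag.
    growth_factor = 2 ** (reporting_lag_days / doubling_time_days)
    return infections_at_lag * growth_factor


if __name__ == "__main__":
    # e.g. 50 confirmed cases in the county today
    print(round(estimate_local_infections(50)))  # ~1320 under these assumptions
```

The point is not the specific numbers but the structure: whichever official figures you start from, you still have to supply your own assumptions about under-detection and growth, which is exactly why it matters whether you can trust the institutions publishing the inputs.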
So the counterargument needs to be that there is a sufficiently strong downside here. Importantly, the case in which Bostrom argues that information should be hidden and kept secret is when sharing it might lead to an existential catastrophe.
Could criticising the government here lead to an existential catastrophe?
I don't know your position, but I'll try to paint a picture; let me know if this sounds right. I think you think that something like the following is a possibility. This post, or a successor like it, goes viral (virus-based wordplay unintended) on Twitter, leading to a consensus that the CDC is incompetent. Later on, the CDC recommends mass quarantine in the US, and the population follows the letter but not the spirit of the recommendation, meaning that many people break quarantine and die.
So that’s a severe outcome. But it isn’t an existential catastrophe.
(Is the coronavirus itself an existential catastrophe? As I said above, this doesn't seem to be the case to me. Its death rate seems to be around 2% when people are given proper medical treatment (respirators and the like), so given hospital overload it will likely be higher, perhaps 3-20% (depending on the age distribution of the population). My understanding is that it will likely infect at most around 70% of any given highly connected population, and it's worth remembering that much of humanity is spread out and not based in cities where people see each other all of the time. (A rough sketch of this arithmetic follows this parenthetical.)
I think the main world in which this is an existential catastrophe is one where getting the disease does not confer immunity after you recover. That would mean a constant cycle of the disease through the whole population, without our being able to develop a vaccine. In that world, things are quite bad, and I'm not really sure what we'd do. That quickly moves me from "The next 12 months will see a lot of death, I'm probably going to be personally quarantined for 3-5 months, and I will do work to ensure the rationality community and my family are safe and secure" to "This is the sole focus of my attention for the foreseeable future."
Importantly, I don’t really see any clear argument for which way criticism of the CDC plays out in this world.)
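To make the figures in the parenthetical above concrete, here is a minimal sketch of the scenario arithmetic, using the rough rates quoted there (roughly 2% fatality with proper treatment versus something like 20% under severe hospital overload, and an assumed ceiling of about 70% of a highly connected population infected). These numbers are illustrative, not forecasts.

```python
# A minimal sketch of the scenario arithmetic in the parenthetical above.
# The rates are the rough figures quoted there; they are illustrative, not forecasts.

def fatal_fraction(attack_rate, fatality_rate):
    """Fraction of a population that dies if attack_rate of it is infected
    and fatality_rate of those infected die."""
    return attack_rate * fatality_rate


# Quoted upper-bound attack rate for a highly connected population.
attack_rate = 0.70

scenarios = {
    "proper medical treatment (~2% fatality)": 0.02,
    "severe hospital overload (~20% fatality)": 0.20,
}

for label, ifr in scenarios.items():
    print(f"{label}: {fatal_fraction(attack_rate, ifr):.1%} of that population")

# The world-average figure would be lower, since much of humanity is not in
# densely connected populations and would see a lower attack rate.
```

Even the worst of these scenarios is catastrophic rather than existential, which is the distinction the argument above turns on.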
And I know there are real stakes here. Even though you need to go against CDC recommendations today and stockpile, in the future the CDC will hopefully be encouraging mass quarantine, and if people ignore that advice then a fraction of them will die. But there are always life-and-death stakes to speaking honestly about the failures of important institutions. Early GiveWell faced exactly this situation when criticising charities that save lives in developing countries. One can argue that such criticism kills people by reducing funding for those charities. But it was worth it a million times over, because we've coordinated around far more effective charities and saved far more lives. We need to discuss governmental failure here in order to save more lives in the future.
(Can I imagine taking down content about the coronavirus? Hm, I thought about it for a bit, and I can imagine that if a country were under mass quarantine and people were writing articles with advice on how to escape quarantine and meet people, that would be something we'd take down. There's an example. But criticising the government? That's something like a fundamental human right, and not because it would be inconvenient to remove, but because it's the only way to build public trust. It makes no sense to me to silence it.)
The reason we mustn't silence discussion when we think the consequences are bad is that the truth is powerful and has surprising consequences. Bostrom has argued that this principle no longer holds when an existential risk is at stake, but if you think he believes that exception applies elsewhere, let me quote the end of his paper on infohazards:
Even if our best policy is to form an unyielding commitment to unlimited freedom of thought, virtually limitless freedom of speech, an extremely wide freedom of inquiry, we should realize not only that this policy has costs but that perhaps the strongest reason for adopting such an uncompromising stance would itself be based on an information hazard; namely, norm hazard: the risk that precious yet fragile norms of truth-seeking and truthful reporting would be jeopardized if we permitted convenient exceptions in our own adherence to them or if their violation were in general too readily excused.
Footnote on Unilateralism
I don't see a reasonable argument that this was anywhere close to such a situation, such that writing this post was a dangerous unilateralist action. This isn't a situation where 95% of people think it's bad but 5% think it's good.
If you want to know whether we’ve lifted the unilateralist’s curse here on LessWrong, you need look no further than the Petrov Day event that we ran, and see what the outcome was. That was indeed my attempt to help LessWrong practise and self-signal that we don’t take unilateralist action. But this case is neither an x-risk infohazard nor worrisome unilateralist action. It’s just two people doing their part in helping us draw an accurate map of the territory.
Have you considered whether your criticism itself may have been a damaging infohazard (e.g. in causing people to wrongly place trust in the CDC and thereby dying, in negatively reinforcing coronavirus model-building, in increasing the salience of the “infohazard” concept which can easily be used to illegitimately maintain a state of disinformation, in reinforcing authoritarianism in the US)? How many people did you consult before posting it? How carefully did you vet it?
If you don’t think the reasons I mentioned are good reasons to strongly vet it before posting, why not?
I have discussed the exact issue of public trust in institutions during pandemics with experts in this area repeatedly in the past.
There are risks in increasing the salience of infohazards, and I've talked about this point as well. The consensus in both the biosecurity world and in EA in general is that infohazards are underappreciated relative to the ideal, and should be made more salient. I've also discussed the issues with disinformation with experts in that area, and it's very hard to claim that people in general are currently too trusting of government authority in the United States. The application to LW specifically makes me think that people here are less inclined to trust government than the general public, though that distrust is probably more justifiable. But again, the protest isn't just about die-hard LessWrongers reading the post; it's about the risks.
But aside from that, I think there is no case to be made that the criticisms I noted as off-base on the object level are infohazards. Pointing out that the CDC isn't in charge of the FDA's decision, or pointing out that the CDC distributed tests *too quickly* and had an issue which they corrected, hardly seems problematic.
The consensus in both the biosecurity world, and in EA in general, is that infohazards are underappreciated relative to the ideal, and should be made more salient.
Note that I pretty strongly disagree with this. I really wish people would talk less about infohazards, in particular when people talk about reputational risks. My sense is that a quite significant fraction of EAs share this assessment, so calling it consensus seems quite misleading.
I’ve also discussed the issues with disinformation with experts in that area, and it’s very hard to claim that people in general are currently too trusting of government authority in the United States
I also disagree with this. My sense is that on average people are far too trusting of government authority, and much less trust would probably improve things, though it obviously depends on the details of what kind of trust. Trust in the rule of law is very useful. Trust in the economic policies of the United States, or in its ability to do long-term planning, appears widespread and usually quite misplaced. I don't think your position is unreasonable to hold, but calling its negation "very hard to claim" seems wrong to me, since, again, many people I think we both trust a good amount disagree with your position.
For point one, I agree that in reputation discussions the term infohazard is probably overused, and I used it that way here. I should probably have been clearer about this in my own head, as I was incorrectly lumping different kinds of infohazards together. In retrospect I regret bringing this up rather than focusing on the fact that I think the post was misleading in a variety of ways on the object level.
For point two, I also think you are correct that there is not much consensus in some domains. When I said people are clearly not trusting enough, I should have made my claim explicitly (instead of implicitly) about public health. In economics, governance, legislation, and other areas, people are arguably too trusting overall: not obviously, but at least arguably. The other side is that most people who aren't trusting of government in those areas are far too overconfident in crazy pet theories (the gold standard, monarchy, restructuring courts, etc.) compared to what government espouses, just as they are in public health. So I'm skeptical of the argument that lower trust in general, or a stronger assumption that the government is generically probably screwing up in a given domain, would actually be helpful.
Cool, then I think we mostly agree on these points.
I do want to say that I am very grateful for your object-level contributions to this thread. I think we can probably get to a stage where we have a version of the top-level post that we are both happy with, at least in terms of its object-level claims.
Thanks for answering. It sounds like, while you have discussed general points with others, you have not vetted this particular criticism. Is there a reason you think a higher standard should be applied to the original post?
In large part, I think there needs to be a higher standard for the original post because it got so many things wrong. And at this point, I’ve discussed this specific post, and had my judgement confirmed three times by different people in this area who don’t want to be involved. But also see my response to Oliver below where I discuss where I think I was wrong.