Thanks for writing this. How best to manage hazardous information is fraught, and although I have some work in draft and under review, much remains unclear—as you say, almost anything could have some downside risk, and never discussing anything seems a poor approach.
Yet I strongly disagree with the conclusion that the default should be to discuss potentially hazardous (but non-technical) information publicly, and I think your proposals for how to manage these dangers (e.g. talk to one scientist first) generally err on the lax side. I provide the substance of this disagreement in a child comment.
I’d strongly endorse a heuristic along the lines of, “Try to avoid coming up with (and don’t publish) things which are novel and potentially dangerous”, with the standard of novelty being a relatively uninformed bad actor rather than an expert (e.g. highlighting/elaborating something dangerous which can be found buried in the scientific literature should be avoided).
This expressly includes more general information as well as particular technical points (e.g. “No one seems to be talking about technology X, but here’s why it has really dangerous misuse potential” would ‘count’, even if a particular ‘worked example’ wasn’t included).
I agree it would be good to have direct channels of communication for people considering things like this to get advice on whether projects they have in mind are wise to pursue, and to communicate concerns they have without feeling they need to resort to internet broadcast (cf. Jan Kulveit’s remark).
To these ends, people with concerns/questions of this nature are warmly welcomed and encouraged to contact me to arrange further discussion.
Thanks for this and the subsequent comment, which helped me update my views on the problem and become even more cautious in discussing things.
Some thoughts came to mind while reading; I may have more later:
1. It looks like all the talk about infohazards could be boiled down to just one thesis: “biorisk is a much more serious x-risk than AI safety, but we decided not to acknowledge it, as doing so could be harmful”.
2. Almost all work in AI safety is based on “red-teaming”: someone comes up with an idea X for how to make AI safe, and EY appears and says “Actually, this will spectacularly fail because...”. However, the fact that a future AI may read that thread of comments and act according to the red-team advice is not considered, because the AI is assumed to be superintelligent and able to come up with all our ideas from scratch.
3. The idea of infohazards rests on an assumed intellectual advantage of “EA people” over “bad people”: even an armchair futurist can come up with a dozen ideas for how to destroy the world, while the professional scientists of some rogue country sit completely clueless and have to go to obscure forums in search of inspiration. From the outside, this could look like arrogance. But it could also be interpreted as meaning that we in fact live in a world where it is very easy to come up with plausible ways of destroying it, which supports the idea of infohazard oversaturation.
4. People who study x-risks are among the most dangerous people in the world, as they actually know how to destroy it. Moreover, if a “bad agent” ever appears, he is more likely to be some deranged LW commenter than a North Korean officer.
0: We agree potentially hazardous information should only be disclosed (or potentially discovered) when the benefits of disclosure (or discovery) outweigh the downsides. Heuristics can make principles concrete, and a rule of thumb I try to follow is to have a clear objective in mind for gathering or disclosing such information (being wary of vague justifications like ‘improving background knowledge’ or ‘better epistemic commons’), and to incur the least possible information hazard in achieving it.
A further heuristic which seems right to me is that one should disclose information in the way that maximally disadvantages bad actors versus good ones. There is a wide spectrum of approaches lying between ‘try to forget about it’ and ‘broadcast publicly’, and I think one of the intermediate options is often best.
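(To illustrate the spirit of this heuristic, here is a toy sketch only: the channels, probabilities, and payoffs below are all invented for illustration, not estimates of anything real. It simply scores each disclosure option by how much it is expected to help defenders relative to attackers.)

```python
# Toy illustration only: the channels and numbers are made up, not estimates.
# Each option is scored by (expected benefit to good actors) minus
# (expected benefit to bad actors), to capture the idea of "maximally
# disadvantage bad actors versus good ones".

channels = {
    # name: (p_reaches_defenders, defender_benefit, p_reaches_attackers, attacker_benefit)
    "keep private":        (0.0, 10.0, 0.00, 8.0),
    "targeted disclosure": (0.9, 10.0, 0.05, 8.0),
    "public broadcast":    (1.0, 10.0, 1.00, 8.0),
}

def score(p_def, b_def, p_att, b_att):
    """Expected help to defenders minus expected help to attackers."""
    return p_def * b_def - p_att * b_att

for name, params in channels.items():
    print(f"{name:>20}: {score(*params):+.2f}")

# With these made-up numbers, targeted disclosure beats both extremes:
# it captures most of the defensive benefit while giving attackers little.
```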
1: I disagree with many of the considerations which push towards more open disclosure and discussion.
1.1: I don’t think we should be confident there is little downside in disclosing dangers a sophisticated bad actor would likely rediscover themselves. Not all plausible bad actors are sophisticated: a typical criminal or terrorist is no mastermind, and so may not hit upon insights that are (to us) relatively straightforward, but could still ‘pick them up’ from elsewhere.
1.2: Although I’m a big fan of epistemic modesty (and generally a detractor of ‘EA exceptionalism’), EAs do have an impressive track record of coming up with novel and important ideas. So there is some chance of coming up with something novel and dangerous even without exceptional effort.
1.3: I emphatically disagree that we are at ‘infohazard saturation’, where the situation re. infohazards ‘can’t get any worse’. I also find it hard to imagine ever being confident enough in this claim to base strategy upon its assumption (cf. eukaryote’s comment).
1.4: There are some benefits to getting out ‘in front’ of more reckless disclosure by someone else. Yet in cases where one wouldn’t want to disclose it oneself, delaying the downsides of wide disclosure as long as possible usually seems more important, and so weighs against bringing this delay to an end by disclosing yourself, save in the (rare) cases where one knows disclosure is imminent rather than merely possible.
2: I don’t think there’s a neat distinction between ‘technical dangerous information’ and ‘broader ideas about possible risks’, with the latter being generally safe to publicise and discuss.
2.1: It seems easy to imagine cases where the general idea comprises most of the danger. The conceptual step to a ‘key insight’ of how something could be dangerously misused ‘in principle’ might be much harder to make than subsequent steps from this insight to realising this danger ‘in practice’. In such cases the insight is the key bottleneck for bad actors traversing the risk pipeline, and so comprises a major information hazard.
2.2: For similar reasons, highlighting a neglected-by-public-discussion part of the risk landscape where one suspects information hazards lie has a considerable downside, as increased attention could prompt investigation which brings these currently dormant hazards to light.
3: Even if I take the downside risks to be weightier than you do, one still needs to weigh these against the benefits. I take ‘general (or public) disclosure’ to have little marginal benefit over more limited disclosure targeted to key stakeholders. As the latter approach greatly reduces the downside risks, it is usually the better strategy by the lights of cost/benefit. At least trying targeted disclosure first seems a robustly better strategy than skipping straight to public discussion (cf.).
3.1: In bio (and I think elsewhere) the set of people relevant to setting strategy and otherwise contributing to reducing a given risk is usually small and known (e.g. particular academics, parts of the government, civil society, and so on). A particular scientist unwittingly performing research with misuse potential might need to know the risks of their work (likewise some relevant policy and security stakeholders), but the added upside of illustrating these risks in the scientific literature is limited (and the added downsides much greater). The upside of discussing them in the popular/generalist literature (including EA literature not narrowly targeted at those working on biorisk) is limited still further.
3.2: Information also informs decisions around how to weigh causes relative to one another. Yet less-hazardous information (e.g. the basic motivation given here or here, and you could throw in social epistemic steers from the prevailing views of EA ‘cognoscenti’) is sufficient for most decisions and decision-makers. The cases where this nonetheless might be ‘worth it’ (e.g. you are a decision maker allocating a large pool of human or monetary capital between cause areas) are few and so targeted disclosure (similar to 3.1 above) looks better.
3.3: Beyond the direct cost of potentially giving bad actors good ideas, the benefits of more public discussion may not be very high. There are many ways public discussion could be counter-productive (e.g. alarmism, ill-advised remarks poisoning our relationship with scientific groups, etc.). I’d suggest the examples of cryonics, AI safety, GMOs and other lowlights of public communication of policy and science are relevant cautionary examples.
4: I also want to supply other, more general considerations which point towards a very high degree of caution:
4.1: In addition to the considerations around the unilateralist’s curse offered by Brian Wang (I have written a bit about this in the context of biotechnology here) there is also an asymmetry in the sense that it is much easier to disclose previously-secret information than make previously-disclosed information secret. The irreversibility of disclosure warrants further caution in cases of uncertainty like this.
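(For readers unfamiliar with the unilateralist’s curse dynamic, here is a minimal toy simulation, with invented parameters, of the basic mechanism: even when disclosure is net-negative, the chance that at least one well-meaning actor’s noisy estimate comes out positive, and so triggers release, climbs quickly with the number of independent deciders.)

```python
import random

# Minimal sketch of the unilateralist's curse (illustrative parameters only).
# Assume the true value of disclosing some information is negative, but each
# actor only sees a noisy estimate and discloses if their estimate is positive.
# Disclosure happens if *any* actor discloses, so more actors => more risk.

TRUE_VALUE = -1.0   # disclosure is net-negative (assumed for the example)
NOISE_SD = 2.0      # spread of each actor's honest-but-noisy judgment
TRIALS = 100_000

def p_disclosure(n_actors):
    """Probability that at least one of n well-meaning actors discloses."""
    disclosed = 0
    for _ in range(TRIALS):
        if any(random.gauss(TRUE_VALUE, NOISE_SD) > 0 for _ in range(n_actors)):
            disclosed += 1
    return disclosed / TRIALS

for n in (1, 3, 10):
    print(f"{n:>2} actors: P(disclosure) ~ {p_disclosure(n):.2f}")

# Typical output: roughly 0.31, 0.67, 0.98 -- the chance of a (net-harmful)
# unilateral disclosure rises quickly with the number of independent deciders,
# and (per the irreversibility point above) it cannot be undone afterwards.
```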
4.2: I take the examples of analogous fields to also support great caution. As you note, there is a norm in computer security of ‘don’t publicise a vulnerability until there’s a fix in place’, and of initially informing a responsible party to give them the opportunity to do this pre-publication. Applied to bio, this suggests targeted disclosure to those best placed to mitigate the information hazard, rather than public discussion in the hope of prompting a fix to be produced. (Not to mention a ‘fix’ in this area might prove much more challenging than pushing a software update.)
4.3: More distantly, adversarial work (e.g. red-teaming exercises) is usually done by professionals, with a concrete decision-relevant objective in mind, with exceptional care paid to operational security, and their results are seldom made publicly available. This is for exercises which generate information hazards for a particular group or organisation—similar or greater caution should apply to exercises that one anticipates could generate information hazardous for everyone.
4.4: Even more distantly, norms of intellectual openness are used more in some areas, and much less in others (compare the research performed in academia to security services). In areas like bio, the fact that a significant proportion of the risk arises from deliberate misuse by malicious actors means security services seem to provide the closer analogy, and ‘public/open discussion’ is seldom found desirable in these contexts.
5: In my work, I try to approach potentially hazardous areas as obliquely as possible, more along the lines of general considerations of the risk landscape or from the perspective of safety-enhancing technologies and countermeasures. I do basically no ‘red-teamy’ types of research (e.g. brainstorming the nastiest things I can think of, figuring out the ‘best’ ways of defeating existing protections, etc.).
(Concretely, this would comprise asking questions like, “How are disease surveillance systems forecast to improve over the medium term, and are there any robustly beneficial characteristics for preventing high-consequence events that can be pushed for?” or “Are there relevant limits which give insight to whether surveillance will be a key plank of the ‘next-gen biosecurity’ portfolio?”, and not things like, “What are the most effective approaches to make pathogen X maximally damaging yet minimally detectable?”)
I expect a non-professional doing more red-teamy work would generate less upside (e.g. less well networked to people who may be in a position to mitigate vulnerabilities they discover, less likely to unwittingly duplicate work) and more downside (e.g. less experience with trying to manage info-hazards well) than I. Given I think this work is usually a bad idea for me to do, I think it’s definitely a bad idea for non-professionals to try.
I therefore hope people working independently on this topic approach ‘object level’ work here with similar aversion to more ‘red-teamy’ stuff, or instead focus on improving their capital by gaining credentials/experience/etc. (this has other benefits: a lot of the best levers in biorisk are working with/alongside existing stakeholders rather than striking out on one’s own, and it’s hard to get a role without (e.g.) graduate training in a relevant field). I hope to produce a list of self-contained projects to help direct laudable ‘EA energy’ to the best ends.