0: We agree potentially hazardous information should only be disclosed (or potentially discovered) when the benefits of disclosure (or discovery) outweigh the downsides. Heuristics can make principles concrete, and a rule of thumb I try to follow is to have a clear objective in mind for gathering or disclosing such information (being wary of vague justifications like ‘improving background knowledge’ or ‘better epistemic commons’), and to incur the least possible information hazard in achieving it.
A further heuristic which seems right to me is that one should disclose information in the way that maximally disadvantages bad actors relative to good ones. There is a wide spectrum of approaches lying between ‘try to forget about it’ and ‘broadcast publicly’, and I think one of the intermediate options is often best.
1: I disagree with many of the considerations which push towards more open disclosure and discussion.
1.1: I don’t think we should be confident there is little downside in disclosing dangers a sophisticated bad actor would likely rediscover themselves. Not all plausible bad actors are sophisticated: a typical criminal or terrorist is no mastermind, and so may not arrive at insights that seem (to us) relatively straightforward, but could still ‘pick them up’ from elsewhere.
1.2: Although I am a big fan of epistemic modesty (and generally a detractor of ‘EA exceptionalism’), EAs do have an impressive track record of coming up with novel and important ideas. So there is some chance of coming up with something novel and dangerous even without exceptional effort.
1.3: I emphatically disagree that we are at ‘infohazard saturation’, where the situation re. infohazards ‘can’t get any worse’. I also cannot imagine ever being confident enough in this claim to base strategy upon it (cf. eukaryote’s comment).
1.4: There are some benefits to getting out ‘in front’ of more reckless disclosure by someone else. Yet in cases where one wouldn’t want to disclose the information oneself, delaying the downsides of wide disclosure for as long as possible usually seems more important, which weighs against bringing that delay to an end by disclosing it yourself, save in the (rare) cases where one knows disclosure is imminent rather than merely possible.
2: I don’t think there’s a neat distinction between ‘technical dangerous information’ and ‘broader ideas about possible risks’, with the latter being generally safe to publicise and discuss.
2.1: It seems easy to imagine cases where the general idea comprises most of the danger. The conceptual step to a ‘key insight’ of how something could be dangerously misused ‘in principle’ might be much harder to make than subsequent steps from this insight to realising this danger ‘in practice’. In such cases the insight is the key bottleneck for bad actors traversing the risk pipeline, and so comprises a major information hazard.
2.2: For similar reasons, highlighting a neglected-by-public-discussion part of the risk landscape where one suspects information hazards lie has a considerable downside, as increased attention could prompt investigation which brings these currently dormant hazards to light.
3: Even if I weigh the downside risks more heavily than you do, these still need to be weighed against the benefits. I take ‘general (or public) disclosure’ to have little marginal benefit over more limited disclosure targeted at key stakeholders. As the latter approach greatly reduces the downside risks, it is usually the better strategy by the lights of cost/benefit. At the least, trying targeted disclosure first seems a robustly better strategy than skipping straight to public discussion (cf.).
3.1: In bio (and I think elsewhere) the set of people relevant to setting strategy and otherwise contributing to reducing a given risk is usually small and known (e.g. particular academics, parts of government, civil society, and so on). A particular scientist unwittingly performing research with misuse potential might need to know the risks of their work (likewise some relevant policy and security stakeholders), but the added upside of illustrating these risks in the scientific literature is limited (and the added downsides much greater). The upside of discussing them in the popular/generalist literature (including EA literature not narrowly targeted at those working on biorisk) is more limited still.
3.2: Information also informs decisions about how to weigh causes relative to one another. Yet less-hazardous information (e.g. the basic motivation given here or here, and you could throw in social-epistemic steers from the prevailing views of EA ‘cognoscenti’) is sufficient for most decisions and decision-makers. The cases where more hazardous detail might nonetheless be ‘worth it’ (e.g. you are a decision-maker allocating a large pool of human or monetary capital between cause areas) are few, and so targeted disclosure (as in 3.1 above) looks better.
3.3: Beyond the direct cost of potentially giving bad actors good ideas, the benefits of more public discussion may not be very high. There are many ways public discussion could be counter-productive (e.g. alarmism, ill-advised remarks poisoning our relationship with scientific groups, etc.). Cryonics, AI safety, GMOs, and other lowlights of public communication of science and policy are relevant cautionary examples.
4: I also want to supply other, more general considerations which point towards a very high degree of caution:
4.1: In addition to the considerations around the unilateralist’s curse offered by Brian Wang (I have written a bit about this in the context of biotechnology here), there is also an asymmetry: it is much easier to disclose previously-secret information than to make previously-disclosed information secret. The irreversibility of disclosure warrants further caution in cases of uncertainty like this.
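(To illustrate the statistical shape of the unilateralist’s curse, here is a minimal Monte Carlo sketch, not from the original discussion and with purely illustrative numbers: suppose disclosing some piece of information is in fact net-negative, but each of several actors independently forms a noisy estimate of its value and discloses if their estimate comes out positive. Even if each actor is unbiased, the chance that at least one of them discloses grows quickly with the number of actors. The function and parameter names below are my own, hypothetical choices.)

```python
import random

# Toy model of the unilateralist's curse (illustrative numbers only):
# the true value of disclosure is negative, each actor sees it with
# Gaussian noise, and anyone whose estimate is positive discloses.
def p_disclosure(n_actors, true_value=-1.0, noise_sd=1.0, trials=100_000):
    """Estimate the probability that at least one actor's noisy estimate is positive."""
    count = 0
    for _ in range(trials):
        if any(true_value + random.gauss(0, noise_sd) > 0 for _ in range(n_actors)):
            count += 1
    return count / trials

if __name__ == "__main__":
    for n in (1, 5, 20):
        print(f"{n:>2} actors: chance someone discloses ≈ {p_disclosure(n):.2f}")
```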
4.2: I take the examples of analogous fields to also support great caution. As you note, there is a norm in computer security of ‘don’t publicise a vulnerability until there’s a fix in place’, with a responsible party informed first and given the opportunity to produce that fix pre-publication. Applied to bio, this suggests targeted disclosure to those best placed to mitigate the information hazard, rather than public discussion in the hope of prompting a fix. (Not to mention that a ‘fix’ in this area might prove much more challenging than pushing a software update.)
4.3: More distantly, adversarial work (e.g. red-teaming exercises) is usually done by professionals, with a concrete decision-relevant objective in mind and exceptional care paid to operational security, and the results are seldom made publicly available. This holds for exercises which generate information hazards for a particular group or organisation; similar or greater caution should apply to exercises one anticipates could generate information hazardous to everyone.
4.4: Even more distantly, norms of intellectual openness prevail in some areas and much less in others (compare research performed in academia with that of the security services). In areas like bio, where a significant proportion of the risk arises from deliberate misuse by malicious actors, the security services seem the closer analogy, and ‘public/open discussion’ is seldom found desirable in those contexts.
5: In my work, I try to approach potentially hazardous areas as obliquely as possible, more along the lines of general considerations of the risk landscape or from the perspective of safety-enhancing technologies and countermeasures. I do basically no ‘red-teamy’ research (e.g. brainstorming the nastiest things I can think of, figuring out the ‘best’ ways of defeating existing protections, etc.).
(Concretely, this would comprise asking questions like, “How are disease surveillance systems forecast to improve over the medium term, and are there any robustly beneficial characteristics for preventing high-consequence events that can be pushed for?” or “Are there relevant limits which give insight to whether surveillance will be a key plank of the ‘next-gen biosecurity’ portfolio?”, and not things like, “What are the most effective approaches to make pathogen X maximally damaging yet minimally detectable?”)
I expect a non-professional doing more red-teamy work would generate less upside (e.g. being less well networked with people in a position to mitigate any vulnerabilities they discover, and being more likely to unwittingly duplicate existing work) and more downside (e.g. less experience with trying to manage info-hazards well) than I would. Given I think this work is usually a bad idea for me to do, I think it’s definitely a bad idea for non-professionals to try.
I therefore hope people working independently on this topic approach ‘object level’ work with a similar aversion to the more ‘red-teamy’ stuff, or instead focus on building their career capital by gaining credentials/experience/etc. (this has other benefits: many of the best levers in biorisk involve working with or alongside existing stakeholders rather than striking out on one’s own, and it’s hard to get such a role without, e.g., graduate training in a relevant field). I hope to produce a list of self-contained projects to help direct laudable ‘EA energy’ to the best ends.