If many info-hazards have already been openly published, the world may be considered saturated with info-hazards, since a malevolent agent already has access to a great deal of dangerous information. In our world, where the genomes of pandemic flu strains have been openly published, it is difficult to make the situation worse.
I strongly disagree that we’re in a world of accessible easy catastrophic information right now.
This is based on a lot of background knowledge, but as a good start, Sonia Ben Ouagrham-Gormley makes a strong case that bioweapons groups historically have had very difficult times creating usable weapons even when they already have a viable pathogen. Having a flu genome online doesn’t solve any of the other problems of weapons creation. While biotechnology has certainly progressed since major historic programs, and more info and procedures of various kinds are online, I still don’t see the case for lots of highly destructive technology being easily available.
If you do not believe that we’re at that future of plenty of calamitous information easily available online, but believe we could conceivably get there, then the proposed strategy of openly discussing GCR-related infohazards is extremely dangerous, because it pushes us there even faster.
If the reader thinks we’re probably already there, I’d ask how confident they are. Getting it wrong carries a very high cost, and it’s not clear to me that having lots of infohazards publicly available is the correct response, even for moderately high certainty that we’re in “lots of GCR instruction manuals online” world. (For starters, publication has a circuitous path to positive impact at best. You have to get them to the right eyes.)
Other thoughts:
The steps for checking a possibly-dangerous idea before you put it online, including running it by multiple wise, knowledgeable people, trying to see whether it has already been discovered, and doing the analysis in a way that won't attract enormous publicity, seem like good heuristics for potentially risky ideas. Although if you think you've found something profoundly dangerous, you probably don't even want to type it into Google.
Re: dangerous-but-simple ideas being easy to find: It seems that for some reason or other, bioterrorism and bioweapons programs are very rare these days. This suggests to me that there could be a major risk in the form of inadvertently convincing non-bio malicious actors to switch to bio—by perhaps suggesting a new idea that fulfils their goals or is within their means. We as humans are in a bad place to competently judge whether ideas that are obvious to us are also obvious to everybody else. So while inferential distance is a real and important thing, I’d suggest against being blindly incautious with “obvious” ideas.
(Anyways, this isn’t to say such things shouldn’t be researched or addressed, but there’s a vast difference between “turn off your computer and never speak of this again” and “post widely in public forums; scream from the rooftops”, and many useful actions between the two.)
(Please note that all of this is my own opinion and doesn’t reflect that of my employer or sponsors.)
I think that Sonia is wrong for several reasons, but discussing them may itself be regarded as an info hazard, so I could PM you if you are interested.
Another point: a lot of people on this forum discuss the potential risks of superintelligent AI. However, such public discussion may advertise the idea of AI as an instrument of global domination. The problem was recognised by Seth Baum in one of his articles (can't find the link).
Would a world where nobody publicly discusses the problem of AI alignment be a better one? Probably not, because in that case none of EY's outreach would have happened, and not much research on AI alignment would ever have been done. In that case, the chances of creating beneficial AI would be slim.