Had you seen the researcher explanation for the March 2022 “AI suggested 40,000 new possible chemical weapons in just six hours” paper? I quote (paywall):

Our drug discovery company received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused.
Risk of misuse
The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life. Even our projects on Ebola and neurotoxins, which could have sparked thoughts about the potential negative implications of our machine learning models, had not set our alarm bells ringing.
Our company—Collaborations Pharmaceuticals, Inc.—had recently published computational machine learning models for toxicity prediction in different areas, and, in developing our presentation to the Spiez meeting, we opted to explore how AI could be used to design toxic molecules. It was a thought exercise we had not considered before that ultimately evolved into a computational proof of concept for making biochemical weapons.
This seems like one of the most tractable things to address to reduce AI risk.
If 5 years from now anyone developing AI or biotechnology is still not thinking (early and seriously) about ways their work could cause harm that other people have been talking about for years, I think we should consider ourselves to have failed.
To add some more emphasis to my point, because I think it deserves more emphasis:

Quoting the interview Jacy linked to:

Your paper also says that “[w]ithout being overly alarmist, this should serve as a wake-up call for our colleagues” — what is it that you want your colleagues to wake up to? And what do you think that being overly alarmist would look like?
We just want more researchers to acknowledge and be aware of potential misuse. When you start working in the chemistry space, you do get informed about misuse of chemistry, and you’re sort of responsible for making sure you avoid that as much as possible. In machine learning, there’s nothing of the sort. There’s no guidance on misuse of the technology.
I know I’m not saying anything new here, and I’m merely a layperson without the ability to verify the truth of the claim I highlighted in bold above, but I do want to emphasize it further:
It seems clear that changing the machine learning space so that it is like the chemistry space, in the sense that you do get informed about the ways machine learning can be misused and cause harm, is something we should all push to make happen as soon as we can. (We should also expand the discussion of potential harm beyond harm caused by misuse to any harm related to the technology.)
Years ago I recall hearing Stuart Russell make the analogy that civil engineers don’t have a separate field for bridge safety; rather, bridge safety is something all bridge designers are educated on and concerned about. He similarly doesn’t want the field of AI safety to be separate from AI, but wants everyone working on AI to be educated about and concerned with the risks from AI.
That is the same point I’m making here, and I’m saying it again because the present machine learning space still seems far from that point, and we as a community really do need to devote more effort to changing this in the near future.