To add some more emphasis to my point, because I think it deserves more emphasis:

Quoting the interview Jacy linked to:
Your paper also says that “[w]ithout being overly alarmist, this should serve as a wake-up call for our colleagues” — what is it that you want your colleagues to wake up to? And what do you think that being overly alarmist would look like?
We just want more researchers to acknowledge and be aware of potential misuse. When you start working in the chemistry space, you do get informed about misuse of chemistry, and you’re sort of responsible for making sure you avoid that as much as possible. In machine learning, there’s nothing of the sort. There’s no guidance on misuse of the technology.
I know I’m not saying anything new here, and I’m merely a layperson without the ability to verify the truth of the claim I highlighted in bold above, but I do want to emphasize it further:
It seems clear that we should all push, as soon as we can, to change the machine learning space so that it is like the chemistry space in this sense: people entering the field are informed about the ways machine learning can be misused and cause harm. (We should also expand the discussion of potential harm beyond harm caused by misuse to any harm related to the technology.)
Years ago I recall hearing Stuart Russell mention the analogy that civil engineers don’t have a separate field for bridge safety; rather, bridge safety is something all bridge designers are educated on and concerned about. Similarly, he doesn’t want the field of AI safety to be separate from AI; he wants everyone working on AI to be educated on and concerned with risks from AI.
This is the same point I’m making here, and I’m making it again because the present machine learning space still seems far from it, and we as a community really do need to devote more effort to ensuring that we change this in the near future.
This seems like one of the most tractable things to address to reduce AI risk.
If 5 years from now anyone developing AI or biotechnology is still not thinking (early and seriously) about ways their work could cause harm that other people have been talking about for years, I think we should consider ourselves to have failed.