I largely agree with the above, but I'll add my own version.

What I think companies with AI services should do:

Can be done in under a week:

- Have a monitored communication channel for people, primarily security researchers, to responsibly disclose potential issues (“Potential Flaw”)
  - Creating an email alias (ml-disclosures@) that forwards to an appropriate team
  - Submissions are promptly acknowledged with a positive receipt (“Vendor Receipt”)
- Have clear guidance (even just a blog post or similar) about what constitutes an issue worth reporting (“Vulnerability”)
  - Even a simple paragraph giving a high-level overview could be evolved over time
- Have an internal procedure/playbook for triaging and responding to potential issues. Here are some outcomes I think you could have, with heuristics for choosing between them (a minimal sketch follows this list):
  - Non-Vulnerability: reply that the reported behavior is safe to disclose publicly
  - Investigation: reply that more investigation is needed, and give a timeline for an updated response
  - Vulnerability: reply that the reported behavior is a vulnerability, and give a timeline for resolution and release of a fix, as well as updates as those timelines change
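To make the receipt and playbook above concrete, here is a minimal sketch in Python. All the names (`TriageOutcome`, `Report`, `vendor_receipt`, `triage_reply`) are hypothetical and purely illustrative; any real implementation would more likely live in a ticketing system or runbook than in code like this.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class TriageOutcome(Enum):
    """The three playbook outcomes described above."""
    NON_VULNERABILITY = auto()  # reported behavior is safe to disclose publicly
    INVESTIGATION = auto()      # needs more work; promise an updated response
    VULNERABILITY = auto()      # confirmed; promise a fix timeline and updates


@dataclass
class Report:
    """A disclosed "Potential Flaw" (fields are hypothetical)."""
    reporter: str
    summary: str


def vendor_receipt(report: Report) -> str:
    """The prompt positive acknowledgment ("Vendor Receipt") sent on submission."""
    return (
        f"Thanks for your report, {report.reporter}. We've received it and "
        "will follow up with a triage decision."
    )


def triage_reply(outcome: TriageOutcome, timeline_days: Optional[int] = None) -> str:
    """Canned reply template for each triage outcome."""
    if outcome is TriageOutcome.NON_VULNERABILITY:
        return "We assessed the reported behavior and consider it safe to disclose publicly."
    if outcome is TriageOutcome.INVESTIGATION:
        return (f"We need more time to investigate; expect an updated response "
                f"within {timeline_days} days.")
    return (f"We've confirmed this as a vulnerability and expect to release a fix "
            f"within {timeline_days} days; we'll send updates if that timeline changes.")


if __name__ == "__main__":
    report = Report(reporter="researcher@example.com",
                    summary="Crafted input causes the model to leak its system prompt")
    print(vendor_receipt(report))
    print(triage_reply(TriageOutcome.VULNERABILITY, timeline_days=90))
```

The point is just that each outcome comes with a pre-written reply and an attached timeline, so responding to a report is a matter of picking a branch rather than drafting from scratch.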
Longer term:

- Have a public bounty program to incentivize responsible disclosure
  - There even exist products to help companies deploy these sorts of programs
- Coordinate with other organizations running similar systems
  - Confidential communication channels to share potentially critical-severity vulnerabilities
  - Possibly, eventually, a central coordinating organization (analogous to MITRE) that de-duplicates the work of handling broad vulnerabilities; I think this will become more important once there are many more service providers than there are today
- Coordinate as a field to develop a shared understanding of what does and does not constitute a vulnerability. These systems are still nascent, so a lot of work remains to define this, and that work is better done with a broad set of perspectives and inputs
- Cultivate positive relationships with responsible researchers and responsible research behaviors
  - Don't stifle this kind of research by outright banning it; that is how you end up with black hats being the only researchers breaking your system
  - Create programs and procedures specifically to encourage and enable this kind of research in ways that are mutually beneficial
  - Reward and publicly credit researchers who do good work
References:

- Responsible Vulnerability Disclosure Process (draft IETF report)
- Guidelines for Security Vulnerability Reporting and Response (Organization for Internet Safety)

There are probably better, more up-to-date references, but I've been out of this field for a long time.