In computer security, there is an ongoing debate about vulnerability disclosure, which at present seems to have settled on ‘if you aren’t running a bug bounty program for your software you’re irresponsible, Project Zero gets it right, Metasploit is a net good, and it’s OK to make exploits for hackers ideologically aligned with you’.
The framing of the question for decades was essentially: “Do you tell the person or company with the vulnerable software, who may ignore you or sue you because they don’t want to spend money? Do you tell the public, where someone might adapt your report into an attack?”
Of course, there is the (generally believed to be) unethical option chosen by many: “sell it to someone who will use it, and who will protect your identity as the author from people who might retaliate.”
There was an alternative called ‘antisec’ (https://en.m.wikipedia.org/wiki/Antisec_Movement), which basically argued ‘don’t tell people about exploits; they’re expensive to make, very few people develop the talents to smash the stack for fun and profit, and once they’re out, they’re easy to use to cause mayhem’.
The movement did not go anywhere, and the antisec viewpoint is not present in any mainstream discussion of vulnerability ethics.
In another field, nations have broadly worked together to not publicly disclose technical data that would make building nuclear bombs simple. It is left as an exercise for the reader to determine whether that has worked.
So, the ideas here have been tried in different fields, with mixed results.
A useful comparison, but I’d say AI is better compared to biology than to computer security at the moment. Making the reality of the situation more comparable to computer security would be great, and there is a continuity you could draw between the two in terms of how possible it is to defend against risks. In general, the thing I want to advocate is being appropriately cautious for a given level of risk, and I believe AI is currently in a situation best compared to gain-of-function research on viruses: don’t publish research that aids gain-of-function researchers unless you have the ability to defend against what they’re going to come up with based on it. And right now, we’re not remotely close to being able to defend current minds, human or AI, against the long tail of dangerous outcomes of gain-of-function AI research. If that were to change, it would look like the nodes getting yellower and yellower as we go, and, as a result, a fading need to worry that people are making red nodes easier to reach. Once you can mostly reliably defend, and the community can come up with a reliable defense fast, it becomes a lot more reasonable to publish things that enable gain-of-function.
My issue is that, right now, all the ideas for how to make defenses better also help gain-of-function a lot, and people regularly write papers whose justifications for their research sound to me like the intro of a gain-of-function biology paper: “There’s a bad thing, and we need to defend against it. To research this, we made it worse, in the hope that this would teach us how it works...”