Useful comparison, but I’d say AI is currently better compared to biology than to computer security. Making the reality of the situation more comparable to computer security would be great; there’s a continuum you could draw between the two fields in terms of how feasible it is to defend against risks. In general, the thing I want to advocate is calibrating caution to the level of risk, and I believe AI is currently in a situation best compared to gain-of-function research on viruses. Don’t publish research that aids gain-of-function researchers unless you can defend against what they’re going to come up with based on it. And right now, we’re not remotely close to being able to defend current minds, human or AI, against the long tail of dangerous outcomes of gain-of-function AI research. If that changed, it would look like the nodes getting yellower and yellower as we go, and correspondingly less need to worry that people are making red nodes easier to reach. Once defense is mostly reliable, and the community can produce a new reliable defense quickly, it becomes a lot more reasonable to publish things that enable gain-of-function.
My issue is that, right now, all the ideas for how to make defenses better also help gain-of-function a lot, and people regularly write papers whose justifications sound to me like the intro of a gain-of-function biology paper: “There’s a bad thing, and we need to defend against it. To research this, we made it worse, in the hope that doing so would teach us how it works...”
[edit: pinned to profile]