I am pretty concerned that most of the public discussion about risk from e.g. the practice of open sourcing frontier models is focused on misuse risk (particularly biorisk). Misuse risk seems like it could be a real thing, but it’s not where I see most of the negative EV when it comes to open sourcing frontier models. I also suspect that many people doing comms work focused on misuse risk are emphasizing it in a way that is strongly disproportionate to how much of the negative EV they themselves see coming from it, relative to all sources.
I think someone should write a summary post covering “why open-sourcing frontier models and AI capabilities more generally is -EV”. Key points to hit:
(1st order) directly accelerating capabilities research progress
(1st order) we haven’t totally ruled out the possibility of hitting “sufficiently capable systems” which could in principle be used in +EV ways, but which, if made public, would immediately have someone point them at improving themselves, and then we die. (In fact, this is very approximately the mainline alignment plan of all 3 major AGI orgs.)
(2nd order) generic “draws in more money, more attention, more skilled talent, etc” which seems like it burns timelines
And, sure, misuse risks (which in practice might end up being a subset of the second bullet point, but not necessarily so). But in reality, LLM-based misuse risks probably don’t end up being x-risks, unless biology turns out to be so shockingly easy that a (relatively) dumb system can come up with something that gets ~everyone in one go.