Thank you for such a detailed write-up! I have to admit I am teetering on the issue of whether or not to ban open-source LLMs, and as a co-founder of an AI-for-drug-design startup I had taken the increased biosecurity risk as probably the single most important consideration. So I think the conversation sparked by your post is quite valuable.
That said, even considering everything you presented, I am still leaning towards banning powerful open-source LLMs, at least until we have much more information and, most importantly, until we establish other safeguards against global pandemics (like “airplane lavatory detectors” etc.).
First, I think there is a big difference between having access to online information about making lethal pathogens and having the full assistance of an LLM; the latter makes a quite significant difference, especially for people starting from near zero.
Then, when I consider all of my ideas for the kind of damage that could be done with these new capabilities, especially combining LLMs with generative bioinformatics AIs… I think a lot of caution is surely warranted.
Ultimately, if you take the potential benefits of open-source LLMs over fine-tuned LLMs (not hugely significant in my opinion, though of course we don’t have that data either) and compare them to the risks posed by essentially removing all of the guardrails and safety measures everyone is working on in AI labs… I think at least waiting some time before open-sourcing is the right call for now.