I think the key here is ‘substantially’. That’s a standard of evidence which would have to be shown to apply to the uncensored LLM in question, and it’s unclear to me whether current uncensored LLMs meet it. I do think that if GPT-4 were released as an open source model and then fine-tuned to be uncensored, it would be capable enough to meet the requirement of ‘substantially lowering the barrier of entry for non-experts’.
Do you know who would be deciding on orders like this one? Some specialized department in the USG, whichever judge happens to hear the case, or something else?
I do not know. I can say that I’m glad they are taking these risks seriously. The low screening security on DNA synthesis orders has been making me nervous for years, ever since I learned the nitty-gritty details while I was working in the lab back in grad school, engineering viruses to manipulate the brains of mammals for neuroscience experiments. Allowing anonymous people to order custom synthetic genetic sequences over the internet without screening just makes it too easy to do bad things.
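To make the screening point concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of automated check a synthesis provider could run: compare the k-mers of an ordered sequence against a curated list of sequences of concern, and escalate orders with high overlap to human review. The function names, thresholds, and toy sequences below are hypothetical assumptions for illustration, not a description of any real provider’s screening pipeline.

```python
# Hypothetical sketch: k-mer overlap screening of a DNA synthesis order.
# The hazard list, threshold, and sequences are all illustrative placeholders.

def kmers(seq: str, k: int = 20) -> set[str]:
    """Return the set of length-k substrings (k-mers) of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str, hazard_db: list[str], k: int = 20,
                 overlap_threshold: float = 0.1) -> bool:
    """Flag an order if enough of its k-mers match any sequence of concern."""
    order_kmers = kmers(order_seq, k)
    if not order_kmers:
        return False
    for hazard_seq in hazard_db:
        shared = order_kmers & kmers(hazard_seq, k)
        if len(shared) / len(order_kmers) >= overlap_threshold:
            return True  # escalate to human review instead of auto-fulfilling
    return False

# Toy usage with made-up sequences:
hazard_db = ["ATGACCGTTAGC" * 10]                # stand-in for a curated sequence-of-concern list
order = "ATGACCGTTAGC" * 5 + "GGGTTTCCCAAA" * 2  # stand-in for a customer order
print(screen_order(order, hazard_db))            # True -> would trigger manual review
```

Real screening systems are considerably more sophisticated (and their details are deliberately not public), but even a crude check like this only helps if it is mandatory and applied to every order, including anonymous ones, which is the policy gap being pointed at here.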
Do you think we need to ban open source LLMs to avoid catastrophic biorisk? I’m wondering if there are less costly ways of achieving the same goal. Mandatory DNA synthesis screening is a good start. It seems that, today, there is no known pathogen that would cause a pandemic, and therefore the key thing to regulate is the biological design tools that could help someone design a new pandemic pathogen. Would these risk mitigations, combined with better pandemic defenses via AI, counter the risk posed by open source LLMs?
I think that in the long term we can make it safe to have open source LLMs, once there are better protections in place. By long term, I mean that I would advocate not releasing stronger open source LLMs for probably the next ten years or so, or until a really solid monitoring system is in place, if that happens sooner. We’ve made a mistake by publishing too much research openly, with tiny pieces of dangerous information scattered across thousands of papers. Almost nobody has the time and skill to read and understand all of that, or even a significant fraction of it. But models can, and a model that can put the pieces together and deliver them in a convenient summary is dangerous precisely because the pieces are there.
Why do you believe it’s, on the whole, a ‘mistake’ rather than beneficial?
I can think of numerous benefits, especially in the long term.
e.g. drawing the serious attention of decision makers who might otherwise have believed it to be a bunch of hooey and ignored the whole topic.
e.g. discouraging certain groups from trying to ‘win’ a geopolitical contest by rushing to create a ‘super’-GPT, now that they know their margin of advantage is not so large anymore.
Oh, I meant that the mistake was publishing too much information about how to create a deadly pandemic. No, I agree that the AI stuff is a tricky call with arguments to be made for both sides. I’m pretty pleased with how responsibly the top labs have been handling it, compared to how it might have gone.
Edit: I do think that there is some future line beyond which AI academic publishing would be unequivocally bad. I also think slowing down AI progress in general would be a good thing.
Okay, I guess my question still applies?
For example, it might be that letting it progress without restriction has more upsides than slowing it down.
An example of something I would be strongly against anyone publishing at this point in history is an algorithmic advance which drastically lowered compute costs for an equivalent level of capabilities, or substantially improved hazardous capabilities (without tradeoffs) such as situationally-aware strategic reasoning or effective autonomous planning and action over long time scales. I think those specific capability deficits are keeping the world safe from a lot of possible bad things.
Yes, it’s clear these are your views. Why do you believe so?
I think… maybe I see the world, and humanity’s existence on it, as a more fragile state of affairs than other people do. I wish I could answer you more thoroughly.
https://www.lesswrong.com/posts/uPi2YppTEnzKG3nXD/nathan-helm-burger-s-shortform?commentId=qmrrKminnwh75mpn5
Not sure, but maybe the new AI institute they’re setting up as a result.