I’m happy that this was done before release. However … I’m still left wondering “how many prompts did they try?” In practice, the first self-replicating AI escape is unlikely to be a model working alone on a server; more likely it will be a model carefully and iteratively prompted, with the overall strategy supplied by a malicious human programmer. One also wonders what happens once the base architecture is in the training set. And one need only recognize that there is a lot of profit to be made (and more cheaply than engineering a full escape) by having the AI identify and exploit zero-days to generate and spread malware, say, while shorting the stock of a target company. Perhaps GPT-4 is not yet capable enough to find or exploit zero-days. I suppose we will find out soon enough.
Note that this creates a strong argument for never open-sourcing the model once a certain level of capability is reached: a GPT-N given enough hints about its own structure will be able to competently reimplement itself.