(Spoilers for Interstellar)
I sat next to a person on a flight a few weeks ago who, when our conversation turned to physics, said she thought the movie Interstellar was “amazing” and “scientific”. I agreed with her, thinking she meant the realistic black hole simulations. No, she was talking about the scenes where the main character reaches back in time as a ghost to influence his daughter.
This person was a first-year Ph.D. student in medicine.
So yes, even when science fiction is done relatively carefully, some people will take as “scientific” the parts which have been stretched for better storytelling.
I’m happy that this was done before release. However, I’m still left wondering: how many prompts did they try? In practice, the first self-replicating AI escape is not likely to come from a model working alone on a server, but from a model carefully and iteratively prompted, with the overall strategy supplied by a malicious human programmer. One also wonders what will happen once the model’s own architecture is in the training set. It takes little imagination to see that there is a lot of profit to be made (and more cheaply) by having the AI identify and exploit zero-days to generate and spread malware, say, while shorting the stock of a target company. Perhaps GPT-4 is not yet capable enough to find or exploit zero-days. I suppose we will find out soon enough.
Note that this creates a strong argument for never open-sourcing the model once a certain level of capability is reached: a GPT-N given enough hints about its own structure may be able to competently reimplement itself.