Meanwhile, bored tech industry hackers:
“Show HN: Interact with the terminal in plain English using GPT-3”
https://news.ycombinator.com/item?id=34547015
I don’t particularly care that people are running GPT-3 code (except inasmuch as it makes ML more profitable), and I don’t think it helps to lose focus on what the actual ground-truth concerns are. I want to encourage analysis that gets at deeper similarities than this.
GPT-3’s code output does not pose an existential risk, and even if it did, members of the public couldn’t avert that risk by declining to use it to run shell commands, because, if nothing else, GPT-3, ChatGPT and Codex are all publicly available. Beyond the fact that GPT-3 is specifically not risky in this regard, it’d be a shame if the main takeaway were ‘don’t run code from neural networks’, rather than something more sensible like ‘the more powerful models get, the more relevant their nth-order consequences become’. The model in the story used code output because it’s an especially convenient tool lying around, but it didn’t have to; there are lots of ways text can influence the world. Code is just particularly quick, accessible, precise, and predictable.
Sure, I agree GPT-3 isn’t that kind of risk, so this is maybe 50% a joke. The other 50% is me saying: “If something like this exists, someone is going to run that code. Someone could very well build a tool that runs that code at the press of a button.”
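For a sense of how low the bar is, here is a minimal sketch of such a ‘press of a button’ tool. This is a hypothetical illustration, not the linked Show HN project; the prompt, the text-davinci-003 model name, and the pre-1.0 openai client usage are all assumptions on my part.

```python
# Hypothetical sketch: pipe a language model's answer straight into the shell.
# Not the linked Show HN tool; model name, prompt, and client version are assumptions.
import subprocess

import openai  # assumes the pre-1.0 openai package; reads OPENAI_API_KEY from the environment


def english_to_command(request: str) -> str:
    """Ask the model to turn a plain-English request into a single shell command."""
    completion = openai.Completion.create(
        model="text-davinci-003",  # assumed model; any code-capable completion model would do
        prompt=(
            "Translate the following request into exactly one bash command. "
            f"Output only the command.\n\nRequest: {request}\nCommand:"
        ),
        max_tokens=100,
        temperature=0,
    )
    return completion.choices[0].text.strip()


if __name__ == "__main__":
    cmd = english_to_command(input("What do you want to do? "))
    print(f"Running: {cmd}")
    # The 'press of a button' step: model output goes to the shell with no review gate.
    subprocess.run(cmd, shell=True)
```

The only real design decision in something like this is whether to show the command before executing it; everything else is boilerplate, which is rather the point.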