Meta: There’s an AI Governance tag and a Regulation and AI Risk tag
My own (very limited) understanding is:
Asking people not to build AI is almost like asking them to give up a money machine
We'd need everyone to agree to stop, not just some actors
There is no clear line. With an atom bomb, it's pretty well defined whether you launched it or not. It's much more vague with "did you do AI research?"
It's pretty easy to notice if someone launched an atom bomb. Not so easy to notice if they researched AI
AI research is getting cheaper. Today only a few actors can do it, but there are already open-source versions of GPT-like models. How long could we hold it back?
Still, people are trying to make progress in this direction, and I'm pretty sure the overall situation is "try any direction that seems at all plausible"
Thanks, this is helpful!