Is the FLI Moratorium already (as weak as it is) too ambitious to be adopted?
There are many reasons besides “too ambitious” why a proposal might not be adopted.
If I imagine that I’m in charge of OpenAI or Google and I could make a move that harms my business interests while doing nothing for safety beyond virtue signaling, why would I adopt it?
If my lab is “unlikely to develop anything more powerful than GPT-4 in the next six months anyway”, then the move is also unlikely to harm my business interests.
If I could make a move that signals virtue and doesn’t harm my business interests, why would I reject it?
Among the players that are open about the work they are doing, I think Google and OpenAI might develop models more powerful than GPT-4 in the relatively near future.
If OpenAI develops GPT-5 a few months later, that might mean lower profits from ChatGPT and their API service for those months. For Google it’s likely similar.
Other actors that might train a model stronger than GPT-4 include the NSA or Chinese companies. FLI seems to have decided against encouraging Chinese companies to join, not even taking simple steps like publishing a Chinese version of the letter. The NSA is very unlikely to say publicly whether or not it is training a model, and it certainly won’t allow transparency into what models it is building, as the letter calls for.
If I could make a move that signals virtue and doesn’t harm my business interests, why would I reject it?
Because someone prefers a climate where AI safety actions are targeted at actually producing AI safety rather than at serving as virtue signals?
In an environment where most actions are taken for the sake of virtue signaling, it’s easy for all actions to be perceived as being about virtue signaling.