When it comes to players that are open about the work they are doing, I think Google and OpenAI might develop models more powerful than GPT-4 in the relatively near future.
If OpenAI develops GPT-5 a few months later, it might mean lower profits for those months from ChatGPT and their API service. For Google it's likely similar.
Other actors that might train a model stronger than GPT-4 include the NSA or Chinese companies. FLI seems to have decided against encouraging Chinese companies to join, given that it skipped simple steps like publishing a Chinese version of the letter. The NSA is very unlikely to publicly say anything about whether or not it is training a model, and it certainly won't allow the transparency into what models it is building that the letter calls for.
If I could make a move that signals virtue and doesn’t harm my business interests, why would I reject it?
Because someone prefers a climate where actions taken in the name of AI safety are aimed at actually producing AI safety rather than at virtue signaling?
In an environment where most actions are taken for the sake of virtue signaling, it's easy for all actions to be perceived as being about virtue signaling.