What’s the situation?
In the USA: Musk’s xAI announced Grok to the world two weeks ago, after two months of training. Meta disbanded its Responsible AI team. Google’s Gemini is reportedly slated for release in early 2024. OpenAI has confused the world with its dramatic leadership spasm, but GPT-5 is on the way. Google and Amazon have promised billions to Anthropic.
In Europe, France’s Mistral and Germany’s Aleph Alpha are trying to keep the most powerful AI models unregulated. China has had regulations for generative AI since August, but is definitely aiming to catch up to America. Russia has GigaChat and SistemmaGPT; the UAE has Falcon. I think none of these are at GPT-4’s level, but surely some of them can get there in a year or two.
Very few players in this competitive landscape talk about AI as something that might rule or replace the human race. Despite the regulatory diplomacy that also came to life this year, the political and economic elites of the world are on track to push AI across the threshold of superintelligence, without any realistic sense of the consequences.
I continue to think that the best chance of a positive outcome lies with AI safety research (and perhaps realistic analysis of what superintelligence might do with the world) that is in the public domain. All these competing power centers may keep the details of their AI capabilities research secret, but public AI safety research has a chance of being noticed and used by any of them.