Has Musk tried to convince the other AI companies to also worry about safety?
My main concern right now is lab proliferation, the coordination problems that follow from it, and the resulting disagreements, adversarial communication, and overall polarized discourse.
Google DeepMind: They are older than OpenAI and also have a safety team. They are very much aware of the arguments. I don't know what impact Musk has had on them.
Anthropic: They split off from OpenAI. My best guess is that they care about safety at least as much as OpenAI does. Many safety researchers have quit OpenAI to work for Anthropic over the past few years.
xAI: Founded by Musk several years after he walked away from OpenAI. People working there have previously worked at other big labs. The general consensus seems to be that their alignment plan (at least as explained by Elon) is quite confused.
SSI: Founded by Ilya Sutskever after he left OpenAI, which he did following his participation in a failed effort to fire Sam Altman. Very much aware of the arguments.
Meta AI: To the best of my knowledge, aware of the arguments but very dismissive of them (at least at the upper management levels).
Mistral AI: I don't know much about them, but they are probably about the same as Meta AI, or worse.
Chinese labs: No idea. I’ll have to look into this.
I am confident that there are relatively influential people within DeepMind and Anthropic who post here and/or on the Alignment Forum. I am unsure about people from other labs, as I am nothing more than a relatively well-read outsider.