I’ve asked similar questions before and heard a few things. I also have a few personal thoughts I’ll share here unprompted. This topic is pretty relevant for me, so I’d be interested in which specific claims in both categories people agree/disagree with.
Things I’ve heard:
There’s some skepticism about how well-positioned xAI actually is to compete with the leading labs: although they have a lot of capital and ability to fundraise, many of the main bottlenecks right now can’t simply be solved by throwing more money at the problem. Building infrastructure, securing power contracts, hiring top engineers, accessing huge amounts of data, and building on past work are all limited by non-financial factors, so the incumbents have lots of advantages. That said, xAI is placed alongside Meta and Google in the highest-liquidity prediction market I could find on this question, which asks which labs will be “top 3” in 2025.
There’s some optimism about their attitude to safety, since Elon has been talking about catastrophic risks from AI in no uncertain terms for a long time. There’s also some optimism stemming from the fact that he/xAI opted to appoint Dan Hendrycks as an advisor.
Personal thoughts:
I’m not that convinced that they will take safety seriously by default. Elon’s personal beliefs seem hard to pin down and constantly shifting, and honestly, he hasn’t seemed to be doing that well to me recently. He has long held that the SpaceX project is all about getting humanity off Earth before we kill ourselves, and I could see a similar attitude leading to the “build ASI asap to get us through the time of perils” approach that I know others at top AI labs hold (if he doesn’t feel this way already).
I also think (~65%) it was a strategic blunder for Dan Hendrycks to take a public position there. If there’s anything I took away from the OpenAI meltdown, it’s a stronger belief in something like “AI safety realpolitik”: when the chips are down, all that matters is who actually holds the raw power. Fancy titles mean nothing, personal relationships mean nothing, heck, being a literal director of the organization means nothing; all that matters is where the money, infrastructure, and talent are. So I don’t think the advisor position will mean much, and I do think it will seriously complicate CAIS’s efforts to appear neutral, lobby via their 501(c)(4), etc. I have no special insight here, so I hope I’m missing something, or that the position does lead to a positive influence on their safety practices that wouldn’t have been achieved by unofficial/ad-hoc advising.
I think most AI safety discourse is overly focused on the top 4 labs (OpenAI, Anthropic, Google, and Meta) and underfocused on international players, traditional big tech (Microsoft, Amazon, Apple, Samsung), and startups (especially those building high-risk systems like highly technical domain specialists and agents). Similarly, I think xAI gets less attention than it should.