xAI has ambitions to compete with OpenAI and DeepMind, but I don’t feel like it has the same presence in the AI safety discourse. I don’t know anything about its attitude to safety, or how serious a competitor it is. Are there good reasons it doesn’t get talked about? Should we be paying it more attention?
I’ve asked similar questions before and heard a few things. I also have a few personal thoughts that I thought I’d share here unprompted. This topic is pretty relevant to me, so I’d be interested in which specific claims in both categories people agree or disagree with.
Things I’ve heard:
There’s some skepticism about how well-positioned xAI actually is to compete with leading labs: although they have a lot of capital and the ability to fundraise, many of the main bottlenecks right now can’t simply be solved by throwing more money at the problem. Building infrastructure, securing power contracts, hiring top engineers, accessing huge amounts of data, and building on past work are all constrained by non-financial factors, so the incumbents have lots of advantages. That said, xAI is placed alongside Meta and Google in the highest-liquidity prediction market I could find on this question, which asks which labs will be “top 3” in 2025.
There’s some optimism about their attitude to safety since Elon has been talking about catastrophic risks from AI in no uncertain terms for a long time. There’s also some optimism coming from the fact that he/xAI opted to appoint Dan Hendrycks as an advisor.
Personal thoughts:
I’m not that convinced that they will take safety seriously by default. Elon’s personal beliefs seem to be hard to pin down/constantly shifting, and honestly, he hasn’t seemed to be doing that well to me recently. He’s long had a belief that the SpaceX project is all about getting humanity off Earth before we kill ourselves, and I could see a similar attitude leading to the “build ASI asap to get us through the time of perils” approach that I know others at top AI labs have (if he doesn’t feel this way already).
I also think (~65%) it was a strategic blunder for Dan Hendrycks to take a public position there. If there’s anything I took away from the OpenAI meltdown, it’s a greater belief in something like “AI safety realpolitik”: when the chips are down, all that matters is who actually has the raw power. Fancy titles mean nothing, personal relationships mean nothing, heck, being a literal director of the organization means nothing; all that matters is where the money and infrastructure and talent are. So I don’t think the advisor position will mean much, and I do think it will terribly complicate CAIS’ efforts to appear neutral, lobby via their 501(c)(4), etc. I have no special insight here, so I hope I’m missing something, or that the position does lead to a positive influence on their safety practices that wouldn’t have been achieved by unofficial/ad-hoc advising.
I think most AI safety discourse is overly focused on the top 4 labs (OpenAI, Anthropic, Google, and Meta) and underfocused on international players, traditional big tech (Microsoft, Amazon, Apple, Samsung), and startups (especially those building high-risk systems like highly-technical domain specialists and agents). Similarly, I think xAI gets less attention than it should.
A new Bloomberg article says xAI is building a datacenter in Memphis, planned to become operational by the end of 2025, and mentions a new-to-me detail: the datacenter targets 150 megawatts (more details on DCD). That implies a scale of roughly 100,000 GPUs, or about $4 billion in infrastructure, the bulk of the $6 billion xAI recently secured in its Series B round.
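As a rough sanity check on those figures, here’s a back-of-envelope sketch; the ~1.5 kW all-in per H100 and ~$40k per GPU of total infrastructure cost are my own assumed numbers, not from the article:

```python
# Back-of-envelope check of the 150 MW -> ~100K GPUs -> ~$4B figures.
# Assumed numbers (not from the article):
#   - ~1.5 kW per H100 all-in (GPU plus host CPUs, networking, cooling overhead)
#   - ~$40k per GPU of total infrastructure cost (GPUs, servers, networking, buildout)

datacenter_power_w = 150e6      # 150 megawatts
power_per_gpu_w = 1.5e3         # ~1.5 kW per H100, all-in (assumption)
capex_per_gpu_usd = 40e3        # ~$40k per GPU, all-in (assumption)

num_gpus = datacenter_power_w / power_per_gpu_w
total_capex = num_gpus * capex_per_gpu_usd

print(f"GPUs supported: ~{num_gpus:,.0f}")                # ~100,000
print(f"Infrastructure cost: ~${total_capex / 1e9:.1f}B") # ~$4B
```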
This should be good for training runs costing on the order of $1 billion in time on the cluster (lasting a few months). And Dario Amodei has said this is the scale of today, for models that are not yet deployed. That puts xAI about 18 months behind, a difficult place to rebound from unless long-horizon-capable AI that can do many jobs (a commercially crucial threshold that is not quite AGI) is many more years away.
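The “$1 billion in cost of time” figure also roughly checks out if one assumes market-rate rental pricing of around $3.5 per H100-hour and a run lasting about four months (both my assumptions):

```python
# Rough sketch of a ~$1B "cost of time" training run on a 100K-GPU cluster.
# Assumed numbers (not from the comment):
#   - ~$3.5 per H100-hour (roughly market rental rates)
#   - a run lasting ~4 months

num_gpus = 100_000
usd_per_gpu_hour = 3.5      # assumption
run_hours = 4 * 30 * 24     # ~4 months

run_cost = num_gpus * usd_per_gpu_hour * run_hours
print(f"Cost of time: ~${run_cost / 1e9:.1f}B")  # ~$1.0B
```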
It seems the 100K H100s for the Memphis datacenter could plausibly come online around the end of 2024, and the planned release of Grok-3 gives additional indirect evidence this might be the case. Meanwhile, OpenAI might have started training in May on a cluster that might also have 100K H100s. So I’m updating my previous guess of xAI being 18 months behind to them being only 7-9 months behind at the 100K H100 scale (above 4e26 FLOPs).
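For the “above 4e26 FLOPs” figure, a sketch assuming ~1e15 FLOP/s of peak dense BF16 per H100, ~40% utilization, and a ~4-month run (the utilization and run length are my assumptions):

```python
# Rough check of the "above 4e26 FLOPs" figure for a 100K H100 training run.
# Assumed numbers (not from the comment):
#   - ~1e15 FLOP/s peak dense BF16 per H100
#   - ~40% model FLOP utilization
#   - a run lasting ~4 months

num_gpus = 100_000
peak_flops_per_gpu = 1e15         # ~1e15 FLOP/s dense BF16
utilization = 0.40                # assumption
run_seconds = 4 * 30 * 24 * 3600  # ~4 months, roughly 1e7 seconds

total_flops = num_gpus * peak_flops_per_gpu * utilization * run_seconds
print(f"Training compute: ~{total_flops:.1e} FLOPs")  # ~4e26
```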
For some reason, current labs are not already running $10 billion training runs; they didn’t build the necessary datacenters immediately. That would take a million H100s and about 1.5 gigawatts, so supply issues seem likely. There is also a lot of engineering detail to iron out, so scaling proceeds gradually.
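Those figures are roughly a 10x scale-up of the earlier sketches, under the same assumed per-GPU power and rental-rate numbers:

```python
# Scaling the 100K-GPU sketch up 10x to a $10B-class run.
# Same assumptions as the earlier sketches (~1.5 kW and ~$3.5/hour per H100, ~4-month run).

num_gpus = 1_000_000
power_gw = num_gpus * 1.5e3 / 1e9          # ~1.5 GW
run_cost = num_gpus * 3.5 * (4 * 30 * 24)  # ~$10B in cost of time

print(f"Power: ~{power_gw:.1f} GW, cost of time: ~${run_cost / 1e9:.0f}B")
```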
But some of this might be risk aversion, an unwillingness to waste capital where a slower pace makes better use of it. Since a new contender has no other choice, we’ll get to see if it’s possible to leapfrog scaling after all. And Musk has an affinity for impossible deadlines (not necessarily for meeting them), so the experiment will at least be attempted.