Electricity is ~$0.10 per kWh for industry in Texas. Running an H100 for a full year costs ~$1,800 at that price. I’m not sure what the depreciation rate on an H100 is, but 20% per year seems reasonable. If an H100 costs $40,000 and a year of depreciation is $8,000, then you would be losing $800 a year with just 10% idle time.
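Spelling the arithmetic out (a minimal sketch: the ~2 kW all-in draw per GPU is just backed out from the ~$1,800/yr figure at $0.10/kWh, and the price and depreciation numbers are the assumptions above, not measured figures):

```python
# Back-of-the-envelope for the numbers above. Assumptions, not data:
# ~2 kW all-in draw per H100 (chip plus cooling/overhead), $0.10/kWh,
# $40,000 purchase price, 20%/yr depreciation.

POWER_KW = 2.0            # all-in draw per GPU, kW (assumption)
PRICE_PER_KWH = 0.10      # industrial rate, $/kWh (assumption)
GPU_PRICE = 40_000        # $ (assumption)
DEPRECIATION_RATE = 0.20  # per year (assumption)
IDLE_FRACTION = 0.10
HOURS_PER_YEAR = 24 * 365

electricity_per_year = POWER_KW * HOURS_PER_YEAR * PRICE_PER_KWH
depreciation_per_year = GPU_PRICE * DEPRECIATION_RATE
depreciation_lost_to_idle = depreciation_per_year * IDLE_FRACTION

print(f"Electricity per GPU-year: ${electricity_per_year:,.0f}")       # ~$1,752
print(f"Depreciation per year:    ${depreciation_per_year:,.0f}")      # $8,000
print(f"Lost to 10% idle time:    ${depreciation_lost_to_idle:,.0f}")  # $800
```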
So… maybe? But my guess is that natural gas power plants are just cheaper and more reliable—a few cloudy weeks out of a year would easily shift the equation in favor of natural gas. No idea how power cycling affects depreciation. The AI industry people aren’t talking much about solar or wind, and they would be if they thought it was more cost effective.
I don’t think there will actually be an electricity shortage due to AI training; I just think industry is lobbying state and federal lawmakers very, very hard to get these power plants built ASAP. I think it is quite likely, though, that various regulations will be cut and those natural gas plants will go up faster than the 2-3 year figure I gave.
The AI industry people aren’t talking much about solar or wind, and they would be if they thought it was more cost effective.
I don’t see them talking about natural gas either, but nuclear or even fusion, which seems like an indication that whatever’s driving their choice of what to talk about, it isn’t short-term cost-effectiveness.
I was going to make a similar comment, but related to OPEX vs CAPEX for the H100. Say the H100 chip itself consumes 700 W, we assume ~2 kW all-in (cooling and other datacenter overhead included), and the CAPEX is $30,000.
At ~$1,800 per year, electricity is only ~6% of CAPEX. For electricity the data center just wants to buy from the grid; it’s only if they are so big that they are forced to build new connections that they can’t do that. Given that electricity is worth so much more than $0.10/kWh to them, they would simply compete on the national market at the spot price.
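The same back-of-the-envelope, comparing the chip-only draw with an assumed ~2 kW all-in draw (the $0.10/kWh rate and $30,000 CAPEX are from the comment above):

```python
# OPEX vs CAPEX per H100. The ~2 kW all-in figure (chip plus cooling and
# other datacenter overhead) is an assumption; the rest is from the comment.

PRICE_PER_KWH = 0.10
CAPEX = 30_000
HOURS_PER_YEAR = 24 * 365

for label, kw in [("chip only (700 W)", 0.7), ("all-in (~2 kW)", 2.0)]:
    annual_electricity = kw * HOURS_PER_YEAR * PRICE_PER_KWH
    share = annual_electricity / CAPEX
    print(f"{label:>18}: ${annual_electricity:,.0f}/yr = {share:.1%} of CAPEX")
    # -> ~$613/yr = 2.0% of CAPEX, and ~$1,752/yr = 5.8% of CAPEX
```

Either way, annual electricity is a small fraction of the cost of the hardware itself.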
To me that gives a big incentive to have many small datacenters (maybe not possible for training, but perhaps for inference?)
If they can’t do that, then we need to assume they are so large that there is no (or only limited) grid connection available. Then they would build the fastest-to-deploy electricity first: solar + batteries, then natural gas to fill in the gaps, plus perhaps even mobile generators to cover the days with no sun before the gas plants can be built?
I can’t see any credible scenario where nuclear makes a difference. You will either have AGI first, or not that much growth in power demand.
This scenario also seems to depend on a slow takeoff happening in 0-2 years: 2+ million GPUs need to be valuable enough, but not amount to TAI?
If you pair solar with compressed air energy storage, you can inexpensively (unlike chemical batteries) get to around 75% utilization of your AI chips with several days of storage. I’m not sure if that’s enough, so natural gas would be good for the other ~25%. (Wind power is also anticorrelated with solar both diurnally and seasonally, but you might not have good resources nearby.)
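A toy hourly sketch of the solar + storage idea, purely to show how the pieces interact: every constant below (solar oversizing, storage size, weather model, round-trip efficiency) is a made-up assumption, so the printed utilization is illustrative and isn’t meant to reproduce the ~75% figure:

```python
# Toy simulation: constant GPU load fed by oversized solar plus a few days of
# storage. All parameters are illustrative assumptions, not data.
import random

random.seed(0)

LOAD_MW = 1.0            # constant datacenter load, normalized to 1 MW
SOLAR_PEAK_MW = 4.0      # nameplate solar, oversized vs. load (assumption)
STORAGE_MWH = 72.0       # ~3 days of load in storage (assumption)
ROUND_TRIP_EFF = 0.6     # CAES-ish round-trip efficiency (assumption)
CLOUDY_DAY_PROB = 0.15   # fraction of heavily overcast days (assumption)

stored = STORAGE_MWH     # start with storage full
hours_running = 0
HOURS = 24 * 365

for hour in range(HOURS):
    hour_of_day = hour % 24
    if hour_of_day == 0:                      # pick the day's weather
        cloudy = random.random() < CLOUDY_DAY_PROB
    # crude solar profile: output 08:00-18:00, almost nothing when overcast
    if 8 <= hour_of_day < 18:
        solar = SOLAR_PEAK_MW * (0.1 if cloudy else 1.0)
    else:
        solar = 0.0

    if solar >= LOAD_MW:
        # run directly off solar; stash the surplus (losses applied on the way in)
        surplus = solar - LOAD_MW
        stored = min(STORAGE_MWH, stored + surplus * ROUND_TRIP_EFF)
        hours_running += 1
    elif stored >= LOAD_MW - solar:
        # cover the shortfall from storage
        stored -= LOAD_MW - solar
        hours_running += 1
    # else: idle this hour (or burn natural gas, if it's available)

print(f"GPU utilization: {hours_running / HOURS:.0%}")
```

Pushing CLOUDY_DAY_PROB up or STORAGE_MWH down makes the gap that natural gas (or idle time) has to cover much larger, which is the trade-off being argued about here.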