I discuss some of that in [this comment] in reply to Steven Byrnes. I agree electricity is cheap, and discuss that. But electricity is not free, and still becomes a constraint.
At the end of the article I discuss/estimate near-future brain-scale AGI requiring 1000 GPUs for 1000 brain-size agents in parallel, using roughly 1 MW total, or 1 kW per agent instance. That works out to about $2,000/yr per agent for the power & cooling cost. Or if we just estimate directly based on vast.ai prices, it's more like $5,000/yr per agent total for hardware rental (including power costs). The rental price using enterprise GPUs is at least 4x as much, so more like $20,000/yr per agent. So the potential economic advantage is not yet multiple OOM. It's actually more like little to no advantage for low-end robotic labor, or perhaps 1 OOM advantage for programmers/researchers/etc. But if we had AGI today, GPU prices would just skyrocket to arbitrage that advantage, at least until foundries could ramp up GPU production.
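The arithmetic above can be sketched roughly as follows; the ~$0.23/kWh effective power-and-cooling rate is my own back-of-envelope inference from the $2,000/yr figure, not a quoted price:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

# Assumed effective rate for power + cooling, chosen so that 1 kW
# continuous works out to roughly $2,000/yr; real datacenter rates vary.
usd_per_kwh = 0.23

kw_per_agent = 1.0  # ~1 GPU per brain-scale agent instance

power_cost_per_agent = kw_per_agent * HOURS_PER_YEAR * usd_per_kwh
print(f"power+cooling: ${power_cost_per_agent:,.0f}/yr")  # roughly $2,000/yr

# Rough rental-price comparison using the figures from the text
consumer_rental = 5_000                   # vast.ai-style, $/yr per agent
enterprise_rental = 4 * consumer_rental   # at least 4x consumer
print(f"enterprise rental: ${enterprise_rental:,}/yr")
```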
So anyway, given some bound/estimate for power cost per agent, this does allow us to roughly bound the total amount of AGI compute achievable near term, as both world power production and foundry output are difficult to ramp up rapidly.
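As a hedged sketch of that bound: taking world electricity generation at roughly 27,000 TWh/yr (~3 TW average; my own ballpark figure, not from the text) and the 1 kW/agent draw estimated above, even devoting all electricity to AGI caps the number of parallel agent instances at a few billion:

```python
# Rough upper bound on parallel agent instances from world electricity.
world_twh_per_year = 27_000   # my ballpark for world electricity generation
hours_per_year = 8_760
avg_world_power_gw = world_twh_per_year * 1e3 / hours_per_year  # ~3,000 GW

kw_per_agent = 1.0  # from the per-agent estimate above
max_agents = avg_world_power_gw * 1e6 / kw_per_agent  # GW -> kW
print(f"~{max_agents / 1e9:.1f} billion agent instances, using ALL electricity")
```

This is the sense in which world power supply caps net ANN compute near the current human brain population's net compute.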
$2,000/yr per agent is nothing, when we are talking about hypothetical AGI. This seems to be evidence against your claim that energy is a taut constraint.
Sure, the actual price of compute would be more, because of the hardware and facilities etc. But that doesn’t change the bottom line that energy is not a taut constraint.
Maybe you are saying that in the future energy will become a taut constraint because we can’t make chips significantly more energy efficient but we can make them significantly cheaper in every other way, so energy will become the dominant part of the cost of compute?
Energy is always an engineering constraint: it’s a primary constraint on Moore’s Law, and thus also a primary limiter on a fast takeoff with GPUs (because world power supply isn’t enough to support net ANN compute much larger than current brain population net compute).
But again I already indicated it’s probably not a ‘taut constraint’ on early AGI in terms of economic cost—at least in my model of likely requirements for early not-smarter-than-human AGI.
Also yes, additionally, longer term we can expect energy to become a larger fraction of economic cost, through some combination of more efficient chip production and the slowing of Moore's Law itself (which implies chips holding their value for much longer, thus reducing the dominant hardware-depreciation component of rental costs).
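A toy decomposition shows why longer chip lifetimes shift the cost mix toward energy. The $20,000 GPU price, the lifetimes, and the $2,000/yr energy cost are illustrative assumptions, not quotes:

```python
def energy_fraction(hw_price, depreciation_years, annual_energy_cost):
    """Fraction of yearly rental cost that is energy, assuming rental is
    approximately straight-line depreciation + energy (ignoring margin,
    facilities, networking, etc.)."""
    depreciation = hw_price / depreciation_years
    return annual_energy_cost / (depreciation + annual_energy_cost)

# Illustrative: $20,000 enterprise GPU, $2,000/yr power + cooling.
fast_moore = energy_fraction(20_000, 3, 2_000)  # ~3-yr useful life today
slow_moore = energy_fraction(20_000, 8, 2_000)  # chips hold value longer
print(f"energy share: {fast_moore:.0%} -> {slow_moore:.0%}")
```

As depreciation stretches from 3 to 8 years, energy's share of the yearly cost roughly doubles, which is the mechanism the paragraph above describes.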
Or maybe you aren’t saying energy is a taut constraint at all? It sure sounded like you did but maybe I misinterpreted you.