“By doing this chip thing, Sam Altman is likely to make takeoff faster (on the margin) than it otherwise would be.”
It makes takeoff faster, but that is not necessarily a fast takeoff. A fast takeoff is a specific science-fiction scenario in which events unfold many times faster than human beings can respond.
Let’s look at it with numbers:
Suppose you actually need 80 H100s per AGI, and suppose 2 generations of Moore’s law happen before AGI is released, so the number drops to 20.
Suppose they are $25k per card. That’s $500,000 per “person-equivalent”, though that “person” works 24/7 instead of at most 996 (roughly 72 hours a week), and can load multiple models.
If annual spending on chip production in 5 years is $1,000 billion instead of the current $528 billion, and 90% (!) of that goes solely to AI chips, that’s 1.8 million “person-equivalents” added to the workforce each year. (I am pretending all the other chips are free.)
If we assume at least 30% of those are reinvested into training more efficient AI models, and we also correct for the duty cycle (168 hours a week versus 72), that’s 2.94 million people added to the workforce per year. (Exponential gains are also slow: if you need 2-3 years per doubling, that’s not helping much, especially since many of the instances aren’t being reinvested.)
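As a sanity check, here is that back-of-envelope arithmetic as a Python sketch; the 996 schedule is read as ~72 hours/week, and all other inputs are just the assumptions stated above:

```python
# Back-of-envelope: AGI "person-equivalents" from projected chip spending.
gpus_per_agi = 80 // 4                    # 80 H100s today, halved twice by Moore's law -> 20
cost_per_agi = gpus_per_agi * 25_000      # $500k per person-equivalent

annual_chip_spend = 1_000e9               # projected annual chip spend in 5 years
ai_fraction = 0.90                        # assume 90% goes solely to AI chips
raw = annual_chip_spend * ai_fraction / cost_per_agi       # 1.8M per year

reinvested = 0.30                         # fraction diverted to training better models
duty_cycle = (24 * 7) / 72                # 24/7 vs. a 996 schedule (~72 h/week) ~= 2.33
effective = raw * (1 - reinvested) * duty_cycle            # ~= 2.94M per year
print(f"{raw:,.0f} raw, {effective:,.0f} duty-cycle-corrected per year")
```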
I think this is what Sam means when he said “It will change the world much less than we all think and it will change jobs much less than we all think”.
Update:
Suppose instead the $7T investment happens. How much does this speed up production?
Let’s assume we get a 30 percent annual ROI on the $7T: it has paid for more chip-making equipment, more silicon production plants, and better automation of both (near-future low-hanging fruit only!).
And let’s assume we are paying cost for the chips, but that the net cost is double: $3,000 (current estimates) to build a GPU, plus another $3,000 of support ICs to use it.
So that’s $2.1 trillion each year in ICs, and every $120k (20 GPUs at $6k all-in) is another AGI instance.
That’s like adding another 572 million people per year.
Hmm. Yeah, that’s kind of a singularity. ICs stop being a limiting factor, and then obviously you are adding another China to the Earth every 2 years, especially if you factor in exponential growth. Or consider that you functionally get 245 million test dummies/automated AI researchers each year...
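Redoing the arithmetic for the $7T case: the stated inputs give 17.5 million instances per year, so the 245M/572M figures additionally imply each instance doing the work of roughly 14 humans even before the duty-cycle correction (an unstated multiplier, presumably batching/loading multiple models):

```python
# Back-of-envelope for the $7T scenario.
annual_ic_output = 7e12 * 0.30                 # 30% annual ROI -> $2.1T of ICs per year
cost_per_agi = 20 * (3_000 + 3_000)            # GPU + support ICs = $120k per instance
instances = annual_ic_output / cost_per_agi    # 17.5M AGI instances per year

# The 245M-researchers figure implies ~14 human-equivalents of work per
# instance before the duty-cycle correction; this factor is not stated
# explicitly in the comment, so it is inferred from the headline numbers.
per_instance_factor = 245e6 / instances        # = 14.0
duty_cycle = (24 * 7) / 72                     # ~= 2.33
people = instances * per_instance_factor * duty_cycle    # ~= 572M per year
print(f"{instances:,.0f} instances -> {people:,.0f} person-equivalents/year")
```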
Update: well, it’s kind of a fast takeoff actually.
That’s not a fast takeoff, that’s a slow takeoff, and it’s not an impulse.
A fast takeoff would be a scenario where, say, enough GPUs to host 1 billion extra humans (or the cognitive equivalent) were already “out in the world”, most of the hardware was already in clusters that could host big AI models, and it was all poorly secured and poorly monitored. Some of the original AI timeline calculations were for the moment when $1,000 of compute would equal a human brain. Such a world would be at risk of a fast takeoff, where every random computer in a gas station can host an AGI.
I do actually think $7T is enough that it would materially accelerate Moore’s law, since “production gets more efficient over time”-style laws tend to be functions of quantity produced, not of time.
In a world where we’re currently spending ~$600B/year on semiconductors, spending a few billion (the size of the current largest AI training runs) is insignificant, but if Sam really does manage to spend $7T over 5 years, that would basically triple our semiconductor capacity.
There might also be negative feedback loops: when you try to spend a large amount of money quickly, you tend to do so less efficiently, so I doubt the Moore’s law rate would literally triple. But if you thought (as Kurzweil predicts) that AGI will arrive circa 2035 on Moore’s law alone, it is conceivable that an investment of this (frankly ridiculous) scale could cut that time from 10 years down to 5.
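A minimal sketch of that argument, assuming (Wright’s-law style) that learning-curve progress tracks cumulative quantity produced rather than calendar time:

```python
# If progress is a function of cumulative units produced, multiplying the
# production rate pulls future progress forward in time.
baseline = 0.6e12                    # ~$600B/year of semiconductors today
boosted = baseline + 7e12 / 5        # + $1.4T/year if $7T is spent over 5 years
print(boosted / baseline)            # ~= 3.3x annual capacity

# The cumulative output of 10 baseline years arrives much sooner at the
# boosted rate; real-world spending inefficiencies would push the naive
# ~3-year figure back toward the ~5 years suggested above.
print(10 * baseline / boosted)       # ~= 3 years
```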