I expect it to go at maybe about twice the speed of China initially
Ok, this sounds like a fair estimate: 30% annual growth, or a doubling time of roughly 2.4 years (rule of 72). I estimated 2 years, which would be faster growth. One concrete reason to think this is fair is that AI-controlled robots will work 24 hours a day. China’s brutal 996 work schedule, at 72 hours a week, is still only a 43% duty cycle; a robot can sustain at least a 95% duty cycle (the other 5% is downtime for swapping out parts as they fail).
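A quick sanity check on these numbers (the helper name is mine; the rule-of-72 shortcut gives 2.4 years, while the exact figure is slightly longer):

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# 30% annual growth: rule of 72 says 72/30 = 2.4 years; exact is a bit more.
print(round(doubling_time(0.30), 2))  # 2.64

# Duty cycles: a 996 schedule is 72 of 168 hours per week.
human_duty = 72 / 168   # ~0.43
robot_duty = 0.95       # assumed: 5% downtime for part swaps
print(round(robot_duty / human_duty, 1))  # 2.2 -- robots get ~2.2x the hours
```

So the duty-cycle advantage alone roughly doubles throughput per unit of capital, which is consistent with the "about twice the speed of China" guess.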
So, trying to engage with the ‘foom’ idea: you develop some ‘ASI’ in the lab. That ASI is ‘directing’ these things? Why? Why do humans trust its outputs?
I was thinking the ‘foom’ idea was: ok, the ASI robs, lies, steals, and cons. It finds some way to scam humans into doing work for it. And ok, every 2.4 years its resources double. (There might be some early low-hanging fruit; for example, people have suggested an ASI might be able to extract money from financial markets with better trades. That will saturate.)
Meanwhile, other humans use the same technology the ASI unlocks and make controlled ASI—models you prompt without context or memory, or weaker, distilled ASI models that are specialized. Then, under human direction, they simultaneously invest hundreds of billions of dollars into robot factories and double the equipment available to humans. Say the humans are less efficient; well, how much less efficient? Is their doubling time 3 years? 5?
There is a set of numbers where the ASI wins, but so long as the humans start with a large enough resource multiplier, and eventually notice the actions of the rebel ASI, it’s usually going to be a human victory. For example:
The ASI steals 1 billion dollars and covertly starts doubling. With a 2-year doubling time, 10 years later it has 32 billion in resources.
Humans do some funding rounds, raise 100 billion dollars, and legally start doubling. With a 4-year doubling time, 10 years later they have roughly 400 billion in resources (two completed doublings, with the third underway).
Humans spot the unregistered factories belonging to the ASI via infrared imaging or serendipity. The first round of the AI wars starts...
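The race above is just compound growth from two different seeds at two different rates; here is a minimal sketch (treating growth as continuous, which slightly overshoots the in-text human figure of 400 billion, since that counts only completed doublings):

```python
def resources(initial_billions: float, doubling_years: float, years: float) -> float:
    """Resources after compounding at the given doubling time."""
    return initial_billions * 2 ** (years / doubling_years)

asi = resources(1, 2, 10)       # 1B seed, 2-year doubling
humans = resources(100, 4, 10)  # 100B seed, 4-year doubling
print(asi, round(humans))       # 32.0 566
```

Even with the ASI doubling twice as fast, the humans' 100x head start leaves them with more than a 10x resource advantage at the 10-year mark; the crossover would not come for decades.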
The ASI needs a pretty large utility modifier to win with such a resource disparity, especially since humans can feed the context from each battle to their own ASI models, stripping out the context bits that would let a model know it’s being Ender’s Gamed*, and get solutions to the tactical scenarios.
My understanding was that ‘foom’ meant rather more science-fictional takeoff speeds, such as doubling times of a week. That would be a problem if humans cannot also order their resources doubled on a weekly basis.
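To see why weekly doubling is in science-fiction territory, run the same compounding for one year (the helper name is mine):

```python
def resources_after(initial: float, doubling_time_weeks: float, weeks: int) -> float:
    """Resources after compounding at the given doubling time, in weeks."""
    return initial * 2 ** (weeks / doubling_time_weeks)

# $1B seed, doubling every week, for one year:
total = resources_after(1e9, 1, 52)
print(f"{total:.2e}")  # 4.50e+24
```

That is about 4.5 × 10^24 dollars after a single year, many orders of magnitude beyond gross world product, so at that speed the binding constraint is physical resources, not compounding, and the human-countermeasure argument above no longer has time to operate.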
*c’mon give me credit for this turn of phrase