I disagree with the last two paragraphs. First, global nuclear war implies the destruction of civilized society, and bunkers can do very little to mitigate this at scale. Global supply chains, and especially food production, are the important factor. To restructure an entire country's food production and transportation in the aftermath of a nuclear war, AGI would have to come up with biotechnology bordering on magic from our point of view.
Even if building bunkers were a good idea, it’s questionable whether that’s an area where AGI helps a lot compared to many other areas. Same for ICBMs: I don’t see how AGI changes the defensive/offensive calculation much.
To use the Opium Wars scenario: AGI enables a high degree of social control and influence. My expectation is that one party having a decisive AI advantage (implying also a wealth advantage) in such a situation may not need to use violence at all. Rather, it may be feasible to gain enough political influence to achieve most goals (including such a mundane goal as making people and government tolerate the trade of drugs).
Hi Herb. I think the crux here is that you are not interpreting the first sentence of the second-to-last paragraph the way I am:
“AGI smart enough to perform basic industrial tasks”
I mean all industrial tasks: it’s a general system, capable of learning when it makes a mistake.
All industrial tasks means all tasks required to build robots, which means all tasks required to build sensors and gearboxes and wiring harnesses and milled parts and motors, which means all tasks required to build microchips and metal ingots...all the way down the supply chain to base mining and the deployment of solar panels.
Generality means all these tasks can be handled by (separate, isolated instances of) one system that benefits from having initially mined all of human knowledge, like currently demonstrated systems.
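A toy way to picture that recursive “which means” expansion is as the transitive closure of a bill of materials. The sketch below is purely illustrative: every item name and dependency is invented, not a real manufacturing graph.

```python
# Toy bill of materials: each item maps to the items needed to produce it.
# All names and dependencies are hypothetical placeholders, not real data.
BOM = {
    "robot": ["sensor", "gearbox", "wiring_harness", "milled_part", "motor"],
    "sensor": ["microchip", "milled_part"],
    "gearbox": ["milled_part", "metal_ingot"],
    "wiring_harness": ["metal_ingot"],
    "milled_part": ["metal_ingot"],
    "motor": ["metal_ingot", "microchip"],
    "microchip": ["silicon_wafer", "energy"],
    "metal_ingot": ["mined_ore", "energy"],
    "silicon_wafer": ["mined_ore", "energy"],
    "mined_ore": ["energy"],
    "energy": [],  # supplied by deployed solar panels in this toy model
}

def required_capabilities(item, bom):
    """Return every task/material transitively needed to build `item`."""
    needed, stack = set(), [item]
    while stack:
        for dep in bom.get(stack.pop(), []):
            if dep not in needed:
                needed.add(dep)
                stack.append(dep)
    return needed

print(sorted(required_capabilities("robot", BOM)))
# Every item printed is a capability the general system must cover, down to mining.
```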
This means that bunkers do work: there are exponential numbers of robots. An enemy with 1000 nuclear warheads would be facing a country that could potentially have every square kilometer covered with surface factories. Auto-deduplication would be possible: by paying a small inefficiency cost, no single step of the supply chain need be concentrated in any one location across the country’s territory. And any damage can be repaired simply by ordering the manufacture of more radiation-resistant robots to clear the rubble, after which construction machines come and rebuild everything that was destroyed by emplacing prefab modules built by other factories.
Food obviously comes from indoor hydroponics, which is just another factory-made module.
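To put a rough number on the dispersal point, here is a toy Monte Carlo with invented parameters (1000 supply-chain steps, each duplicated at 10 separated sites, 1000 warheads each destroying exactly one site, ignoring blast radius and targeting intelligence):

```python
import random

def chain_survival_probability(steps=1000, copies=10, warheads=1000, trials=1000):
    """Monte Carlo: each warhead destroys exactly one randomly chosen site.
    The supply chain breaks only if some step loses every one of its copies."""
    sites = [(step, copy) for step in range(steps) for copy in range(copies)]
    broken = 0
    for _ in range(trials):
        hit_counts = {}
        for step, _copy in random.sample(sites, warheads):
            hit_counts[step] = hit_counts.get(step, 0) + 1
        if any(count == copies for count in hit_counts.values()):
            broken += 1
    return 1 - broken / trials

# Illustrative numbers only: 1000 steps x 10 dispersed copies vs 1000 warheads.
print(f"P(supply chain survives) ~ {chain_survival_probability():.3f}")
```

Under these assumptions the chain essentially always survives, because no single step loses all of its copies; the real question is how widely the copies can actually be separated.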
If you interpret it this way, does your disagreement remain?
If you doubt this is possible, can you explain, with technical details, why this form of generality is not possible in the near future? If you believe it is not possible, how do you explain current demonstrated generality?
The additional delta over LLMs is that you have trained on all the video in the world, which means the AI system has knowledge of the general policies humans use when facing tool-using tasks. After that, you have refined the AI system with many thousands of hours of RL training on actual industrial tasks, first in simulation, then in the real world.
Near future means 5-20 years.
For that path, it takes AI that’s capable enough for all industrial (and non-industrial) tasks. But you also need all the physical plant (both the factories and the compute power to distribute to the tasks) that the AI uses to perform these industrial tasks.
I think it’s closer to 20 years than 5 before the capabilities are developed, and possibly longer until the knowledge/techniques for the necessary manufacturing variants can be adapted to non-human production. And it’s easy to underestimate how long it takes to just build stuff, even when automated.
It’s not clear it’s POSSIBLE to convert enough stuff without breaking humanity badly enough that they revolt and destroy most things. Whether that kills everyone, reverts the world to the bronze age, or actually gets control of the AI is deeply hard to predict. It does seem clear that converting that much matter won’t be quick.
It’s exponential. You’re correct in the first years, badly off near the end.
THAT is a crux. Whether any component of it is exponential or logistic is VERY hard to know until you get close to the inflection. Absent “sufficiently advanced technology” like general-purpose nanotech (able to mine and refine, or to convert existing materials into robots and factories in very short time), there is a limit to how parallel the building of the AI-friendly world can be, and a limit to how fast it can convert.
How severe do you think the logistic growth penalties are? I kinda mentally imagine a world where all desert and similar types of land are covered in solar panels. Mines deeper than humans normally dig are supplying the minerals for further production. Many mines are underwater. The limit at that point is the environment: you have exhausted the available land for further energy acquisition and are limited in what you can do safely without damaging the biosphere.
Somewhere around that point you shift to lunar factories, which are in an exponential growth phase until the lunar surface is covered.
Basically I don’t see the penalties being relevant. There’s enough production to break geopolitical power deadlocks, and enough for a world of “everyone gets their needs and most luxury wants met”, assuming approximately 10 billion humans. The fact that further expansion may slow down isn’t relevant on a human scale.
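For a rough sense of the energy ceiling implied by covering desert-type land in solar, here is a back-of-envelope where every number is my own round assumption rather than a sourced figure (about 30 million km² of arid land, ~200 W/m² time-averaged insolation, ~20% conversion efficiency):

```python
# Rough, assumed inputs; treat the result as order-of-magnitude only.
desert_area_km2 = 30e6           # assumed arid land plausibly available for panels
avg_insolation_w_per_m2 = 200    # assumed day/night, weather-averaged insolation
panel_efficiency = 0.20          # assumed average conversion efficiency

avg_power_tw = desert_area_km2 * 1e6 * avg_insolation_w_per_m2 * panel_efficiency / 1e12
print(f"Average electrical power: ~{avg_power_tw:.0f} TW")
# Current world primary energy use is on the order of 20 TW, so even partial
# coverage sits orders of magnitude above today's consumption.
```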
Do you mean “when can we distinguish an exponential from a logistic curve”? I dunno, but I do know that many things which look exponential turn out to slow down after a finite (and small) number of doublings.
No, I mean what I typed. Try my toy model: factories driven by AGI expanding across the Earth or the Moon. A logistic growth curve explicitly applies a penalty that scales with scale. When do you think this matters, and by how much?
If, say, at 50 percent lunar coverage the penalty is 10 percent, you have a case of basically exponential growth.
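To make that toy model concrete (the functional form and every number here are my assumptions, not claims about real factory growth): use a generalized logistic curve dN/dt = r·N·(1 − (N/K)^ν), where the growth-rate penalty is (N/K)^ν, and pick ν so the penalty is 10 percent at half of capacity.

```python
import math

r = math.log(2)     # assumed growth rate: one doubling per period when unconstrained
K = 1.0             # carrying capacity (e.g. full lunar coverage), normalized to 1
nu = math.log(0.1) / math.log(0.5)   # ~3.32, so the penalty is 10% at 50% coverage

def grow(periods, n0=1e-9, dt=0.01):
    """Euler-integrate pure exponential vs generalized logistic growth."""
    n_exp = n_log = n0
    for _ in range(int(periods / dt)):
        n_exp += dt * r * n_exp
        n_log += dt * r * n_log * (1 - (n_log / K) ** nu)
    return n_exp, n_log

for t in (10, 20, 25, 28, 30):
    e, l = grow(t)
    print(f"t={t:>2} periods   exponential={e:9.3g}   logistic={l:9.3g}   (capacity={K})")
```

On these assumptions the trajectory is essentially indistinguishable from pure exponential growth until the last couple of doublings before capacity, which is the “basically exponential” case described above.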
I mean, that sounds like it would already absolutely fuck up most ecosystems and thus life support.
I agree all of these things are possible and expect such capabilities to develop eventually. I also strongly agree with your premise that having more advanced AI can be a big geopolitical advantage, which means arms races are an issue. However, 5-20 years is not very long. It may be enough to have human-level AGI, but I don’t expect such an AGI will enable feeding an entire country on hydroponics in the event of global nuclear war.
In any case, that’s not even relevant to my point, which is that, while AI does enable nuclear bunkers, ICBM defense, and hydroponics, in the short term it enables other things a lot more, including things that matter geopolitically. For a country with a large advantage in AI capabilities pursuing geopolitical goals, it seems a bad choice to use nuclear weapons, or to invest in precautions against such weapons in the hope of being better off in the aftermath.
Rather, I expect the main geopolitically relevant advantages of AI superiority to be economic and political power, which give an advantage both domestically (the ability to organize) and for influencing geopolitical rivals. I think resorting to military power (let alone nuclear war) will not be the best use of AI superiority. Economic power would arise from increased productivity due to better coordination, as well as from the ability to surveil the population. Political power abroad would arise from that economic power, from collecting data about citizens and using it to predict their sentiments, and from propaganda. AI superiority strongly benefits from having meaningful data about the world and other actors, as well as a good economy and stable supply chains. These things go out the window in a war. I also expect war to be a lot less politically viable than using the other advantages of AI, which matters.
5-20 years is to the date of the first general model that can be asked to do most robotics tasks and has a decent chance of accomplishing them zero-shot in the real world. For the rest, the backend simulator learns from unexpected outcomes, the model trains on the updated simulator, and eventually it succeeds in the real world as well.
It is also incremental: once the model can do a task at all in the real world, the simulator continues to update, and in training the model continues to learn policies that perform well on the updated sim, thus increasing real-world performance until it is close to the maximum possible given the goal heuristic and hardware limitations.
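A minimal stand-in for that loop, with everything deliberately toy-sized: one hidden physics parameter instead of a real simulator, and a one-line “policy” instead of RL training. It only shows the shape of the iteration (act in the real world, update the sim from the surprise, retrain, repeat); none of it is a claim about how an actual pipeline is built.

```python
import random

# Toy stand-in: one hidden "real world" parameter instead of a physics
# simulator, and a one-line "policy" instead of an RL-trained model.
FRICTION_REAL = 2.7       # hidden real-world parameter the simulator must learn
friction_sim = 1.0        # simulator's initial (wrong) estimate
TARGET_DISTANCE = 1.0

def train_policy_in_sim(friction_estimate):
    """'Training': choose the force that moves the block TARGET_DISTANCE in sim."""
    return friction_estimate * TARGET_DISTANCE

def real_world_rollout(force):
    """Execute the action in the 'real world', with a little sensor noise."""
    return force / FRICTION_REAL + random.gauss(0, 0.01)

for step in range(8):
    force = train_policy_in_sim(friction_sim)                 # retrain on current sim
    distance = real_world_rollout(force)                      # attempt the task for real
    friction_sim += 0.5 * (force / distance - friction_sim)   # sim learns from the outcome
    print(f"iteration {step}: real-world error = {abs(distance - TARGET_DISTANCE):.3f}")
```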
Once said model exists, exponential growth is inevitable, but I am not claiming instant hydroponics or anything else.
Also note that the exponential growth may have a doubling time on the order of months to years because of payback delays. (Every power generator first has to pay back the energy used to build it; with solar this is fairly slow. Every factory first has to pay back the machine time used to build all the machines in the factory, etc.)
So it only becomes crazy once the base value being doubled is large.
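A sketch of how the payback delay above sets the doubling time, with assumed numbers (an energy payback time of 1.5 years per panel, full reinvestment of surplus output, no other constraints):

```python
import math

# Assumed numbers, purely illustrative: a generator needs `payback_years` of its
# own output to repay the energy spent building it, and all surplus output is
# reinvested in building more generators.
payback_years = 1.5      # assumed energy payback time per solar panel
fleet0 = 1000.0          # starting fleet size (arbitrary)
years, dt = 10, 0.01

fleet = fleet0
for _ in range(int(years / dt)):
    fleet += dt * fleet / payback_years   # new capacity bought with this step's surplus

print(f"growth over {years} years: {fleet / fleet0:.0f}x")
print(f"implied doubling time: {payback_years * math.log(2):.2f} years")
```

With full reinvestment the doubling time comes out to roughly the payback time times ln 2, i.e. about a year under these assumptions, consistent with the months-to-years range above.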
As for the rest: I agree, economic superiority is what you want in the immediate future. I am just saying that “don’t build ASI or we nuke!” threats have to be dealt with, and in the long term, “we refuse to build ASI and we feel safe with our nuclear arsenal” is a losing strategy.