Mostly agree, but would point out that in assessing the size and speed of scientific progress, one should compare AGI against all of humanity, not only against individual humans.
The speedup I’m talking about is serial, and for the purposes of scientific progress maybe only about 100,000-1,000,000 humans are relevant; possibly just 10,000 would do if all researchers were von Neumann level. This maps to hardware for running inference of that many instances of AGI in parallel, which seems quite feasible if an AGI instance doesn’t need much more than an LLM. Learning doesn’t need to worry about latency, so it’s a weaker constraint than inference. (This is an exploratory engineering sketch, so everything here is a plausible lower bound, not a prediction.)
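As a rough illustration of why inference at that scale looks feasible, here is a back-of-envelope sketch in Python. Every parameter (per-token inference cost, token rate at human thinking speed, serial speedup, accelerator throughput) is an illustrative assumption, not a figure from the comment.

```python
# Back-of-envelope estimate of accelerators needed to run N AGI instances
# in parallel at a large serial speedup, assuming each instance costs about
# as much as LLM inference. All numbers below are illustrative assumptions.

def accelerators_needed(
    n_instances: int,             # parallel AGI instances (researcher-equivalents)
    speedup: float,               # serial speedup over human thinking speed
    flops_per_token: float,       # inference cost per token (assumed LLM-like)
    tokens_per_sec_human: float,  # token rate of one instance at human speed
    accel_flops: float,           # sustained FLOP/s of one accelerator
) -> float:
    total_flops = n_instances * speedup * tokens_per_sec_human * flops_per_token
    return total_flops / accel_flops

# Assumed values: ~1e12 FLOPs/token (large LLM), 10 tokens/s at human speed,
# 100x serial speedup, 1e15 FLOP/s sustained per accelerator.
for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9} instances -> ~{accelerators_needed(n, 100, 1e12, 10, 1e15):,.0f} accelerators")
```

Under these assumptions the accelerator count stays roughly on the order of the instance count, i.e. within the scale of existing large clusters for the 10,000-100,000 range.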
The main bottleneck is the capability of existing fabs, but once it’s overcome, manufacturing scale falls to the same method. The initial speed advantage should let AGIs figure out how to overcome it very quickly, possibly with the intermediate step of designing better chips for existing fabs to widen that advantage. Traditionally, the proposed method for overcoming the hardware/industry bottleneck is nanotech, but if that’s not feasible there is also macroscopic biotech: designing animal-like objects that grow exponentially as quickly as fruit flies and serve as non-precision parts of factories and as chemical plants, obviating the need to scale infrastructure for manufacturing things like robot arms or buildings. Then it’s a question of using this to produce compute and fusion, which is the step that could take up most of the physical time.
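To put a rough number on the "grows exponentially as quickly as fruit flies" step, here is a minimal doubling-time sketch. The doubling time, starting mass, and target mass are placeholder assumptions chosen only to show the timescale, not claims from the comment.

```python
import math

def time_to_mass(start_kg: float, target_kg: float, doubling_days: float) -> float:
    """Days for exponentially growing biomass to go from start_kg to target_kg."""
    return doubling_days * math.log2(target_kg / start_kg)

# Assumed values: start from 1 kg of engineered biomass, target 1e9 kg
# (a million tonnes of factory/chemical-plant structure), with doubling
# times of a few days, loosely in the range of fruit fly growth rates.
for doubling_days in (2, 5, 10):
    days = time_to_mass(1, 1e9, doubling_days)
    print(f"doubling every {doubling_days:>2} days -> ~{days:.0f} days to 1e9 kg")
```

Even with these generous growth assumptions, reaching industrial scale takes on the order of months, consistent with the point that this physical buildout, rather than the research, is where most of the wall-clock time would go.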