I have a moment so I’ll summarize some of my thinking here for the sake of discussion. It’s a bit more fleshed out at the link. I don’t say much about AI capabilities directly since that’s better-covered by others.
In the first broad scenario, AI contributes to normal economic growth and social change. Key drivers limit the size and term of bets industry players are willing to make: [1A] the frontier is deeply specialized into a particular paradigm, [1B] AI research and production depend on lumpy capital projects, [1C] firms have difficulty capturing profits from training and running large models, and [1D] returns from scaling new methods are uncertain.
In the second, AI drives economic growth, but bottlenecks in the rest of the economy limit its transformative potential. Key drivers relate to how much AI can accelerate the non-AI inputs to AI research and production: [2A] limited generality of capabilities, [2B] limited headroom in capabilities, [2C] serial physical bottlenecks, and [2D] difficulty substituting theory for experiment.
Indicators (hypothetical observations that would lead us to expect these drivers to have more influence) include:
In AI, specialized methods, hardware, and infrastructure dominate their general-purpose counterparts. (+1A)
Training and deployment use different specialized infrastructure. (+1A, +1B)
Generic progress in the semiconductor industry only marginally advances AI hardware. (+1A)
Conversely, advances in AI hardware are difficult to repurpose for the rest of the semiconductor industry. (+1A)
Specialized hardware production is always scaling to meet demand. (+1A)
Research progress is driven chiefly by what we learn from the largest and most expensive projects. (+1B, +1D, +2D)
Open-source models and second-tier competitors lag the state of the art by around one large training run. (+1C, +1D)
Once expensive models are proven, small models can be trained that achieve nearly comparable results at much lower cost. (+1C, +1D)
Progress in capabilities at the frontier originates from small-scale experiments or theoretical developments several years prior, brought to scale at some expense and risk of failure, as is the status quo in hardware. (+1D, +2D)
Progress in AI is very uneven or even nonmonotonic across domains—each faces different bottlenecks that are addressed individually. (+2A)
Apparent technical wins are left on the table, because they only affect a fraction of performance and impose adoption costs on the entire system. (+1A, +2B, +2C)
The semiconductor industry continues to fragment. (+2B)
More broadly, semiconductor industry trends continue, particularly the cost and time trends (exponential, with diminishing returns). (+2A, +2B, +2C)
Semiconductor industry roadmaps are stable and continue to extend 10–15 years out. (+2C, +2D)
Negative indicators (indicating that these drivers have less influence) include:
The same hardware pushes the performance frontier not only for AI training and inference but also for more traditional high-performance computing. (–1A)
Emerging hardware technologies like exotic materials for neuromorphic computing successfully attach themselves as adjuncts to general-purpose silicon processes, gaining a self-sustaining route to scale. (–1A, –2B)
Training runs use as much compute as they can afford; there’s always a marginal stock of hardware that can be repurposed for AI as soon as AI applications become slightly more economical. (–1A, –1B)
AI industry players engage in pre-competitive collaboration, for example setting interoperability standards or jointly funding the training of a shared foundation model. (–1B)
Alternatively, early industry leaders establish monopolistic advantages over the rest of the field. (–1B, –1C)
AI training becomes more continuous, rather than something one “pulls the trigger” on. Models see large benefits from “online” training as they’re being used, as compared with progress from model to model. (–1B)
Old models have staying power, perhaps being cheaper to run or tailored to niche applications. (–1C)
Advances in AI at scale originate from experiments or theory and are applied at scale within a few years with relatively little trouble, as is the status quo in software. (–1D, –2D)
The leading edge features different AI paradigms or significant churn between methods. (–1A, –1D)
The same general AI is broadly deployed in different domains, industry coordination is strong (through monopoly or standardization), and upgrades hit many domains together. (–2A)
Evidence builds that a beyond-silicon computing paradigm could deliver performance beyond the roadmap for the next 15 years of silicon. (–2B)
New semiconductor consortia arise, for example producing consensus chiplet or heterogeneous integration standards, making it easier for firms in a fragmented industry to keep building on one another's work. (–1A, –2C)
Spatial/robotics problems in particular—proprioception, navigation, manipulation—are solved. (–2C)
Fusion power becomes practical. (–2C)
AI is applied to experimental design and yields markedly better results than modern methods. (–2B, –2D)
AI research progress is driven by theory. (–1D, –2D)
Breakthroughs make microscopic physical simulation orders of magnitude easier. Molecular dynamics, density functional theory, quantum simulation, and other foundational methods are accelerated by AI while also greatly improving accuracy. (–2B, –2C, –2D)
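For concreteness, here is a rough sketch (my own framing for discussion, not part of the argument above) of how one might tally these indicators against the drivers: each indicator carries signed evidence for the drivers tagged on it, and the indicators you believe you've observed accumulate into a per-driver score. The indicator names and the ±1 weights below are placeholders; in practice you'd weight indicators by how diagnostic they seem.

```python
# Illustrative sketch only: tally observed indicators into per-driver evidence
# for drivers 1A-1D and 2A-2D. Weights of +/-1 are placeholders.
from collections import defaultdict

# A few of the indicators above, each mapped to signed weights on the drivers it bears on.
INDICATORS = {
    "training and deployment use different specialized infrastructure": {"1A": +1, "1B": +1},
    "open-source models lag the frontier by about one large training run": {"1C": +1, "1D": +1},
    "the same hardware serves AI and traditional HPC": {"1A": -1},
    "AI research progress is driven by theory": {"1D": -1, "2D": -1},
}

def tally(observed):
    """Sum signed evidence per driver over the indicators we think we've observed."""
    scores = defaultdict(int)
    for name in observed:
        for driver, weight in INDICATORS[name].items():
            scores[driver] += weight
    return dict(scores)

if __name__ == "__main__":
    seen = [
        "training and deployment use different specialized infrastructure",
        "AI research progress is driven by theory",
    ]
    print(tally(seen))  # e.g. {'1A': 1, '1B': 1, '1D': -1, '2D': -1}
```

The point of the sketch is just that the indicators are not one-to-one with drivers: a single observation can raise one driver's plausibility while lowering another's, so it helps to keep the bookkeeping explicit.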