If the returns k are >1 (i.e. obtaining one unit of 'intelligence' makes you better able to increase your intelligence by more than one unit), then intelligence grows exponentially and you get an intelligence explosion. If returns are <1, then each unit of intelligence becomes harder and harder to obtain, and there is a smooth, exponentially decaying convergence to some maximum intelligence level: an intelligence fizzle. If returns are exactly 1, there is steady linear growth in intelligence: an intelligence combustion. Of course, sustained returns of exactly 1 are vanishingly unlikely in practice, but it is possible that the average returns are 1 over some relevant range[1].
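For concreteness, here is a minimal sketch of that recurrence (my own framing, not code from the original post): assume each self-improvement step yields k times the gain of the previous step, so total intelligence is a geometric series in k.

```python
def simulate(k, initial_gain=1.0, steps=50):
    """Toy model: each self-improvement step yields k times the previous gain."""
    intelligence, gain = 0.0, initial_gain
    for _ in range(steps):
        intelligence += gain
        gain *= k  # returns to intelligence
    return intelligence

for k in (1.1, 1.0, 0.9):
    print(f"k={k}: intelligence after 50 steps = {simulate(k):.2f}")
# k > 1: gains compound, exponential growth (explosion)
# k = 1: constant gains, linear growth (combustion)
# k < 1: gains shrink geometrically, converging to initial_gain / (1 - k) (fizzle)
```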
Another problem with this model is that it's highly likely that returns to intelligence vary across different cognitive domains. To a first approximation, the cognitive domains relevant to us are:
Very subhuman
Subhuman
Near human
Par human
Peak human
Superhuman
Strongly superhuman
I see no compelling reason to expect, a priori, that returns to intelligence behave smoothly across the aforementioned domains rather than being described by different curves in different domains. At the least, I expect that to be true for some task/problem domains of interest.
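As a purely hypothetical illustration (the thresholds and values below are my own assumptions, and the recurrence is the same toy model as above), the return parameter k could itself be a different function of capability in each of the regimes listed:

```python
def returns(capability):
    """Hypothetical piecewise return curve: k varies by capability regime."""
    if capability < 0.5:    # very subhuman / subhuman: hard to bootstrap
        return 0.8
    elif capability < 1.0:  # near-human / par-human
        return 1.05
    elif capability < 2.0:  # peak-human
        return 1.2
    else:                   # superhuman and beyond: returns might fall off again
        return 0.95

def simulate_piecewise(initial_gain=0.05, steps=100):
    capability, gain = 0.1, initial_gain
    for _ in range(steps):
        capability += gain
        gain *= returns(capability)  # k now depends on the current regime
    return capability
```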
Altogether, I’m very dissatisfied with “Intelligence Explosion Microeconomics”; it seems very spherical-cow-esque.
Yes, definitely. Pretty much the main regions of interest to us are from par-human up. Returns are almost definitely not consistent across scales. But what really matters for x-risk is whether they are positive or negative around current or near-future ML models, i.e. whether existing models, or the AGIs we create in the next few decades, can self-improve to superintelligence or not.
I’m curious what you think about my post expressing scepticism about the relevance of recursive self-improvement to the deep learning paradigm.