Update to ‘Modelling Continuous Progress’
In this post I made an attempt to model intelligence explosion dynamics, making the very oversimplified exponential-returns-to-exponentially-increasing-intelligence model used by Bostrom and Yudkowsky slightly less oversimplified.
This post tries to build on a simplified mathematical model of takeoff which was first put forward by Eliezer Yudkowsky and then refined by Bostrom in Superintelligence, modifying it to account for the different assumptions behind continuous, fast progress as opposed to discontinuous progress. As far as I can tell, few people have touched these sorts of simple models since the early 2010s, and no-one has tried to formalize how newer notions of continuous takeoff fit into them. I find that it is surprisingly easy to accommodate continuous progress and that the results are intuitive and fit with what has already been said qualitatively about continuous progress.
The page includes python code for the model.
This post doesn’t capture all the views of takeoff—in particular it doesn’t capture the non-hyperbolic faster growth mode scenario, where marginal intelligence improvements are exponentially increasingly difficult, and therefore we get a (continuous or discontinuous switch to a) new exponential growth mode rather than runaway hyperbolic growth.
But I think that by modifying the f(I) function that determines how RSI capability varies with intelligence, we can incorporate such views.
In the context of the exponential model given in the post, that would correspond to an f(I) function where
f(I) = \frac{1}{I\left(1 + e^{-d(I(t) - I_{AGI})}\right)}
which would result in a continuous switch (with the sharpness of the transition determined by the size of d) to a single faster exponential growth mode.
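As a concrete illustration, here is a minimal sketch of that variant. It is not the post's actual code: it assumes the RSI term enters the intelligence equation as f(I)·I² (the form that gives hyperbolic growth when f(I) is a plain sigmoid), so that with the modified f(I) above the term reduces to I/(1 + e^(-d(I(t) - I_AGI))) and growth settles into a faster exponential mode instead of blowing up. All parameter values are illustrative.

```python
# Minimal sketch (assumed form, not the post's actual code).
# Assumed intelligence ODE: dI/dt = c*exp(s*t) + f(I)*I**2, where the first
# term is conventional (non-RSI) progress and f(I) is the modified function
# above.  With f(I) = 1/(I*(1 + exp(-d*(I - I_AGI)))), the RSI term becomes
# I/(1 + exp(-d*(I - I_AGI))): a switch to a faster exponential growth mode.
import math

c, s = 0.1, 0.05         # illustrative parameters for the conventional term
d, I_AGI = 2.0, 5.0      # sigmoid sharpness and the intelligence level where RSI kicks in

def f(I):
    return 1.0 / (I * (1.0 + math.exp(-d * (I - I_AGI))))

def step(I, t, dt):
    # One explicit Euler step of the assumed ODE.
    return I + dt * (c * math.exp(s * t) + f(I) * I ** 2)

I, dt, steps = 1.0, 0.001, 60_000   # integrate for 60 time units
for k in range(steps):
    I = step(I, k * dt, dt)
print(f"capability after 60 time units: {I:.3g}")  # finite: exponential, not hyperbolic
```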
But I think the model still roughly captures the intuition behind scenarios that involve either a continuous or a discontinuous step to an intelligence explosion.
Given the model assumptions, we see how the different scenarios look in practice:
If we plot potential AI capability over time, we can see how no new growth mode (brown) vs a new growth mode (all the rest), the presence of an intelligence explosion (red and orange) vs not (green and purple), and the presence of a discontinuity (red and purple) vs not (orange and green) affect the takeoff trajectory.
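As a hypothetical sketch (not the post's plotting code), trajectories of this kind can be generated from the same assumed ODE as above by toggling the RSI term on or off, switching it between the hyperbolic (I²) and exponential (I) forms, and varying the sigmoid sharpness d; the scenario labels below are mine and will not match the colours in the original figure.

```python
# Hypothetical scenario sweep over the assumed ODE dI/dt = c*exp(s*t) + rsi_term.
import numpy as np
import matplotlib.pyplot as plt

c, s, I_AGI = 0.1, 0.05, 5.0   # illustrative parameters, as in the sketch above

def simulate(rsi, d, hyperbolic, t_max=40.0, dt=0.001):
    """Euler-integrate capability over time for one scenario."""
    ts = np.arange(0.0, t_max, dt)
    I = np.empty_like(ts)
    I[0] = 1.0
    for k in range(1, len(ts)):
        i = I[k - 1]
        sigmoid = 1.0 / (1.0 + np.exp(-d * (i - I_AGI)))
        rsi_term = sigmoid * (i ** 2 if hyperbolic else i) if rsi else 0.0
        I[k] = i + dt * (c * np.exp(s * ts[k - 1]) + rsi_term)
        if I[k] > 1e12:  # cut the run off once growth has clearly exploded
            return ts[:k], I[:k]
    return ts, I

scenarios = {
    "no new growth mode": dict(rsi=False, d=1.0, hyperbolic=False),
    "new exponential mode (continuous)": dict(rsi=True, d=0.5, hyperbolic=False),
    "intelligence explosion (continuous)": dict(rsi=True, d=0.5, hyperbolic=True),
    "intelligence explosion (discontinuous)": dict(rsi=True, d=10.0, hyperbolic=True),
}
for label, kwargs in scenarios.items():
    ts, I = simulate(**kwargs)
    plt.plot(ts, I, label=label)
plt.yscale("log")
plt.xlabel("time")
plt.ylabel("potential AI capability (log scale)")
plt.legend()
plt.show()
```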
This also depends on what you mean by capability, correct? Today we have computers that are millions of times faster but only logarithmically more capable. No matter the topic, you get diminishing returns with more capability.
Moreover, if you talk about the AI building ‘rubber hits the road’ real equipment to do things—real actual utility versus the ability to think about things—the AI is up against things like hard limits from thermodynamics and heat dissipation and so on.
So while the actual real-world results could be immense—swarms of robotic systems tearing down all the solid matter in our solar system—the machine is still very much bounded by what physics will permit, and so the graph is only vertical for a brief period of time (the period between ‘technology marginally better than present day’ and ‘can tear down planets with the click of a button’).
Yes, it's very oversimplified—in this case ‘capability’ just refers to whatever enables RSI, and we assume that it's a single dimension. Of course it isn't, but we assume the capability can be modelled this way as a very rough approximation.
Physical limits are another thing the model doesn't cover—you're right to point out that in the intelligence explosion/full RSI scenarios the graph goes vertical only for a time, until some limit is hit.