Simulation of technological progress (work in progress)
I’ve made a model/simulation of technological progress that you can download and run on your laptop.
My goal is to learn something about intelligence explosions, takeoff speeds, discontinuities, human-level milestones, AGI vs. tools, bottlenecks, or something else. I’ll be happy if I can learn something about even one of these things, even if it’s just a minor update and not anything close to conclusive.
So far I’ve just got a very basic version of the model built. It works, but it’s currently unclear what—if anything—we can learn from it. I need to think more about whether the assumptions it uses are realistic, and I need to explore the space of parameter settings more systematically.
I’m posting it here to get feedback on the basic idea, and maybe also on the model so far if people want to download it and play around. I’m particularly interested in evidence/arguments about whether or not this is a productive use of my time, and arguments that some hidden assumption my model makes is problematically determining the results.
If you want to try out the model yourself, download NetLogo here and then open the file in this folder.
How the model works:
The main part of the model consists of research projects, each of which is a list of tasks of various types. Civilization works through tasks to finish research projects, and when a project is finished, civilization gets a “bonus” which allows it to do new types of task, and to do some old types faster.
The projects, the lists of tasks needed to complete them, the speeds at which civilization can do the tasks, and the bonuses granted by completing projects are all randomly generated, typically using exponential distributions and often with parameters you can change in the UI. Other important parameters can also be changed in the UI, such as how many task types are “off limits” for technological improvement, and how many task types are “temporarily off limits” until some specified level of technology is reached.
As explained so far, the model represents better technology leading to more research directions (more types of task become available) and faster progress (civilization can do tasks in less time).
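To make that concrete, here is a rough Python sketch of the core setup. (The actual model is written in NetLogo; the names, numbers, and distributions below are illustrative stand-ins rather than the real code.)

```python
import random

N_TASK_TYPES = 100   # illustrative; in the real model this is set in the UI

# Civilization's current speed at each task type (0 = can't do that type yet).
task_speed = [random.expovariate(1.0) if random.random() < 0.5 else 0.0
              for _ in range(N_TASK_TYPES)]

def new_project(mean_tasks=20.0):
    """A project is just a list of task types, all of which must be completed."""
    n_tasks = 1 + int(random.expovariate(1.0 / mean_tasks))
    return [random.randrange(N_TASK_TYPES) for _ in range(n_tasks)]

def time_to_complete(project, speeds=task_speed):
    """A project is doable only if every task type in it can currently be done."""
    if any(speeds[t] == 0.0 for t in project):
        return float("inf")   # blocked until some bonus unlocks the missing type
    return sum(1.0 / speeds[t] for t in project)
```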
Projects are displayed as dots/stars which flicker as work is done on them. When they complete, they turn into big green circles. Their location in the display represents how difficult they are to complete: the x-axis encodes how many tasks are involved, and the y-axis encodes how many different kinds of tasks are involved. To the left of the main display is a graph that tracks a bunch of metrics I’ve deemed interesting, scaled so that they all have similar heights.
There are several kinds of diminishing returns and several kinds of increasing returns in the model.
Diminishing:
Projects farther from the bottom-left corner of the display require exponentially more tasks and exponentially more kinds of tasks.
Project bonuses work as follows: Pick a random bag of tasks, pick random speeds for each task, compare the results to the current state-of-the-art speeds on those tasks, and update the state of the art accordingly. Thus, the faster civilization already is at completing a task, the more projects will need to be completed (on average) to improve speed at that task further. (See the code sketch after this list.)
Finally, there is the usual “low-hanging fruit” effect where the projects which can get done quickly do so, leaving harder and harder projects remaining.
Increasing:
The more projects you do, the more bonuses you get. This makes you faster at completing projects...
And opens up new projects to work on, some of which will be low-hanging fruit.
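Continuing the illustrative Python sketch from above, the bonus mechanism looks roughly like this (again, the particular numbers and distributions are stand-ins, not the NetLogo code):

```python
import random

def apply_bonus(task_speed, n_affected=5, speed_scale=1.0):
    """Bonus from finishing a project: draw candidate speeds for a random bag of
    task types and keep whichever is better, old or new.  The faster civilization
    already is at a task, the less likely a fresh draw beats the state of the art
    (diminishing returns); a draw for a task whose speed is still 0 unlocks that
    task type (increasing returns)."""
    affected = random.sample(range(len(task_speed)), k=n_affected)
    for t in affected:
        candidate = random.expovariate(1.0 / speed_scale)
        task_speed[t] = max(task_speed[t], candidate)
```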
The model also has a simple module representing the “economic” side of things: over time, civilization can work on a greater number of projects simultaneously, if you choose. I have a few different settings representing different scenarios (sketched in code after the list):
“All projects all the time” represents a situation where science has loads of funding and/or excellent planning of research paths, so that the only constraints on finishing projects are whether and how fast the tasks involved can be done.
“100 doable projects” represents a situation with a fixed research budget: 100 doable projects are being worked on at any given time.
Scaling effort with projectscompleted represents “learning by doing”: the more projects have been completed so far, the more effort is invested in doing more projects.
Scaling effort with machinetaskspeed represents a situation where the amount of effort devoted to science is proportional to how advanced today’s tech is on average.
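In the same illustrative Python, the four settings amount to different rules for how many projects get worked on at once (the setting names and exact scaling rules here are stand-ins for what’s in the NetLogo UI):

```python
def effort(setting, projects_completed, n_doable, mean_machine_speed):
    """How many projects civilization works on simultaneously this tick."""
    if setting == "all-projects-all-the-time":
        return n_doable                                   # no resource constraint
    if setting == "100-doable-projects":
        return min(100, n_doable)                         # fixed research budget
    if setting == "scale-with-projectscompleted":
        return min(n_doable, 1 + projects_completed)      # learning by doing
    if setting == "scale-with-machinetaskspeed":
        return min(n_doable, max(1, round(mean_machine_speed)))  # effort tracks tech
    raise ValueError(f"unknown setting: {setting}")
```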
The “info” tab of the NetLogo file explains things in more detail, if you are interested.
What tends to happen when I run it:
The model tends to produce progress (specifically, in the metric of “projects completed”—see the log plot) somewhere between exponential and superexponential. Sometimes it displays what appears to be a clear exponential trend (a very straight line on the log scale) that fairly rapidly transitions into a singularity (a vertical line on the log scale).
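To pin down the terminology (this is standard usage, not something the model outputs): an exponential trend is a straight line on a log plot, while the “singularity” shape is hyperbolic growth, which blows up at a finite time:

\[
x_{\text{exp}}(t) = x_0\, e^{kt}, \qquad x_{\text{hyp}}(t) = \frac{C}{t_* - t}.
\]

On a log plot the hyperbola bends upward and goes vertical as t approaches t_*.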
Interestingly, progress in the metric “% of tasks done faster thanks to research” is not typically exponential, much less a singularity; it is usually a jumpy but more or less linear march from 0% to 100%.
Sometimes progress stagnates, though I’ve only seen this happen extremely early on—I’ve never seen steady exponential growth followed by stagnation.
For a while it seemed that progress would typically shoot through the roof around the time that almost all tasks were doable & being improved. This is what Amdahl’s Law would predict, I think: Get rid of the last few bottlenecks and progress will soar. However, I now think that’s wrong; the growth still happens even if a substantial fraction of tasks are “off-limits,” and/or off-limits temporarily. I’m not sure what to think now, but after I give my head a rest I expect ideas will come.
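For reference, the Amdahl-style reasoning I had in mind: if a fraction p of the work can be sped up by a factor s and the remaining fraction 1 − p cannot be improved at all, the overall speedup is

\[
\text{speedup} = \frac{1}{(1-p) + p/s},
\]

which is capped at 1/(1 − p) no matter how large s gets; the cap only disappears as the un-improvable fraction shrinks to zero, which is why I expected progress to take off once almost every task type was improvable.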
The various parameter settings I’ve put into the model seem to have surprisingly little effect on all of the above. They affect how long everything takes, but rarely do they affect the fundamental shape of the trajectory. In particular, I predicted that removing the “effort feedback loop” entirely (by choosing “all projects all the time” or “100 doable projects”) would slow down progress a lot, but in practice we still seem to get singularities. Of course, I haven’t systematically compared the results; this is just the vague impression I get from the handful of different runs I’ve done.
Doubts I have about the accuracy of the model & ideas for things to add
Most importantly, there are probably major flawed assumptions I’ve made in building this model that I haven’t realized yet. I don’t know what I don’t know.
I worry that the results depend in a brittle fashion on the probability distributions and ranges that I use to generate the projects. In other words, I’m worried that the results, though robust to the parameters I’ve put in the UI, are not robust to hidden assumptions that I made hastily.
Often, if you can’t do a project one way, there is another path by which you can do it. For example, we can use brute-force search on a computer plus a small amount of thinking to replace a larger amount of a different kind of thinking. But in my model, every project has a single list of tasks that must be done to complete it. Is this a problem?
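One minimal way to relax that assumption, if I ever do, would be to give each project several alternative recipes and count it as finished as soon as the cheapest currently-doable one is finished. Continuing the illustrative Python sketch (none of this is in the current model):

```python
def time_to_complete_any(alternative_recipes, speeds):
    """A project with several alternative task lists: it is done as soon as the
    cheapest recipe that is currently doable gets finished."""
    def recipe_time(recipe):
        if any(speeds[t] == 0.0 for t in recipe):
            return float("inf")       # this particular path is blocked for now
        return sum(1.0 / speeds[t] for t in recipe)
    return min(recipe_time(r) for r in alternative_recipes)
```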
Maybe I should try to build a version of this model that gets exponential growth followed by stagnation, or at least some sort of big s-curve? Maybe the reason I haven’t seen this behavior so far is that I haven’t put much effort into looking for it.
Ultimately I’d like to have a much richer economy in the model, with different factions buying and selling things as they separately advance up the tech tree. This sounds hard to implement in code, but maybe it’s easier than I think. Maybe economics can help me with this.
Currently I have to wait a minute or so each run, depending on settings (some runs happen in just a few seconds). This is my very first coding project so the code is probably atrocious and could run significantly faster if I optimized it. If I could run it many times, I’d learn how much the results vary from run to run due to randomness.
You might be producing some useful info, but mostly about whether an arbitrary system exhibits unlimited exponential growth. If you got 1000 different programmers to each throw together some model of tech progress (some based on completing tasks, some on extracting resources, some on random differential equations, etc.), you could see what proportion of them give exponential growth and then stagnation. Actually, there isn’t a scale on your model, so who can say whether running out of tasks, or stagnation, happens next year or in 100,000 years. At best, you will be able to tell how strongly outside-view priors should favor exponential growth over growth-then-decay. (Pure growth is clearly simpler, but how much simpler?)
Yeah, that’s sorta my hope. The model is too abstract and disconnected from real-world numbers to be able to predict things like “The singularity will happen in 2045” but maybe it can predict things like “If you’ve had exponential growth for a while, it is very unlikely a priori / outside-view that growth will slow, and in fact quite likely that it will accelerate dramatically. Unless you are literally running out of room to grow, i.e. hitting fundamental physical limits in almost all endeavors.”
This was an interesting post, it got me thinking a bit about the right way to represent “technology” in a mathematical model.
I think I have a pretty solid qualitative understanding of how technology changes impact economic production—constraints are the right representation for that. But it’s not clear how that feeds back into further technological development. What qualitative model structure captures the key aspects of recursive technological progress?
A few possible threads to pull on:
Throwing economic resources at research often yields technological progress, but what’s the distribution of progress yielded by this?
Some targeted, incremental research is aimed at small changes to parameters of production constraints—e.g. cutting the amount of some input required for some product by 10%. That sort of thing slots nicely into the constraints framework, and presumably throwing more resources at research will result in more incremental progress (though it’s not clear how quickly marginal returns decrease/increase with research investments).
There are often underlying constraints to technologies themselves—i.e. physical constraints. It feels like there should be an elegant way to represent these in production-space, via duality (i.e. constraints on production are dual to production, so constraints on the constraints should be in production space).
Related: in cases of “discrete” technological progress, it feels like there’s usually an underlying constraint on a broad class of technologies. So representing constraints-on-constraints is important to capturing jumps in progress.
If there are production constraints and constraints on the constraints, presumably we could go even more meta, but at the moment I can’t think of any useful meaning to higher meta-levels.
Very interesting :)
I suspect the model is making a hidden assumption about the lack of “special projects”; e.g. the model assumes there can’t be a single project that yields a bonus that makes all the other projects’ tasks instantly solvable?
Also, I’m not sure that the model allows us to distinguish between scenarios in which a major part of overall progress is very local (e.g. happens within a single company) and more Hansonian scenarios in which the contribution to progress is well distributed among many actors.
Yeah, I tried to build the model with certain criticisms of the intelligence explosion argument in mind—for example, the criticism that it assumes intelligence is a single thing rather than a diverse collection of skills, or the criticism that it assumes AGI will be a single thing rather than a diverse collection of more specific AI tools, or the criticism that it assumes takeoff will happen after human level but not before. My model makes no such assumptions, but it still gets intelligence explosion. I think this is an already somewhat interesting result, though not a major update for me since I didn’t put much credence in those objections anyway.
Currently the model just models civilization’s progress overall, so yeah it can’t distinguish between local vs. distributed takeoff. I’m hoping to change that in the future, but I’m not sure how yet.
Eyeballing the graphs you produced, it looks like the singularities you keep getting are hyperbolic growth, which we already have in real life (compare log(world GDP) to your graph of log(projects completed); their shapes are almost identical).
So far as I can tell, what you’ve shown is that you almost always get a big speedup of hyperbolic growth as AI advances but without discontinuities, which is what the ‘continuous takeoff’ people like Christiano already say they are expecting.
So perhaps this is evidence of continuous takeoff still being quite fast.
Yes, thanks! I mostly agree with that assessment,* though as an aside I have a beef with the implication that Bostrom, Yudkowsky, etc. expect discontinuities. That beef is with Paul Christiano, not you. :)
So far, the biggest update for me, I think, is that the model seems to show that it’s quite possible to get an intelligence explosion even without economic feedback loops. Like, even with a fixed compute/money budget—or even with a fixed number of scientists and a fixed amount of research funding—we could get a singularity. At least in principle. This is weird, because I am pretty sure I remember reading that the growth we’ve seen so far is best explained via an economic feedback loop: better technology allows for a bigger population and economy, which allows for more scientists and funding, which allows for better technology. So I’m a bit confused, I must say—my model is giving me results I would have predicted wouldn’t happen.
*There have been a few cases where the growth didn’t look hyperbolic, but rather like a steady exponential trend that then turns into a singularity. World GDP, by contrast, has what looks like at least three exponential trends in it, such that it is more parsimonious to model it as hyperbolic growth. I think.
I should add though that I haven’t systematically examined these graphs yet, so it’s possible I’m just missing something—e.g. it occurs to me right now that maybe some of these graphs I saw were really logistic functions rather than hyperbolic or exponential-until-you-hit-limits. I should make some more and look at them more carefully.
It seems like your model doesn’t factor in legality. We have a lot more laws that add bureaucracy to technology development today than we had 50 years ago.