I suspect the model is making a hidden assumption that there are no “special projects”; e.g. it assumes there can’t be a single project whose completion yields a bonus that makes all the other projects’ tasks instantly solvable?
Also, I’m not sure that the model allows us to distinguish between scenarios in which a major part of overall progress is very local (e.g. happens within a single company) and more Hansonian scenarios in which contributions to progress are widely distributed among many actors.
Yeah, I tried to build the model with certain criticisms of the intelligence explosion argument in mind: for example, the criticism that it assumes intelligence is a single thing rather than a diverse collection of skills; the criticism that it assumes AGI will be a single thing rather than a diverse collection of more specific AI tools; and the criticism that it assumes takeoff will happen after human level but not before. My model makes none of those assumptions, but it still gets an intelligence explosion. I think that’s already a somewhat interesting result, though not a major update for me, since I didn’t put much credence in those objections anyway.
Currently the model just models civilization’s progress overall, so yeah it can’t distinguish between local vs. distributed takeoff. I’m hoping to change that in the future, but I’m not sure how yet.
Eyeballing the graphs you produced, it looks like the singularities you keep getting are hyperbolic growth, which we already have in real life: compare log(world GDP) to your graph of log(projects completed); their shapes are almost identical.
So far as I can tell, what you’ve shown is that you almost always get a big speedup of hyperbolic growth as AI advances but without discontinuities, which is what the ‘continuous takeoff’ people like Christiano already say they are expecting.
AI is just another, faster step in the hyperbolic growth we are currently experiencing, which corresponds to a further increase in rate but not a discontinuity (or even a discontinuity in rate).
So perhaps this is evidence of continuous takeoff still being quite fast.
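To make the hyperbolic-vs-exponential distinction concrete, here is a minimal sketch (my own construction; `t_star` and `r` are arbitrary illustrative parameters, not fit to any data). The diagnostic: an exponential’s proportional growth rate is constant, while a hyperbola’s rises without bound as it approaches its singularity date.

```python
import math

# Illustrative parameters (arbitrary, not fit to anything).
t_star = 100.0  # hypothetical singularity date for the hyperbola
r = 0.05        # hypothetical exponential growth rate

def hyperbolic(t):
    """Finite-time blowup: y = 1 / (t_star - t)."""
    return 1.0 / (t_star - t)

def exponential(t):
    """Ordinary exponential growth: y = exp(r * t)."""
    return math.exp(r * t)

def growth_rates(f, ts):
    """Per-step proportional growth rate, log(f(t+1) / f(t))."""
    return [math.log(f(t + 1) / f(t)) for t in ts]

ts = range(90)
hyp_rates = growth_rates(hyperbolic, ts)
exp_rates = growth_rates(exponential, ts)

# exp_rates stays flat at r, while hyp_rates climbs without bound as t
# approaches t_star. On a log plot that climb shows up as the upward
# curvature visible in both log(world GDP) and log(projects completed).
```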
Yes, thanks! I mostly agree with that assessment,* though as an aside I have a beef with the implication that Bostrom, Yudkowsky, etc. expect discontinuities. That beef is with Paul Christiano, not you. :)
The biggest update for me so far, I think, is that the model seems to show that it’s quite possible to get an intelligence explosion even without economic feedback loops. Even with a fixed compute/money budget, or even with a fixed number of scientists and a fixed amount of research funding, we could get a singularity, at least in principle. This is weird, because in practice I’m pretty sure I remember reading that the growth we’ve seen so far is best explained via an economic feedback loop: better technology allows for a bigger population and economy, which allows for more scientists and funding, which allows for better technology. So I’m a bit confused, I must say: my model is giving me results I would have predicted wouldn’t happen.
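As a toy illustration of how fixed inputs can still produce accelerating, singularity-like growth (to be clear: this is my own minimal construction, not the actual model from the post, and every parameter is arbitrary). The only feedback loop is technological: completing a project permanently raises capability, which makes all future work cheaper, while the research budget never grows.

```python
def run(steps, budget=10.0, task_cost=100.0, boost=1.05):
    """Toy simulation: fixed research budget, compounding capability."""
    capability = 1.0   # work accomplished per unit of budget
    completed = 0      # projects finished so far
    progress = 0.0     # work banked toward the next project
    history = []
    for _ in range(steps):
        # The budget is fixed forever; only capability compounds.
        progress += budget * capability
        while progress >= task_cost:
            progress -= task_cost
            completed += 1
            capability *= boost  # each finished project makes work easier
        history.append(completed)
    return history

history = run(200)
# Completions accelerate: most projects finish in the final stretch of
# steps, even though "scientists and funding" (the budget) never changed.
```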
*There have been a few cases where the growth didn’t look hyperbolic, but rather like a steady exponential trend that then turns into a singularity. World GDP, by contrast, contains what look like at least three successive exponential trends, such that it is more parsimonious to model it as hyperbolic growth. I think.
I should add, though, that I haven’t systematically examined these graphs yet, so it’s possible I’m just missing something. E.g. it occurs to me right now that maybe some of the graphs I saw were really logistic functions rather than hyperbolic or exponential-until-you-hit-limits. I should make some more and look at them more carefully.
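One crude way to check the logistic worry: look at how a series’ proportional growth rate changes over time. Roughly constant suggests exponential, rising suggests hyperbolic, falling suggests logistic (approaching a ceiling). This is a heuristic of my own with arbitrary thresholds and synthetic test curves, just to show the diagnostic, not something from the post.

```python
import math

def classify(series):
    """Compare early vs. late proportional growth rates (crude heuristic)."""
    rates = [math.log(b / a) for a, b in zip(series, series[1:])]
    third = len(rates) // 3
    early = sum(rates[:third]) / third
    late = sum(rates[-third:]) / third
    if late > 1.5 * early:
        return "hyperbolic-ish"   # rate accelerating
    if late < 0.67 * early:
        return "logistic-ish"     # rate decaying toward a ceiling
    return "exponential-ish"      # rate roughly constant

# Three synthetic curves with arbitrary parameters.
ts = [t / 10 for t in range(1, 80)]
exp_series = [math.exp(0.3 * t) for t in ts]
hyp_series = [1.0 / (9.0 - t) for t in ts]                      # blows up at t = 9
log_series = [100.0 / (1.0 + 99.0 * math.exp(-t)) for t in ts]  # capacity 100
```

On real or windowed data this needs care, since a logistic looks exponential early on and an exponential-until-limits series looks fine until the limit bites; but it’s enough to eyeball which regime a simulated run is in.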
Very interesting :)