I’m skeptical of the claim that the only things that matter are the ones that have to be done before AGI.
Ways it could be true:
The rate of productivity growth has a massive step increase once AI can improve its own capabilities without the overhead of collaborating with humans. Generally, the faster the rate of productivity growth, the less valuable it is to do long-horizon work. On this view, for example, people shouldn’t work on climate change today because AGI will instantly invent better renewables.
If we expect short timelines and also a smooth takeoff, then our current rate of productivity growth may already be much higher than it was a few years ago, or have a different shape (e.g. doubly exponential rather than merely exponential). That much higher rate of growth would mean any work with a 3+ year horizon has negligible value (see the sketch after this list for a toy illustration).
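To make the "faster growth devalues long-horizon work" intuition concrete, here is a minimal toy model. It is my own illustration, not part of the original argument: it assumes, hypothetically, that a project becomes trivial once productivity has grown by some factor F, and asks whether doing it by hand today (taking T years) beats simply waiting. All names and numbers (years_until_free, F, T, the growth rates) are made up for the sketch.

```python
# Toy model: at annual productivity growth rate g, how long until a project
# that needs an F-fold productivity boost becomes trivial, and does that beat
# spending T years doing it manually today? (Hypothetical numbers throughout.)
import math

def years_until_free(F: float, g: float) -> float:
    """Years until productivity has grown by a factor of F at annual rate g."""
    return math.log(F) / math.log(1.0 + g)

# Hypothetical: the project needs a 10x productivity boost to become trivial,
# and doing it manually today would take 15 years.
F, T = 10.0, 15.0

for g in (0.02, 0.10, 0.50, 2.00):  # 2% (historical) through 200% (post step change)
    wait = years_until_free(F, g)
    verdict = "worth doing now" if T < wait else "just wait"
    print(f"growth {g:>4.0%}: free in {wait:5.1f} yrs vs {T:.0f} yrs of work -> {verdict}")
```

With these hypothetical numbers the project is worth doing at 2% or 10% annual growth (it would otherwise take ~116 or ~24 years to become free) but not at 50% or 200%, which is the sense in which a step change in growth would wipe out the value of long-horizon work.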
Ways it could be false:
Moloch still rules the world after AGI (maybe there are multiple competing AGIs). For example, a scheme that allows an aligned AGI to propagate its alignment to the next generation would be valuable to work on today, because our first-generation aligned AGI might not manage to invent it before someone else (or another AGI) creates the second-generation, smarter AGI.
DALYs saved today are still valuable.
Q: Why save lives now when it will be so much cheaper after we build aligned AGI?
A: Why do computer scientists learn to write fast algorithms when they could just wait for compute speed to double? (A rough sketch of this point follows the list below.)
Basic research might always be valuable because it’s often not possible to see the applications of a research field until it’s quite mature. A post-AGI world might dedicate some constant fraction of its resources to basic research with no obvious applications, and in that world it’s still valuable to pull the curve of basic-research accomplishments forward.
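To put a rough number on the algorithms analogy above, here is a minimal back-of-the-envelope sketch of my own (not a claim from the post): how many years of hardware doublings would it take for a quadratic algorithm on future hardware to catch an n log n algorithm running today? The function name and the 2-years-per-doubling cadence are hypothetical.

```python
# How much hardware speedup (in doublings) does an O(n^2) algorithm need to
# match an O(n log n) algorithm at input size n, and how long is that wait
# at a hypothetical Moore's-law-style cadence?
import math

def doublings_to_catch_up(n: float) -> float:
    """Hardware doublings needed for O(n^2) to match O(n log n) at size n."""
    speedup_needed = (n * n) / (n * math.log2(n))  # = n / log2(n)
    return math.log2(speedup_needed)

YEARS_PER_DOUBLING = 2.0  # hypothetical cadence

for n in (1e3, 1e6, 1e9):
    d = doublings_to_catch_up(n)
    print(f"n = {n:>13,.0f}: {d:4.1f} doublings ~= {d * YEARS_PER_DOUBLING:3.0f} years of waiting")
```

At n = 10^6 the wait is roughly three decades, which is the point of the analogy: work you can do now is not made worthless just because raw capability will eventually be cheap.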
I lean towards disagreeing because I give credence to smooth takeoffs, mundane rates of productivity growth, and many-AGI worlds. I’m curious if those are the big cruxes or if my model could be improved.