I don’t think your bolded conclusion holds. Why does there have to be such a threshold? There are reasonable world-models that have no such thing.
For example: suppose that we agreed not to research AI, and could enforce that if necessary. Then no matter how great our technological progress becomes, the risk from AI catastrophe remains at zero.
We can even suppose that broader technological progress brings a higher sanity waterline with it, and so makes such coordination more likely to occur. Maybe we’re near the bottom of a technology-vs-AI-risk curve: civilizationally smart enough to build destructive AI, but not yet smart enough to coordinate on doing something else instead. In that model, rising AI risk would actually be a case for accelerating technology that isn’t AI.
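To make that concrete, here is a minimal toy sketch (the functional forms and numbers are my own hypothetical choices, purely for illustration): if risk is roughly P(someone can build destructive AI) × P(we fail to coordinate against it), and both probabilities rise with overall technological capability, then risk can peak and then fall, so there need be no threshold past which more technology only means more AI risk.

```python
import math

# Toy model (hypothetical functional forms, for illustration only):
# both the ability to build dangerous AI and the ability to coordinate
# rise with overall technological capability t.
def p_build(t):       # chance someone can build destructive AI
    return 1 / (1 + math.exp(-(t - 3)))

def p_coordinate(t):  # chance civilization coordinates not to build it
    return 1 / (1 + math.exp(-(t - 6)))

def risk(t):
    return p_build(t) * (1 - p_coordinate(t))

for t in range(11):
    print(t, round(risk(t), 3))
# Risk rises, peaks partway along, then falls as coordination catches up --
# in this model there is no threshold beyond which more technological
# progress always increases AI risk.
```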
A few minutes’ thought reveals other models where no such threshold exists.
So there is a case in which such a threshold may exist, and perhaps we are beyond it if so. But I don’t see evidence that such a threshold must exist.
Thanks, that’s fair! Such a threshold exists if and only if you assume:
- non-zero AI research (which is the scenario we’re interested in here, I guess),
- technological progress correlates with AI progress (which as you say is not guaranteed but that still seems very likely to me),
- maybe a few other crucial things I implicitly assume without realizing.