[Question] How bad would AI progress need to be for us to think general technological progress is also bad?
It is widely believed in the EA community that AI progress is acutely harmful because it substantially increases existential risks (X-risks). This has led to a growing priority on pushing back against work that advances AI capabilities.[1]
On the other hand, economic growth, scientific advancements, and (non-AI) technological progress are generally viewed as highly beneficial, improving the quality of the future provided there are no existential catastrophes.[2]
But here’s the problem: contributing to this general civilizational progress that benefits humanity also substantially benefits AI researchers and their work.
My intuitive reaction here (and, I assume, that of most people) is something like: “Yeah, OK, but surely this harm doesn’t outweigh the benefits. We can’t tell the overwhelming majority of humans that we’re going to slow down science, economic growth, and the improvements these bring to their lives (and to their descendants’ lives) until AI is safe, just because they would also benefit the tiny minority that is making AI less safe.”
However, there has to be some threshold of harm from AI development beyond which we would think slowing down technological progress in general (not only AI progress) is worth it.
So what makes us believe that we’re not beyond this threshold?
[1] For example, in his appearance on the 80,000 Hours podcast, Zvi Mowshowitz claims that advancing AI capabilities is “the most destructive job per unit of effort that you could possibly have”. See also the recent growth of the Pause AI movement.

[2] For recent research and opinions pointing in that direction, see Clancy 2023; Clancy and Rodriguez 2024.