Hey, look: existential risk doesn’t arise from risky technologies alone, but from the combination of risky technologies and a dis-coordinated humanity. And existential risk increases not just when a dis-coordinated humanity develops, say, bioweapons, but also when a dis-coordinated humanity develops the precursors to bioweapons, and we can propagate that reasoning backwards.
Now, the conclusion I am arguing for in the post is that developing powerful AI is likely to increase existential risk. The evidence I am leaning on is that rapid technological development has landed us where we are now: we have a great deal of power over the future of life on the planet, but we are not using that power very reliably, due to our dis-coordinated state. The clearest illustration that we are not using our power reliably seems to me to be the fact that the level of existential risk is high, and that most of that risk is due to humans.
Most technological developments reduce existential risk, since they provide more ways of dealing with the consequences of something like a meteor impact.
Well, that is definitely a benefit of technological development, but you should also consider the ways technological developments could increase existential risk before concluding that, on net, most of them reduce it. Generally speaking, it really seems to me that most technological developments give humanity more power, and giving a dis-coordinated humanity more power beyond its current level seems very dangerous. A well-coordinated humanity, on the other hand, could certainly take up more power safely.
Maybe I’m missing something in your argument, but it seems rather circular to me.
You argue that rapid technological change produces existential risk, because it has in the past. But it turns out that your argument for why technological change in the past produced existential risk is that it set the stage for later advances in bioweapons, AI, or whatever, that will produce existential risk only in the future.
But you can’t argue that historical experience shows that we should be worried about rapid AI progress as an existential risk, if the historical experience is just that this past progress was a necessary lead up to progress in AI, which is an existential risk...
It’s certainly plausible that technological progress today is producing levels of power that pose existential risks. But I think it is rather strange to argue for that on the basis of historical experience, when historically technological progress did not in fact lead to existential risk at the time. Rather, you need to argue that current progress could lead to levels of power that are qualitatively different from the past.
Well, all existential risk is about a possible existential catastrophe in the future, and there are zero existential catastrophes in our past, because if there were, then we wouldn’t be here. Bioweapons, for example, have never yet produced an existential catastrophe, so how is it that we conclude that there is any existential risk due to bioweapons?
So when we evaluate existential risk over time, we are looking at how close humanity is flirting with danger at various times, and how dis-coordinated that flirtation is.
Well, what I’m saying is that you’re invoking historical experience of existential risk arising from rapid growth in power, when there is no such historical experience, up until at least 1945 (or a few years earlier, for those in the know). Until then, nobody thought that there was any existential risk arising from technological progress. And they were right—unless you take the rather strange viewpoint that (say) Michael Faraday’s work increased existential risk because it was part of the lead up to risk from unfriendly AI hundreds of years in the future...
How then would you evaluate the level of existential risk at time X? Is it that you would ask whether people at time X believed that there was existential risk?
I’m not saying that Michael Faraday’s work in the early 19th century didn’t actually contribute to existential risk, by being part of the developments ultimately enabling unfriendly AI hundreds of years after he lived. Perhaps it did. What I’m saying is that you can’t take the huge progress Michael Faraday made as evidence that rapid technological progress leads to existential risk, in order to argue that AI poses an existential risk, because the only people who believe that Michael Faraday’s work contributed to existential risk are the ones who already think that AI poses an existential risk. Your argument won’t convince anyone who isn’t already convinced.