Maybe I’m missing something in your argument, but it seems rather circular to me.
You argue that rapid technological change produces existential risk, because it has in the past. But it turns out that your argument for why technological change in the past produced existential risk is that it set the stage for later advances in bioweapons, AI, or whatever, that will produce existential risk only in the future.
But you can’t argue that historical experience shows that we should be worried about rapid AI progress as an existential risk, if the historical experience is just that this past progress was a necessary lead up to progress in AI, which is an existential risk...
It’s certainly plausible that technological progress today is producing levels of power that pose existential risks. But I think it is rather strange to argue for that on the basis of historical experience, when historically technological progress did not in fact lead to existential risk at the time. Rather, you need to argue that current progress could lead to levels of power that are qualitatively different from the past.
Well, all existential risk is about a possible existential catastrophe in the future, and there are zero existential catastrophes in our past, because if there were, we wouldn’t be here. Bioweapons, for example, have never yet produced an existential catastrophe, so how do we conclude that there is any existential risk due to bioweapons?
So when we evaluate existential risk over time, we are looking at how closely humanity is flirting with danger at various times, and how uncoordinated that flirtation is.
Well, what I’m saying is that you’re invoking historical experience of existential risk arising from rapid growth in power, when there is no such historical experience, up until at least 1945 (or a few years earlier, for those in the know). Until then, nobody thought that there was any existential risk arising from technological progress. And they were right—unless you take the rather strange viewpoint that (say) Michael Faraday’s work increased existential risk because it was part of the lead up to risk from unfriendly AI hundreds of years in the future...
How then would you evaluate the level of existential risk at time X? Is it that you would ask whether people at time X believed that there was existential risk?
I’m not saying that Michael Faraday’s work in the early 19th century didn’t actually contribute to existential risk, by being part of the developments ultimately enabling unfriendly AI hundreds of years after he lived. Perhaps it did. What I’m saying is that you can’t take the huge progress Michael Faraday made as evidence that rapid technological progress leads to existential risk, and then use that to argue that AI poses an existential risk, because the only people who believe that Michael Faraday’s work contributed to existential risk are the ones who already think that AI poses an existential risk. Your argument won’t convince anyone who isn’t already convinced.