Well, all existential risk concerns a possible existential catastrophe in the future, and there are zero existential catastrophes in our past, because if there had been, we wouldn’t be here. Bioweapons, for example, have never yet produced an existential catastrophe, so how do we conclude that there is any existential risk from bioweapons?
So when we evaluate existential risk over time, we are looking at how closely humanity is flirting with danger at various times, and how uncoordinated that flirtation is.
Well, what I’m saying is that you’re invoking historical experience of existential risk arising from rapid growth in power, when there is no such historical experience until at least 1945 (or a few years earlier, for those in the know). Until then, nobody thought that there was any existential risk arising from technological progress. And they were right, unless you take the rather strange viewpoint that (say) Michael Faraday’s work increased existential risk because it was part of the lead-up to risk from unfriendly AI hundreds of years in the future...
How then would you evaluate the level of existential risk at time X? Is it that you would ask whether people at time X believed that there was existential risk?
I’m not saying that Michael Faraday’s work in the early 19th century didn’t actually contribute to existential risk, by being part of the developments ultimately enabling unfriendly AI hundreds of years after he lived. Perhaps it did. What I’m saying is that you can’t take the huge progress Faraday made as evidence that rapid technological progress leads to existential risk, and then use that to argue that AI poses an existential risk, because the only people who believe that Faraday’s work contributed to existential risk are the ones who already think that AI poses an existential risk. Your argument won’t convince anyone who isn’t already convinced.