Well, what I’m saying is that you’re invoking historical experience of existential risk arising from rapid growth in power, when there is no such historical experience, up until at least 1945 (or a few years earlier, for those in the know). Until then, nobody thought that there was any existential risk arising from technological progress. And they were right—unless you take the rather strange viewpoint that (say) Michael Faraday’s work increased existential risk because it was part of the lead-up to risk from unfriendly AI hundreds of years in the future...
How then would you evaluate the level of existential risk at time X? Is it that you would ask whether people at time X believed that there was existential risk?
I’m not saying that Michael Faraday’s work in the early 19th century didn’t actually contribute to existential risk, by being part of the developments ultimately enabling unfriendly AI hundreds of years after he lived. Perhaps it did. What I’m saying is that you can’t take the huge progress Michael Faraday made as evidence that rapid technological progress leads to existential risk, in order to argue that AI poses an existential risk, because the only people who believe that Michael Faraday’s work contributed to existential risk are the ones who already think that AI poses an existential risk. Your argument won’t convince anyone who isn’t already convinced.