RE Sarah: Longer timelines don’t change the picture that much, in my mind. I don’t find this article to be addressing the core concerns. Can you recommend one that’s more focused on “why AI-Xrisk isn’t the most important thing in the world”?
RE Robin Hanson: I don’t really know much of what he thinks, but IIRC his “urgency of AI depends on FOOM” argument was not compelling.
What I’ve noticed is that critics are often working from very different starting points, e.g. being unwilling to estimate probabilities of future events, using common-sense rather than consequentialist ethics, etc.
“Can you recommend one that’s more focused on ‘why AI-Xrisk isn’t the most important thing in the world’?”
No, for the reasons I mention elsethread. I basically don’t think there are good “real*” critics.
I agree with the general sense that, re: Sarah Constantin, “timelines look more like 40 years than 200, and thus don’t really change the overall urgency of the problem unless you’re not altruistic and are already old enough that you don’t expect to be alive by the time superintelligence exists.”