FWIW I’ve found Sarah Constantin to be the best “serious critic”, with Performance Trends in AI being the single post that’s updated me most in the direction of “I should perhaps be less worried, or be thinking in terms of longer timelines, than I currently am.”
Eliezer periodically points to Robin Hanson but the disagreements in frame seem weirdly large to the point that I don’t know how to draw much actionable stuff from it (i.e. AFAICT Robin would be quite happy with outcomes that seem quite horrifying to me, and/or is uninterested in doing much about it)
RE Sarah: Longer timelines don’t change the picture that much, in my mind. I don’t find this article to be addressing the core concerns. Can you recommend one that’s more focused on “why AI-Xrisk isn’t the most important thing in the world”?
RE Robin Hanson: I don’t really know much of what he thinks, but IIRC his “urgency of AI depends on FOOM” argument was not compelling.
What I’ve noticed is that critics are often working from very different starting points, e.g. being unwilling to estimate probabilities of future events, using common-sense rather than consequentialist ethics, etc.
Can you recommend one that’s more focused on “why AI-Xrisk isn’t the most important thing in the world”?
No, for the reasons I mention elsethread. I basically don’t think there are good “real*” critics.
I agree with the general sense that “Sarah Constantin’s timelines look more like 40 years than 200, and thus don’t really change the overall urgency of the problem unless you’re not altruistic and are already old enough that you don’t expect to be alive by the time superintelligence exists.”
(I do also think I’ve gotten some reasonable updates from Jeff Kaufman’s discussion and arguments (link to the most recent summary he gave, but I’m thinking more of various thoughts I’ve heard him give over the years). I think my notion that X-Risk efforts are hard due to poor feedback loops came from him, which I think is one of the most important concerns to address, although my impression is he got it from somewhere else originally.)
[Edit: much of what Jeff did in his “official research project” was talk to lots of AI professionals, who theoretically had more technically relevant expertise. I’m not sure how many of them fit the criteria I listed in my reply to Ted. My impression is they were some mix of “already involved in the AI Safety field”, and “didn’t really meet the criteria I think are important for critics”, although I don’t remember the details offhand.
I do observe that the best internal criticism we have comes from people who roughly buy the Superintelligence arguments and are involved with the AI Alignment field, but who disagree a lot on the particulars, a la Paul.]