(I do also think I’ve gotten some reasonable updates from Jeff Kaufman’s discussions and arguments (link to the most recent summary he gave), though I’m thinking more of various thoughts I’ve heard him give over the years. I think my notion that X-Risk efforts are hard due to poor feedback loops came from him, and I think it’s one of the most important concerns to address, although my impression is that he got it from somewhere else originally.)
[Edit: much of what Jeff did in his “official research project” was talk to lots of AI professionals, who theoretically had more technically relevant expertise. I’m not sure how many of them fit the criteria I listed in my reply to Ted. My impression is they were some mix of “already involved in the AI Safety field” and “didn’t really meet the criteria I think are important for critics,” although I don’t remember the details offhand.
I do observe that the best internal criticism we have comes from people who roughly buy the Superintelligence arguments and are involved with the AI Alignment field, but who disagree a lot on the particulars, à la Paul.]