Re: your request for collaboration: I am skeptical of the ROI of research on AI X-risk, and I would be happy to offer insight from that perspective, either as a source or as a giver of feedback. Feel free to email me at {last name}{first name}@gmail.com
I’m not an expert in AI, but I have a PhD in semiconductors (which gives me perspective on hardware) and currently work on machine learning at Netflix (which gives me perspective on software). I also was one of the winners of the SciCast prediction market a few years back, which is evidence that my judgment of near-term tech trends is decently calibrated.
Although I’m not the one trying to run a project, there are a couple of credentials I’d look for when evaluating a serious critic. (I very much agree with the OP that “serious critics” are an important thing to have more of.)
This isn’t meant as a comment one way or another on whether you fit this; it’s just that you didn’t mention it yet:
Fluency in the arguments presented in Superintelligence (ideally, fluency in the broader spectrum of arguments relating to AI and X-risk, but Superintelligence does a thorough enough job that it works okay as a litmus test). I stopped paying much attention to AI Safety critics because they didn’t seem knowledgeable about the basic arguments, let alone up to speed on the state of the field.
“Seriousness of trying to solve the problem.” I.e., if we knew that aliens were coming in 30 years (or 10, or even 300), we wouldn’t just shrug if we couldn’t come up with tractable things to do. We’d be building some kind of model of whatever the best course of action was. If the correct course of action is “wait until the field has progressed more”, there should be a clear sense of when you have waited long enough and what to do when things change. (Maybe this point could be summarized as “familiar with and responding to the concerns in There’s No Fire Alarm for Artificial General Intelligence.”)
The criticism expects counter-criticism. I.e., what I think we’re missing is critics who are in it for the long haul, who see their work as the first step of an iterative process, with an expectation that the AI safety field will respond to and/or update on their critiques.
As someone who sometimes writes things that are a bit skeptical regarding AI doom, I find the difficulty of getting counter-criticism frustrating.
Nod. That does seem like another important part of the equation. I admit I have not read that much of your work. I just went to revisit the last essay of yours I remember looking at and amusingly found this comment of mine apologizing for not having commented more. :(
I made a relevant post in the Meta section. I might slightly alter it to one of:
The critique-author commits to writing a response post 2-4 weeks later responding to the comments, or alternatively a response post 1-2 months later responding to all posts on LW with >20 karma that critique the initial post.
This seems much more specific than I was aiming for at this stage. That’s certainly one way to operationalize it, but I’m much more worried about the binary condition of “they expect to have to update or defend the critique at some point after people have responded to it, in some fashion, at all” than about any particular operationalization.
(edit: in fact, I think it is more important that they expect to have skin in the game 2 years later than whether they respond 2 weeks later)