These are interesting suggestions, but they don’t exactly address the problem I was getting at: leaving a line of retreat for the typical AI researcher who comes to believe that his work likely contributes to harm.
My anecdotal impression is that the number of younger researchers who take arguments for AI risk seriously has grown substantially in recent years, but—apart from spreading the arguments and the option of changing careers—it is not clear how this knowledge should affect their actions.
If the risk of indifferent AI is to be averted, I expect that a gradual shift in what is considered important work is necessary in the minds of the AI community. The most viable path I see towards such a shift involves giving individual researchers an option to express their change in beliefs in their work—in a way that makes use of their existing skillset and doesn’t kill their careers.
Ok, I had completely missed what you were getting at, and instead interpreted your comment as saying that there’s not much point in coming up with better arguments, since we can’t expect AI researchers to change their behaviors anyway.
> The most viable path I see towards such a shift involves giving individual researchers an option to express their change in beliefs in their work—in a way that makes use of their existing skillset and doesn’t kill their careers.
This seems like a hard problem, but certainly worth thinking about.