(Speaking for myself here)
That sentence is mainly based on Dan’s experience in the ML community over the years. I think surveys do not always convey how people actually feel about a research area (or about the researchers working in that area). There is also certainly a difference between the question posed by AI Impacts above and general opinions on safety/value alignment. “Does this argument point at an important problem?” is quite a different question from “should we be working right now on averting existential risk from AI?” If you look at the next question in the survey, over 60% of respondents rated Russell’s problem as a low present-day concern.
As you note, it’s also true that the survey was from 2016. Dan started doing ML research around then, so his experience is more recent. But for the reasons above, I don’t think that experience is a good basis for speculating about what would happen if the survey were repeated.