Safety and value alignment are generally toxic words, currently. Safety is becoming more normalized due to its associations with uncertainty, adversarial robustness, and reliability, which are thought respectable. Discussions of superintelligence are often derided as “not serious”, “not grounded,” or “science fiction.”
Here’s a relevant question in the 2016 survey of AI researchers:
These numbers seem to conflict with what you said but maybe I’m misinterpreting you. If there is a conflict here, do you think that if this survey was done again, the results would be different? Or do you think these responses do not provide an accurate impression of how researchers actually feel/felt (maybe because of agreement bias or something)?
(Speaking for myself here)
That sentence is mainly based on Dan’s experience in the ML community over the years. I think surveys do not always convey how people actually feel about a research area (or the researchers working on that area). There is also certainly a difference between the question posed by AI Impacts above and general opinions of safety/value alignment. “Does this argument point at an important problem?” is quite a different question from asking “should we be working right now on averting existential risk from AI?” If you look at the question after that in the survey, 60%+ put Russell’s problem as a low present concern.
As you note, it’s also true that the survey was from 2016. Dan started doing ML research around then, so his experience is more recent. But for the reasons above, I don’t think the survey is a good basis for speculating about what the results would be if it were run again.
I just spent a year in academia; my experience trying to talk to researchers about AGI matches what Dan wrote.