In a similar vein, I found several resources that make me think it should be higher than 1%, both currently and over the next 1.5 years:
This 2012/13 paper by Vincent Müller and Nick Bostrom surveyed AI experts, specifically 72 attendees of AGI workshops (most of whom do technical work). Of these 72, 36% thought that, assuming HLMI will at some point exist, it would be either ‘on balance bad’ or ‘extremely bad’ for humanity. Obviously this isn’t an indication that they understand or agree with safety concerns, but directionally suggests people are concerned and thinking about this.
This 2017 paper by Seth Baum identified 45 AGI projects and their stances on safety (page 25). Of these, 12 were active on safety (dedicated efforts to address AGI safety issues), 3 were moderate (they acknowledge safety issues but have no dedicated efforts to address them), and 2 were dismissive (they argue that AGI safety concerns are incorrect). The remaining 28 did not specify a stance.
This is relevant, but I tend to think this sort of evidence isn’t really getting at what I want. My main reaction is one you already stated yourself:
Obviously this isn’t an indication that they understand or agree with safety concerns, but directionally suggests people are concerned and thinking about this.
I think many people have a general prior of “we should be careful with wildly important technologies”, and so will say things like “safety is important” and “AGI might be bad”, without having much of an understanding of why.
Also, I don’t expect the specific populations surveyed in those two sources to overlap much with “top AI researchers” as defined in the question, though I have low confidence in that claim.