Ops Generalist @ Anthropic.
Much more active on the EA Forum: https://forum.effectivealtruism.org/users/miranda-zhang
I agree that AI safety can be successfully pitched to a wider range of audiences even without mentioning superintelligence, though I’m not sure this will get people to “holy shit, x-risk.” However, I do think that appealing to people’s more near-term concerns could be compelling enough to policymakers and other important stakeholders to speed up their willingness to implement useful policy.
Of course, this assumes that useful policy for near-term concerns will also be useful policy for AI x-risk. It seems plausible to me that the most effective policies for x-risk look quite different from policies that address both, but this approach still seems directionally good!
This was interesting, and I would like to see more AI research organizations conducting + publishing similar surveys.