I agree that AI safety can be successfully pitched to a wider range of audiences even without mentioning superintelligence, though I’m not sure this will get people to “holy shit, x-risk.” However, I do think that appealing to people’s near-term concerns could alarm policymakers and other important stakeholders enough to speed up their willingness to implement useful policy.
Of course, this assumes that useful policy for near-term concerns will also be useful policy for AI x-risk. It seems plausible to me that the most effective policies for x-risk look quite different from policies that address both, but this approach still seems directionally good!
Thanks for that link! I agree that there is a danger this pitch doesn’t get people all the way to x-risk. I think that risk might be worth it, especially if EA notices popular support failing to grow fast enough—i.e., beyond people with obviously related backgrounds and interests. Gathering more popular support for taking small AI-related dangers seriously might move the bigger x-risk problems into the Overton window, whereas right now I think they are very much outside it. Actually, I just realized that this is a great summary of my entire idea: “move the Overton window with softballs before you try to pitch people the fastball.”
But as you said, that approach does model the problem as a war of attrition. If we really are metaphorically moments from the final battle, hail-mary attempts to recruit powerful allies are the right strategy. The problem is that these two strategies are pretty mutually exclusive: you can’t be seen as both a thoughtful, practical policy group with good ideas and the one pulling the fire alarm. Maybe the solution is to have two organizations pursuing different strategies, with enough distance between them that the alarmists don’t tarnish the reputation of the moderates.