I would say that “AI risk advocacy among larger public” is probably net bad, and I’m very confused that this isn’t a much more popular option! I don’t see what useful thing the larger public is supposed to do with this information. What are we “advocating”?
Since I nonetheless think that AI risk outreach within ML is very net-positive, this poll strikes me as extraordinarily weak evidence that a lot of EAs think we shouldn’t do AI risk outreach within ML. Only 5 of the 55 respondents endorsed “net bad” even for the general public, which is a way lower bar than ‘keep this secret from ML’.
You don’t need to be advocating a specific course of action. There are smart people who could be doing things to reduce AI x-risk and aren’t (yet) because they haven’t heard (enough) about the problem.
One reason you might be in favor of telling the larger public about AI risk, absent a clear path to victory, is that it’s the truth, and even regular people who don’t have anything to immediately contribute to the problem deserve to know if they’re gonna die in 10-25 years.
Time spent doing outreach to the general public is time not spent on other tasks. If there’s something else you could do to reduce the risk of everyone dying, I think most people would reflectively endorse you prioritizing that instead, if ‘spend your time warning us’ is either neutral or actively harmful to people’s survival odds.
I do think this is a compelling reason not to lie to people, if you need more reasons. But “don’t lie” is different from “go out of your way to choose a priority list that will increase people’s odds of dying, in order to warn them that they’re likely to die”.
You went from saying that telling the general public about the problem is net negative to saying that it has an opportunity cost, and that there are probably unspecified better things to do with your time. I don’t disagree with the latter.
If it were (sufficiently) net positive rather than net negative, then it would be worth the opportunity cost.