As a proud member of the “discussion is net-negative” side, I shall gladly attempt to establish consensus using my favorite method — dictatorial power!
It seems like one of the most useful features of having agreement separate from karma is that it lets you vote up the joke and vote down the meaning :)
Would be curious to hear more about what kinds of discussion you think are net negative—clearly some types of discussion between some people are positive.
(Oh no, I was just making a joke, that people broadly against discussion might not be up for the sorts of consensus-building methods that acylhalide proposes—i.e. more discussion.)
(I guess their comment seems more obviously self-defeating and thus amusing when it appears in Recent Discussion, divorced of the context of the post.)
This joke also didn’t land with me.
K, I’ll try another one in a year.
I got the joke after a moment. In retrospect, I’ll admit that it was a bit funny, but my initial offense at the literal meaning crushed any humor I might have felt in the moment.
What are examples of reasons people believe discussion is net-negative?
I would say that “AI risk advocacy among larger public” is probably net bad, and I’m very confused that this isn’t a much more popular option! I don’t see what useful thing the larger public is supposed to do with this information. What are we “advocating”?
Since I nonetheless think that AI risk outreach within ML is very net-positive, this poll strikes me as extraordinarily weak evidence that a lot of EAs think we shouldn’t do AI risk outreach within ML. Only 5 of the 55 respondents endorsed this for the general public, which strikes me as a way lower bar than ‘keep this secret from ML’.
You don’t need to be advocating a specific course of action. There are smart people who could be doing things to reduce AI x-risk and aren’t (yet) because they haven’t heard (enough) about the problem.
One reason you might be in favor of telling the larger public about AI risk absent a clear path to victory is that it’s the truth, and even regular people who don’t have anything to immediately contribute to the problem deserve to know if they’re gonna die in 10–25 years.
Time spent doing outreach to the general public is time not spent on other tasks. If there’s something else you could do to reduce the risk of everyone dying, I think most people would reflectively endorse you prioritizing that instead, if ‘spend your time warning us’ is either neutral or actively harmful to people’s survival odds.
I do think this is a compelling reason not to lie to people, if you need more reasons. But “don’t lie” is different from “go out of your way to choose a priority list that will increase people’s odds of dying, in order to warn them that they’re likely to die”.
You went from saying telling the general public about the problem is net negative to saying that it’s got an opportunity cost, and there are probably unspecified better things to do with your time. I don’t disagree with the latter.
If it were (sufficiently) net positive rather than net negative, then it would be worth the opportunity cost.