There was an interesting discussion on Twitter the other day about how many AI researchers were inspired to work on AGI by AI safety arguments. Apparently they bought the “AGI is important and possible” part of the argument but not the “alignment is crazy difficult” part.
I do think the AI safety community has some unfortunate echo chamber qualities which end up filtering those people out of the discussion. This seems bad because (1) the arguments for caution might be stronger if they were developed by talking to the smartest skeptics and (2) it may be that alignment isn’t crazy difficult and the people filtered out have good ideas for tackling it.
If I had extra money, I might sponsor a prize for a “why we don’t need to worry about AI safety” essay contest to try & create an incentive to bridge the tribal gap. Could accomplish one or more of the following:
- Create more cross talk between people working in AGI and people thinking about how to make it safe
- Show that the best arguments for not needing to worry, as discovered by this essay contest, aren’t very good
- Get more mainstream AI people thinking about safety (and potentially realizing over the course of writing their essay that it needs to be prioritized)
- Get fresh sets of eyes on AI safety problems in a way that could generate new insights
Another point here is that from a cause prioritization perspective, there’s a group of people incentivized to argue that AI safety is important (anyone who gets paid to work on AI safety), but there’s not really any group of people with much of an incentive to argue the reverse (that I can think of at least, let me know if you disagree). So we should expect the set of arguments which have been published to be imbalanced. A contest could help address that.
> Another point here is that from a cause prioritization perspective, there’s a group of people incentivized to argue that AI safety is important (anyone who gets paid to work on AI safety), but there’s not really any group of people with much of an incentive to argue the reverse (that I can think of at least, let me know if you disagree).
What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm? The incentives in the other direction easily seem 10x stronger to me.
Lobbying for people to ignore the harm that your industry is causing is standard in basically any industry, and we have a wealth of evidence of organizations putting lots of optimization power into arguing that their work will have no downsides. See the energy industry, the tobacco industry, the dairy industry, farmers in general, technological incumbents, the medical industry, the construction industry, the meat-production and meat-packing industries, and really any big industry I can think of. Downplaying the risks of your technology is just standard practice for any mature industry.
> What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm?
If they’re not considering that hypothesis, that means they’re not trying to think of arguments against it. Do we disagree?
I agree that if the government were seriously considering regulation of AI, the AI industry would probably lobby against it. But that’s not the same question. From a PR perspective, simply ignoring critics often seems to be a good strategy.
Yes, I didn’t say “they are not considering that hypothesis”; I am saying “they don’t want to consider that hypothesis”. Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments, and the other does not.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
> Yes, I didn’t say “they are not considering that hypothesis”; I am saying “they don’t want to consider that hypothesis”. Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments, and the other does not.
They don’t want to consider the hypothesis, and that’s why they’ll spend a bunch of time carefully considering it and trying to figure out why it is flawed?
In any case… Assuming the Twitter discussion is accurate, some people working on AGI have already engaged with the “alignment is hard” position (since expositions of it are how they came to work on AGI). But they don’t think that position is correct; it would be kinda dumb to work on AGI carelessly if you did. So it seems to be a case of considering the position and deciding it is incorrect.
> I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
That’s interesting, but it doesn’t seem that any of the arguments they’ve made have reached LW or the EA Forum—let me know if I’m wrong. Anyway I think my original point basically stands—from the perspective of EA cause prioritization, the incentives to dismantle/refute flawed arguments for prioritizing AI safety are pretty diffuse. (True for most EA causes—I’ve long maintained that people should be paid to argue for unincentivized positions.)
I do like the idea of sponsoring a prize for such an essay contest. I’d contribute to the prize pool and help with the judging!