Another point here is that from a cause prioritization perspective, there’s a group of people incentivized to argue that AI safety is important (anyone who gets paid to work on AI safety), but there’s not really any group of people with much of an incentive to argue the reverse (at least none that I can think of; let me know if you disagree).
What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm? The incentives in the other direction easily seem 10x stronger to me.
Lobbying for people to ignore the harm that your industry is causing is standard in basically any industry, and we have abundant evidence of organizations putting lots of optimization power into arguing that their work has no downsides. See the energy industry, the tobacco industry, the dairy industry, farmers in general, technological incumbents, the medical industry, the construction industry, the meat-production and meat-packaging industries, and really any big industry I can think of. Downplaying the risks of your technology is just standard practice for any mature industry out there.
What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm?
If they’re not considering that hypothesis, that means they’re not trying to think of arguments against it. Do we disagree?
I agree that if the government were seriously considering regulation of AI, the AI industry would probably lobby against it. But that’s not the same question. From a PR perspective, just ignoring critics often seems to be a good strategy.
Yes, I didn’t say “they are not considering that hypothesis”; I am saying “they don’t want to consider that hypothesis”. Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments, and the other does not.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
Yes, I didn’t say “they are not considering that hypothesis”; I am saying “they don’t want to consider that hypothesis”. Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments, and the other does not.
They don’t want to consider the hypothesis, and that’s why they’ll spend a bunch of time carefully considering it and trying to figure out why it is flawed?
In any case… Assuming the Twitter discussion is accurate, some people working on AGI have already thought about the “alignment is hard” position (since those expositions are how they came to work on AGI). But they don’t think the “alignment is hard” position is correct—it would be kinda dumb to work on AGI carelessly if you thought that position is correct. So it seems to be a matter of considering the position and deciding it is incorrect.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
That’s interesting, but it doesn’t seem that any of the arguments they’ve made have reached LW or the EA Forum—let me know if I’m wrong. Anyway I think my original point basically stands—from the perspective of EA cause prioritization, the incentives to dismantle/refute flawed arguments for prioritizing AI safety are pretty diffuse. (True for most EA causes—I’ve long maintained that people should be paid to argue for unincentivized positions.)