What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm?
If they’re not considering that hypothesis, that means they’re not trying to think of arguments against it. Do we disagree?
I agree that if the government were seriously considering regulation of AI, the AI industry would probably lobby against it. But that’s not the same question. From a PR perspective, just ignoring critics often seems to be a good strategy.
Yes, I didn’t say “they are not considering that hypothesis”; I am saying “they don’t want to consider that hypothesis”. Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments, while the other does not.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
> Yes, I didn’t say “they are not considering that hypothesis”; I am saying “they don’t want to consider that hypothesis”. Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments, while the other does not.
They don’t want to consider the hypothesis, and that’s why they’ll spend a bunch of time carefully considering it and trying to figure out why it is flawed?
In any case… Assuming the Twitter discussion is accurate, some people working on AGI have already thought about the “alignment is hard” position (since those expositions are how they came to work on AGI). But they don’t think that position is correct; it would be kinda dumb to work on AGI carelessly if you thought it was. So it seems to be a matter of considering the position and deciding it is incorrect.
> I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
That’s interesting, but it doesn’t seem that any of the arguments they’ve made have reached LW or the EA Forum—let me know if I’m wrong. Anyway I think my original point basically stands—from the perspective of EA cause prioritization, the incentives to dismantle/refute flawed arguments for prioritizing AI safety are pretty diffuse. (True for most EA causes—I’ve long maintained that people should be paid to argue for unincentivized positions.)