While I definitely agree that a fight between humanity and AGI will never look like a clean "humanity vs. AGI" conflict, since "humanity" is a leaky abstraction, one key disagreement I have with this comment is the claim that there is no fire alarm for AGI. If anything, my model is that a lot of people will support very severe restrictions on AI and AI progress in the name of safety. We already saw this several months ago, when people got freaked out about AI, and that was merely GPT-4. We will get a lot of fire alarms, especially via safety incidents. Many people are already primed for apocalyptic narratives, and if AI progresses in a big way, that will fan the flames into a potential AI-killer movement, supported by politicians. It's not impossible for tech companies to defuse this, but damn is it hard to defuse.
I worry about the opposite problem: even if existential risk concerns come to look less and less plausible, AI regulation may nonetheless become quite severe, and the AI organizations built by LessWrongers have systematic biases that will prevent them from updating toward this position.