Eliezer was talking about discussions of the ethics of AGI, and it sounds like you misinterpreted him as talking about discussions of the ethics of narrow AI.
Also, I'm skeptical that bringing up narrow AI ethical issues helps shift academia's Overton window to include existential risk from AI as a serious threat, and I suspect it may be counterproductive. Associating existential risk with narrow AI ethics seems to lead people to use the latter to derail discussions of the former. People sometimes dismiss concerns about existential risk from AI and then suggest that something should be done about some narrow AI ethical issue, apparently believing they are offering a reasonable olive branch to people concerned about existential risk, even though their suggestions do nothing to reduce existential risk. This sort of thing would happen less if existential risk and the ethics of narrow AI were less closely associated with each other.