My forecast of the net effects of “ethical” discussion is negative; I expect it to be a cheap, easy, attention-grabbing distraction from technical issues and technical thoughts that actually determine okay outcomes.
Has the net effect of global poverty discussion been negative for the x-risk movement? It seems to me that this is very much not the case. I remember Lukeprog writing that EA was one of the few groups from which MIRI was able to draw supporters.
It seems like discussion of near-term ethical issues might expand academia’s Overton window to admit more discussion of technical issues.
Reading between the lines: is Eliezer's view that the sort of next actions suggested by discussion of near-term issues will be negative for the long term?
The more I think about AI safety, the more I think that preventing an arms race is the most important thing. If you know there’s no arms race, you can take your time to make your AI as safe as you want. If you know there’s no arms race, you don’t need to implement a plan involving dangerous material actions in order to block some future AI from taking over. Furthermore, there’s a sense in which arms race incentives are well-aligned: if we get a positive singularity, that means material abundance for everyone; if there’s an AI disaster, it’s likely a disaster for everyone. So maybe all you’d have to do is convince all the relevant actors that this is true, then create common knowledge among all the relevant actors that all the relevant actors believe this. (Possible problem: relevant actors you aren’t aware of. E.g. North Korean hackers who have penetrated DeepMind. Is it possible to improve the state of secret-keeping technology?)
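To make the "arms race incentives are well-aligned" point concrete, here is a toy two-lab payoff model. This is my own illustration with made-up numbers, not anything from the original discussion: it assumes the outcome is fully shared (a positive singularity benefits everyone equally, a disaster hurts everyone equally) and that racing only raises the probability of disaster rather than changing who captures the upside.

```python
# Toy model (hypothetical numbers): two labs each choose "race" or "careful".
# Outcomes are assumed to be fully shared, so each lab's payoff depends only
# on whether a disaster happens, not on who "wins" the race.

P_DISASTER = {            # probability of disaster given (my choice, their choice)
    ("careful", "careful"): 0.05,
    ("careful", "race"):    0.20,
    ("race",    "careful"): 0.20,
    ("race",    "race"):    0.40,
}
GOOD, BAD = 100, -1000    # shared payoff for a good outcome vs. a disaster

def expected_payoff(my_choice, their_choice):
    p = P_DISASTER[(my_choice, their_choice)]
    return (1 - p) * GOOD + p * BAD   # identical for both labs, since outcomes are shared

for their_choice in ("careful", "race"):
    for my_choice in ("careful", "race"):
        print(f"they {their_choice:7s} / I {my_choice:7s}: "
              f"{expected_payoff(my_choice, their_choice):8.1f}")
```

Under these (obviously contestable) assumptions, "careful" is the better reply no matter what the other lab does, so the remaining problem really is one of belief and common knowledge rather than of misaligned payoffs.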
Eliezer was talking about discussions about ethics of AGI, and it sounds like you misinterpreted him as talking about discussions about ethics of narrow AI.
Also, I’m skeptical that bringing up narrow AI ethical issues is helpful for shifting academia’s Overton window to include existential risk from AI as a serious threat, and I suspect it may be counterproductive. Associating existential risk with narrow AI ethics seems to lead to people using the latter to derail discussions of the former. People sometimes dismiss concerns about existential risk from AI and then suggest that something should be done about some narrow AI ethical issue. I suspect they see this as a reasonable olive branch to people concerned about existential risk, even though such suggestions are useless for the purposes of existential risk reduction. This sort of thing would happen less if existential risk and ethics of narrow AI were less closely associated with each other.