Thanks Nathan. I understand that most people working on technical AI-safety research focus on this specific problem, namely aligning AI, and less on misuse. I don’t expect a large AI-misuse audience here.
Your response, that “truly aligned AI” would not change human intent, was also suggested by other AI researchers. But this doesn’t address the problem: human intent is created from (and depends on) societal structures. Perhaps I failed to make this clear. My point was that we lack an understanding of the genesis of human actions, intentions, and goals, and thus cannot properly specify how human intent is constructed, nor how to protect it from interference and manipulation. A world imbued with AI technologies will change the societal landscape significantly, and potentially for the worse. I think many view human “intention” as a property of humans that acts on the world while remaining somehow isolated from, or protected against, the physical and cultural world (see Fig 1a). But the opposite is closer to the truth: human intent and goals are likely shaped significantly more by society than by biology.
The optimist reading: the most charitable way I can interpret “truly aligned AI won’t change human agency” is that AI will help humans solve the free-will problem and will then work with us to redesign what human goals should be. But this latter claim is a very tall order (the kind of United Nations statement that may never see the light of day...).