It seems to me that the simplest way to solve friendliness is: “Ok AI, I’m friendly, so do what I tell you to do and confirm with me before taking any action.” It is much simpler to program a goal system that responds to direct commands than to somehow infuse ‘friendliness’ into the AI. Granted, marketing-wise a ‘friendliness’-infused AI sounds better, because it makes those who seek to build such an AI seem altruistic. Anyone saying they intend to implement the former seems selfish and power-hungry.
Wasn’t that trick tried with Windows Vista? People were so annoyed by continually being asked trivial “Can I do this?” questions that they turned off the security.