If the AI understands psychology, it knows what motivates us. We won’t need to explicitly explain any moral conundrums or point out dichotomies. It should be able to infer this knowledge from what it knows about the human psyche. Maybe it could just browse the internet for material on this topic to inform itself of how we humans work.
The way I see it, we humans will have as little need to tell the AI what we want as ants, if they could talk, would need to tell a human not to destroy their colony. Even the most abstract conundrums that philosophers needed centuries merely to articulate, much less answer, might seem obvious to the AI.
So: a sufficiently intelligent agent would be able to figure out what humans wanted. We have to make it care about what we want—and also tell it how to peacefully resolve our differences when our wishes conflict.
Since the AI is meant to become a benefactor for humanity as a whole, it is developed as an international project rather than by a single company. This would secure enough funding that no single company could develop it faster, draw every AI developer into this one project and thus further eliminate competition, and reduce the chance that executive meddling pushes people to cut corners to save money.
Uh huh. So: it sounds as though you have your work cut out for you.