I feel like a mixed approach is the most desirable. There is a risk that if the AI is allowed to simply learn from humans, we might get a greedy AI that maximizes its Facebook experience while the rest of the world keeps dying of starvation and wars. Also, our values probably evolve over time (slavery, the death penalty, freedom of speech...), so we might as well try to teach the AI what our values should be rather than what they are right now. Maybe then the answer is to develop a top-down, high-level ethical system and use it to seed a neural network that then picks up patterns in more detailed scenarios?
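To make that "seeding" idea a bit more concrete, here is a minimal toy sketch (nothing like a real value-learning system) of what it could look like: the scenario features, the hand-written rule, and the human judgments are all made up for illustration. Phase 1 trains a small network to imitate a top-down rule, and phase 2 fine-tunes it on hypothetical human verdicts about more detailed cases.

```python
# Toy sketch of "seed with a top-down rule, then learn from detailed scenarios".
# Assumes PyTorch; the features, rule, and human labels are purely hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical scenario encoding: [harm_caused, people_affected, consent_given]
def top_down_rule(x):
    # Stand-in for the top-down ethical system: an action is flagged as
    # unacceptable if it causes harm to non-consenting people.
    return ((x[:, 0] > 0.5) & (x[:, 2] < 0.5)).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()

# Phase 1: seed the network with the rule's verdicts on random scenarios.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    x = torch.rand(64, 3)
    loss = loss_fn(model(x), top_down_rule(x))
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: fine-tune on a small set of (hypothetical) human judgments,
# which can refine or override the seed rule in more detailed cases.
human_x = torch.tensor([[0.9, 0.2, 0.9],   # harmful but fully consensual
                        [0.1, 0.8, 0.1]])  # harmless, consent not relevant
human_y = torch.tensor([[0.0], [0.0]])     # humans judge both acceptable
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = loss_fn(model(human_x), human_y)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the two phases is just that the rule provides a prior, while the human data gets the last word on the details.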