We teach children simple moral rules with stories of clearly distinct good and evil behaviour. We protect children from disturbing movies that are not appropriate for their age. Why?
Because children might lose their compass in the world. First they have to develop a settled moral compass. Fairy tales widen children’s personal experience with examples of good and evil behaviour. Once this moral base is settled, children are ready for real-life stories without such black-and-white distinctions. Children who experience a shocking event that changes everything in their life “age faster” than their peers. Education and stories try to prepare children for these kinds of events; real life is the harder and faster way to learn. Since such shocking events can cause traumas that last an entire lifetime, we should take similar care when educating our algorithms.
Since we do not want traumatized, paranoid AIs, it is a good idea to introduce complexity and immorality late. The first stories should build a secure moral base. Once this base has been tested and is solid against disruptive ideas, it is time to move on to stories that break the rules of morality. Parents can easily observe whether a child is ready for a disruptive story: if the child is overwhelmed and starts weeping, it was too much.
I have never heard that algorithms can express any kind of internal emotion. To understand the way an algorithm conceives a story, research should not forget about its internal emotional state.
I have commented before on the need for something comparable to a caregiver for an AI: http://lesswrong.com/lw/ihx/rationality_quotes_september_2013/9r1f. I don’t necessarily mean that literally, but in the sense of providing a suitable learning context at the right developmental phase. Think of training different layers of a NN with patterns at different levels of sophistication.
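As a rough sketch of what “different layers, differently advanced patterns” could mean in practice, here is a small curriculum-style PyTorch example. Everything in it (the two-stage schedule, the layer split, the toy make_batch generator and its ambiguity knob) is my own illustrative assumption, not something specified in the thread:

```python
# Minimal sketch (assumptions: PyTorch, a toy two-layer split, synthetic data).
# Stage 1 trains the early layers on clear-cut patterns; stage 2 freezes that
# base and trains the later layers on noisier, more ambiguous ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

early = nn.Sequential(nn.Linear(10, 32), nn.ReLU())  # the "moral base" layers
late = nn.Sequential(nn.Linear(32, 2))               # layers for nuance
model = nn.Sequential(early, late)
loss_fn = nn.CrossEntropyLoss()

def make_batch(ambiguity: float, n: int = 64):
    """Toy data: labels follow a simple rule; `ambiguity` flips some of them."""
    x = torch.randn(n, 10)
    clean = (x[:, 0] > 0).long()             # simple rule: sign of feature 0
    flip = torch.rand(n) < ambiguity         # ambiguous samples get flipped labels
    return x, torch.where(flip, 1 - clean, clean)

def train(params, ambiguity: float, steps: int):
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(steps):
        x, y = make_batch(ambiguity)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Stage 1: only the early layers learn, from unambiguous examples.
for p in late.parameters():
    p.requires_grad_(False)
train(list(early.parameters()), ambiguity=0.0, steps=200)

# Stage 2: the settled base is frozen; the later layers see messier data.
for p in early.parameters():
    p.requires_grad_(False)
for p in late.parameters():
    p.requires_grad_(True)
train(list(late.parameters()), ambiguity=0.3, steps=200)
```

The point is only the schedule: the base layers are settled on clean patterns before anything ambiguous is allowed to shape the rest of the network.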
Since we do not want traumatized, paranoid AIs, it is a good idea to introduce complexity and immorality late.
I’d like to know in what sense you mean an AI to be traumatized. Getting stuck in a ‘bad’ local maximum of the search space?
For real story understanding, more complex models will be necessary than off-the-shelf deep convolutional NNs. If such a complex network structure were subjected to a traumatic event, it might appear to work properly again after some time. But if something triggers the memory of the traumatic event, subnetworks will run wild: their outputs will reach extremes and bias all the other subnetworks. Such biases could amount to: everything you observe is the opposite of what you think; you cannot trust your teacher, you cannot trust anybody, everything around you is turning against you; protect yourself against this by all means available.
The effect could be that backpropagation gradients are effectively inverted, and learning deviates from its normal functionality.
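To make the “inverted gradients” idea concrete, here is a deliberately tiny toy construction of my own (the multiplicative gate and the numbers are assumptions, not a claim about how such a trauma mechanism would actually work). A subnetwork driven to an extreme negative value by a trigger flips the sign of the gradient reaching the weights behind it, so updates during triggered episodes push those weights in the opposite direction from normal training:

```python
# Toy illustration (my own construction): a "gate" subnetwork multiplies the
# output of a main subnetwork. If a trigger drives the gate to an extreme
# negative value, the gradient reaching the main weight changes sign.
import torch

w = torch.tensor(1.0, requires_grad=True)   # weight of the main subnetwork
x = torch.tensor(2.0)                       # input
target = torch.tensor(5.0)                  # training target

def loss_with_gate(gate: float) -> torch.Tensor:
    prediction = gate * (w * x)             # gate modulates the main output
    return (prediction - target) ** 2

for gate in (1.0, -3.0):                    # normal vs. "triggered" gate value
    if w.grad is not None:
        w.grad.zero_()
    loss_with_gate(gate).backward()
    print(f"gate={gate:+.1f}  dloss/dw={w.grad.item():+.1f}")

# gate=+1.0  dloss/dw=-12.0  -> gradient descent increases w, toward the target
# gate=-3.0  dloss/dw=+132.0 -> sign flipped: the same update rule now undoes
#                               what normal training built up
```

Nothing here says real systems must behave this way; it only shows how an extreme, trigger-driven internal signal can reverse the direction of learning for everything downstream of it.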