There is a more sinister interpretation of the idea of the mind as a universal learning machine: the mind is a blank neural net of some relatively simple architecture, which simply maps inputs to outputs. Recently there have been attempts to create self-driving car AIs using this approach: a blank neural net is shown hundreds of thousands of hours of driving, and it learns to predict the correct driver behaviour in any incoming situation. Such end-to-end driving nets produce good performance (though still worse than advanced systems built on lidar with hand-coded rules on top), and they are used by hobbyists.
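The approach described above is known as behavioral cloning: supervised learning on recorded human actions. Below is a minimal sketch using only NumPy, with entirely synthetic data standing in for the driving logs; the sensor features, the "human driver" target function, and the tiny network are all illustrative assumptions, not any real system.

```python
import numpy as np

# Behavioral cloning sketch: a small fully connected net trained to map
# (hypothetical) sensor readings to the steering commands a human driver
# produced. All data here is synthetic; a real system would train on
# hours of logged driving.
rng = np.random.default_rng(0)

# Synthetic "driving log": 4 sensor features -> 1 steering angle.
# The hidden target mapping stands in for the human driver's behaviour.
X = rng.normal(size=(500, 4))
true_w = np.array([0.5, -1.0, 0.25, 0.8])
y = np.tanh(X @ true_w)  # recorded driver outputs

# One hidden layer, trained with plain full-batch gradient descent (MSE loss).
W1 = rng.normal(scale=0.5, size=(4, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1)        # hidden activations
    pred = h @ W2              # predicted steering
    err = pred - y[:, None]    # residual vs. human behaviour
    # Backpropagate the mean-squared error.
    gW2 = h.T @ err / len(X)
    gh = err @ W2.T * (1 - h**2)
    gW1 = X.T @ gh / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

mse = float(np.mean((np.tanh(X @ W1) @ W2 - y[:, None]) ** 2))
print(f"final training MSE: {mse:.4f}")
```

The point of the sketch is that the net never receives rules or values, only input-output pairs; everything it "knows" about driving is implicit in the dataset.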
It has the following implications:
1) The secret of being human is in the dataset, not in the human brain, so even a seriously damaged or altered brain could still learn to be human (some autistic people have atypical wiring at the neuronal level, yet after extensive training they can become functional humans). Even an animal could partly do it (Koko the gorilla).
2) Humans don’t have “free will” or “values”, and don’t even “think”—they replay the responses encoded in their dataset. To “program” a human, you need a very long book (like the Bible); a mind cannot be changed by a short text.
3) This explains the very slow progress of Homo sapiens between 1.5 million and 0.1 million years ago, when the form of spearheads barely changed, and the exponentially quicker progress afterwards. Paleolithic humans had a very limited training dataset. Later they created cave art (very slowly—they started by assembling models of dead animals from bones and took millennia to arrive at the idea of drawing) and other forms of enriched dataset. The dataset then started to grow exponentially, and it came to include the idea of “creating something new” (not as an abstract idea, but as a number of examples of how to do it).
4) We could create an AI that mimics human behaviour by training a rather simple (but large) neural net on a human dataset, such as recordings of a child growing up. It could not be aligned in the usual sense, as it would have no explicitly represented values and would be a complete black box, but it could still have ethical behaviour if it were trained on an “ethical dataset”.
5) This means that available hardware and data are all that is needed to create AI.