Human children respond to normal child-rearing practices the way they do because of specific functional adaptations of the human mind. This general principle applies to everything from language acquisition to parent-child bonding to acculturation. Expose a monkey, dog, fish or alien to the same environment, and you’ll get a different outcome.
Unfortunately, while the cog sci community has produced reams of evidence on this point, they've also discovered that said adaptations are very complex, and mapping out in detail what they all are and how they work is turning out to be a long research project. Partial results exist for a lot of intriguing examples, along with data on what goes wrong when different pieces are broken, but it's going to be a while before we have a complete picture.
An AI researcher who claims his program will respond like a human child is implicitly claiming either that this whole body of research is wrong (in which case I want to see the evidence), or that he's somehow implemented all the necessary adaptations in code despite the fact that no one knows how they all work (yeah, right). Either way, the claim isn't especially credible.