Current rough hypothesis: evolution encodes sexual attraction as a highly compressed initial ‘seed’ which unfolds over time through learning. It finds, and then plugs into, the relevant learned sensory concept representations that code for attractive members of the opposite sex.
How does this “seed” find the correct high-level sensory features to plug into? How can it wire complex high-level behavioral programs (such as courtship behaviors) to low-level motor programs learned by unsupervised learning? This seems unlikely.
For example, you are probably aware of how you perform long multiplication, such that you could communicate the algorithm and steps.
But long multiplication is something that you were taught in school, which most humans wouldn’t be able to discover independently. And you are certainly not aware of how your brain performs visual recognition; the little you know was discovered through experiments, not introspection.
That being said, some systems—such as Atari’s DRL agent—can be considered simple early versions of ULMs.
Not so fast.
The Atari DRL agent learns a good mapping between short windows of frames and button presses. It has some generalization capability, which enables it to achieve human-level or sometimes even superhuman performance on games that are based on hand-eye coordination (after all, it’s not burdened by the intrinsic delays that occur in the human body), but it has no reasoning ability and fails miserably at any game which requires planning ahead more than a few frames.
Despite the name, no machine learning system, “deep” or otherwise, has been demonstrated to be able to efficiently learn any provably deep function (in the sense of Boolean circuit depth-complexity), such as the parity function which any human of average intelligence could learn from a small number of examples.
I see no particular reason to believe that this could be solved by just throwing more computational power at the problem: you can’t fight exponentials that way.
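For concreteness, here is the function in question (my formulation; the comment above doesn’t spell it out):

```latex
% n-bit parity: 1 iff an odd number of the input bits are 1.
\mathrm{PARITY}(x_1, \dots, x_n) = x_1 \oplus x_2 \oplus \cdots \oplus x_n
                                 = \Big( \sum_{i=1}^{n} x_i \Big) \bmod 2
```

A balanced tree of XOR gates computes it in depth about log2(n); squashing it to constant depth with unbounded fan-in is known to force exponentially many gates (see the footnote below).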
UPDATE:
Now it seems that Google DeepMind managed to train even feed-forward neural networks to solve the parity problem. See my other comment down-thread.
Despite the name, no machine learning system, “deep” or otherwise, has been demonstrated to be able to efficiently learn any provably deep function (in the sense of Boolean circuit depth-complexity), such as the parity function which any human of average intelligence could learn from a small number of examples.

I had a guess that recurrent neural networks can solve the parity problem, which Google confirmed. See http://cse-wiki.unl.edu/wiki/index.php/Recurrent_neural_networks where it says:

Even non-sequential problems may benefit from RNNs. For example, the problem of determining the parity of a set of bits [15]. This is very simple with RNNs, but doing it with a feedforward neural network would require excessive complexity.

See also PyBrain’s parity learning RNN example.
The algorithm I was referring to can be easily represented by an RNN with one hidden layer of a few nodes; the difficult part is learning it from examples.
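To see how little machinery the representation needs, here is a hand-wired version (my construction; the comment doesn’t give explicit weights) using two hidden threshold units to XOR each incoming bit into the running parity:

```python
def step(z):                              # Heaviside threshold unit
    return 1.0 if z > 0 else 0.0

def parity_rnn(bits):
    s = 0.0                               # recurrent state: parity so far
    for x in bits:
        h_or  = step(s + x - 0.5)         # fires iff s OR x
        h_and = step(s + x - 1.5)         # fires iff s AND x
        s = step(h_or - h_and - 0.5)      # XOR(s, x) = OR and not AND
    return s

assert parity_rnn([1, 0, 1, 1]) == 1.0    # three ones: odd parity
assert parity_rnn([1, 1, 0, 0]) == 0.0    # two ones: even parity
```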
The examples for the n-parity problem are input-output pairs where each input is a n-bit binary string and its corresponding output is a single bit representing the parity of that string.
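A minimal sketch of that dataset (my code, not from the thread):

```python
import numpy as np

def parity_dataset(n_bits, n_examples, seed=0):
    """Each example: an n-bit string mapped to a single parity bit."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_examples, n_bits))
    y = X.sum(axis=1) % 2                 # one target bit per whole string
    return X, y

X, y = parity_dataset(n_bits=8, n_examples=1000)
```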
However, if I understand correctly, the code you linked solves a different machine learning problem: here the examples are input-output pairs where both the inputs and the outputs are n-bit binary strings, with the i-th output bit representing the parity of the input bits up to the i-th one.
It may look like a minor difference, but actually it makes the learning problem much easier, and in fact it basically guides the network to learn the right algorithm: the network can first learn how to solve parity on 1 bit (identity), then parity on 2 bits (xor), and so on. Since the network is very small and has an ideal architecture for that problem, after learning how to solve parity for a few bits (perhaps even two) it will generalize to arbitrary lengths.
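For contrast, the targets in that variant would be built like this (again my sketch, based on my reading of the linked code):

```python
import numpy as np

def prefix_parity_dataset(n_bits, n_examples, seed=0):
    """Each example: an n-bit string mapped to its n running parities."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_examples, n_bits))
    Y = np.cumsum(X, axis=1) % 2          # i-th target: parity of bits 0..i
    return X, Y
```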
By using this kind of supervision I bet you can also train a feed-forward neural network to solve the problem: use a training set as above except with the input and output strings presented as n-dimensional vectors rather than sequences of individual bits, and make sure that the network has enough hidden layers. If you use a specialized architecture (e.g. decrease the width of the hidden layers as their depth increases and connect the i-th output node to the i-th hidden layer) it will learn quite efficiently, but if you use a more standard architecture (hidden layers of constant width and output layer connected only to the last hidden layer) it will probably also work, although you will need quite a lot of training examples to avoid overfitting.
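A sketch of the specialized feed-forward variant (my own construction under the stated idea, with constant widths for brevity; the comment suggests shrinking them with depth):

```python
import torch
import torch.nn as nn

class PrefixParityMLP(nn.Module):
    """Feed-forward net whose i-th output is read off the i-th hidden layer."""
    def __init__(self, n_bits=8, width=16):
        super().__init__()
        dims = [n_bits] + [width] * n_bits
        self.hidden = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(n_bits))
        self.heads = nn.ModuleList(
            nn.Linear(width, 1) for _ in range(n_bits))

    def forward(self, x):                 # x: (batch, n_bits) of 0/1 floats
        logits = []
        for layer, head in zip(self.hidden, self.heads):
            x = torch.tanh(layer(x))
            logits.append(head(x))        # i-th output from i-th hidden layer
        return torch.cat(logits, dim=1)   # logits for the n prefix parities
```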
The parity problem is artificial, but it is a representative case of problems that necessarily (*) require a non-trivial number of highly non-linear serial computation steps. In a real-world case (a planning problem, maybe), we wouldn’t have access to the internal state of a reference algorithm to use as supervision signals for the machine learning system. The machine learning system will have to figure out the algorithm on its own, and current approaches can’t do it in a general way, even for relatively simple algorithms.
You can read the (much more informed) opinion of Ilya Sutskever on the issue here (Yoshua Bengio also participated in the comments).
(* at least for polynomial-time execution, since you can always get constant depth at the expense of an exponential blow-up of parallel nodes)
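To make the footnote concrete for parity (standard construction, not from the comment): the depth-2 OR-of-ANDs form needs one term per odd-weight input, i.e. 2^(n-1) terms:

```latex
% Constant depth at exponential size: a depth-2 formula for parity with
% one AND-term (minterm) for each of the 2^{n-1} odd-weight inputs.
\mathrm{PARITY}(x_1, \dots, x_n) =
  \bigvee_{\substack{a \in \{0,1\}^n \\ a_1 + \cdots + a_n \ \text{odd}}}
  \ \bigwedge_{i=1}^{n} \left( x_i \leftrightarrow a_i \right)
```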
Your comments made me curious enough to download PyBrain and play around with the sample code, to see if I could modify it to learn the parity function without intermediate parity bits in the output. In the end, I was able to, by trial and error, come up with hyperparameters that allowed the RNN to learn the parity function reliably in a few minutes on my laptop (many other choices of hyperparameters caused the SGD to sometimes get stuck before it converged to a correct solution). I’ve posted the modified sample code here. (Notice that the network now has 2 input nodes, one for the input string and one to indicate end of string, 2 hidden layers with 3 and 2 nodes, and an output node.)
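For readers without PyBrain (long unmaintained), here is a rough modern re-creation in PyTorch; it is not the linked code, and it simplifies the setup by reading the output at the last time step instead of using an end-of-string input node. As noted above, whether it converges is sensitive to the seed and hyperparameters:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class ParityRNN(nn.Module):
    def __init__(self, hidden=4):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, seq_len, 1)
        h, _ = self.rnn(x)
        return self.out(h[:, -1])         # read out only after the last bit

def batch(n_bits=8, size=64):
    x = torch.randint(0, 2, (size, n_bits, 1)).float()
    y = x.sum(dim=1) % 2                  # parity of the whole string
    return x, y

model = ParityRNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(5000):                     # may or may not converge; see above
    x, y = batch()
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```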
The machine learning system will have to figure out the algorithm on its own, and current approaches can’t do it in a general way, even for relatively simple algorithms.
I guess you’re basically correct on this, since even with the tweaked hyperparameters, on the parity problem RNN+SGD isn’t really doing any better than a brute force search through the space of simple circuits or algorithms. But humans arguably aren’t very good at learning algorithms from input/output examples either. The fact that RNNs can learn the parity function, even if barely, makes it less clear that humans have any advantage at this kind of learning.
Nice work!

Anyway, in a paper published on arXiv yesterday, the Google DeepMind people report being able to train a feed-forward neural network to solve the parity problem, using a sophisticated gating mechanism and weight sharing between the layers. They also obtain state-of-the-art or near-state-of-the-art results on other problems.
This result makes me update upward my belief about the generality of neural networks.
Ah, you beat me to it; I just read that paper as well.
Here is the abstract for those that haven’t read it yet:
This paper introduces Grid Long Short-Term Memory, a network of LSTM cells arranged in a multidimensional grid that can be applied to vectors, sequences or higher dimensional data such as images. The network differs from existing deep LSTM architectures in that the cells are connected between network layers as well as along the spatiotemporal dimensions of the data. It therefore provides a unified way of using LSTM for both deep and sequential computation. We apply the model to algorithmic tasks such as integer addition and determining the parity of random binary vectors. It is able to solve these problems for 15-digit integers and 250-bit vectors respectively. We then give results for three empirical tasks. We find that 2D Grid LSTM achieves 1.47 bits per character on the Wikipedia character prediction benchmark, which is state-of-the-art among neural approaches. We also observe that a two-dimensional translation model based on Grid LSTM outperforms a phrase-based reference system on a Chinese-to-English translation task, and that 3D Grid LSTM yields a near state-of-the-art error rate of 0.32% on MNIST.
Also, relevant to this discussion:
It is core to the problem that the k-bit string is given to the neural network as a whole through a single projection; considering one bit at a time and remembering the previous partial result in a recurrent or multi-step architecture reduces the problem of learning k-bit parity to the simple one of learning just 2-bit parity.
The version of the problem that humans can learn well is this easier reduction. Humans can not easily learn the hard version of the parity problem, which would correspond to a rapid test where the human is presented with a flash card with a very large number on it (60+ digits to rival the best machine result) and then must respond immediately. The fast response requirement is important to prevent using much easier multi-step serial algorithms.
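(Spelling out the reduction described in the quote, in code of my own:)

```python
from functools import reduce

def parity(bits):
    # k-bit parity as k applications of 2-bit parity (XOR), carrying
    # one bit of state between steps.
    return reduce(lambda acc, b: acc ^ b, bits, 0)

assert parity([1, 0, 1, 1]) == 1
```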
You can read the (much more informed) opinion of Ilya Sutskever on the issue here (Yoshua Bengio also participated in the comments).
That is the most cogent, genuinely informative explanation of “Deep Learning” that I’ve ever heard. Most especially so regarding the bit about linear correlations: we can learn well on real problems with nothing more than stochastic gradient descent because the feature data may contain whole hierarchies of linear correlations.
How does this “seed” find the correct high-level sensory features to plug into? How can it wire complex high-level behavioral programs (such as courtship behaviors) to low-level motor programs learned by unsupervised learning?
This particular idea is not well developed yet in my mind, and I haven’t really even searched the literature yet. So keep that in mind.
Leave courtship aside; let us focus on attraction—specifically, evolution needs to encode detectors which can reliably identify high-quality mates of the opposite sex apart from all kinds of other objects. The problem is that a good high-quality face recognizer is too complex to specify in the genome—it requires many billions of synapses, so it needs to be learned. However, the genome can encode an initial crappy face detector. It can also encode scent/pheromone detectors, and it can encode general ‘complexity’ and/or symmetry detectors that sit on top, so even if it doesn’t initially know what it is seeing, it can tell when something is about yay complex/symmetric/interesting. It can encode the equivalent of: if you see an interesting face-sized object which appears for many minutes at a time and moves at this speed, and you hear complex speech-like sounds, and smell human scents, it’s probably a human face.
Then the problem is reduced in scope. The cortical map will grow a good face/person model/detector on its own, and then after this model is ready certain hormones in adolescence activate innate routines that learn where the face/person model patch is and help other modules plug into it. This whole process can also be improved by the use of a weak top-down prior described above.
That being said, some systems—such as Atari’s DRL agent—can be considered simple early versions of ULMs.
Not so fast.
Actually, on consideration, I think you are right and I did get ahead of myself there. The Atari agent doesn’t really have a general memory subsystem. It has an episode replay system, but not general memory. DeepMind is working on general memory—they have the NTM paper and whatnot, but the Atari agent came before that.
I largely agree with your assessment of the Atari DRL agent.
Despite the name, no machine learning system, “deep” or otherwise, has been demonstrated to be able to efficiently learn any provably deep function (in the sense of Boolean circuit depth-complexity), such as the parity function which any human of average intelligence could learn from a small number of examples.
I highly doubt that—but it all depends on what your sampling class for ‘human’ is. An average human drawn from the roughly 10 billion alive today? Or an average human drawn from the roughly 100 billion who have ever lived? (most of whom would have no idea what a parity function is).
When you imagine a human learning the parity function from a small number of examples, what you really imagine is a human who has already learned the parity function, and thus internally has ‘parity function’ as one of perhaps a thousand types of functions they have learned, such that if you give them some data, it is one of the obvious things they may try.
Training a machine on a parity data set from scratch and expecting it to learn the parity function is equivalent to it inventing the parity function—and perhaps inventing mathematics as well. It should be compared to raising an infant without any knowledge of mathematics or anything related, and then training them on the raw data.
However, the genome can encode an initial crappy face detector.
It’s not that crappy given that newborns can not only recognize faces with significant accuracy, but also recognize facial expressions.
The cortical map will grow a good face/person model/detector on its own, and then after this model is ready certain hormones in adolescence activate innate routines that learn where the face/person model patch is and help other modules plug into it.
Having two separate face recognition modules, one genetically specified and another learned, seems redundant, and it’s still not obvious to me how a genetically specified sexual attraction program could find out how to plug into a completely learned system, which would necessarily have some degree of randomness.
It seems more likely that there is a single face recognition module which is genetically specified and then it becomes fine tuned by learning.
I highly doubt that—but it all depends on what your sampling class for ‘human’ is. An average human drawn from the roughly 10 billion alive today? Or an average human drawn from the roughly 100 billion who have ever lived? (most of whom would have no idea what a parity function is).
Show a neolithic human a bunch of pebbles, some black and some white, laid out in a line. Ask them to add a black or white pebble to the line, and reward them if the number of black pebbles is even. Repeat multiple times.
Even without a concept of “even number”, wouldn’t this neolithic human be able to figure out an algorithm to compute the right answer? They just need to scan the line, flipping a mental switch for each black pebble they encounter, and then add a black pebble if and only if the switch is not in the initial position.
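(The strategy they would need, rendered as a toy simulation of mine:)

```python
def choose_pebble(line):                  # line: e.g. ['B', 'W', 'B']
    switch = False                        # the 'mental switch'
    for pebble in line:
        if pebble == 'B':
            switch = not switch           # flip on every black pebble
    return 'B' if switch else 'W'         # keep the black count even

assert choose_pebble(['B', 'W', 'B']) == 'W'   # already even: add white
assert choose_pebble(['B', 'W', 'W']) == 'B'   # odd: add black to even it
```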
Maybe I’m overgeneralizing, but it seems unlikely to me that people able to invent complex hunting strategies, to build weapons, tools, traps, clothing, huts, to participate in tribe politics, etc. wouldn’t be able to figure something like that.
It’s not that crappy given that newborns can not only recognize faces with significant accuracy, but also recognize facial expressions.
Do you have a link to that? ‘Newborn’ can mean many things—the visual system starts learning from the second the eyes open, and perhaps even before that through pattern generators projected onto the retina which help to ‘pretrain’ the visual cortex.
I know that infants have initial face detectors from the second they open their eyes, but from what I remember reading—they are pretty crappy indeed, and initially can’t tell a human face apart from a simple cartoon with 3 blobs for eyes and mouth.
It seems more likely that there is a single face recognition module which is genetically specified and then it becomes fine tuned by learning.
Except that it isn’t that simple, because—amongst other evidence—congenitally blind people still learn a model and recognizer for attractive people, and can discern someone’s relative beauty by scanning faces with their fingertips.
Even without a concept of “even number”, wouldn’t this neolithic human be able to figure out an algorithm to compute the right answer?
Not sure—we are getting into hypothetical scenarios here. Your visual version, with black and white pebbles laid out in a line, implicitly helps simplify the problem and may guide the priors in the right way. I am reasonably sure that this setup would also help any brain-like AGI.
Even without a concept of “even number”, wouldn’t this neolithic human be able to figure out an algorithm to compute the right answer? They just need to scan the line, flipping a mental switch for each black pebble they encounter, and then add a black pebble if and only if the switch is not in the initial position.
Well, given how hard it is for Haitians to understand numerical sorting...

If I understand correctly, in the post you linked Scott is saying that Haitians are functionally innumerate, which should explain the difficulties with numerical sorting.
My point is that the parity function should be learnable even without basic numeracy, although I admit that perhaps I’m overgeneralizing.
Anyway, modern machine learning systems can learn to perform basic arithmetic such as addition and subtraction, and I think even sorting (since they are used for preordering for statistical machine translation), hence the problem doesn’t seem to be a lack of arithmetic knowledge or skill.
Note that both addition and subtraction have constant circuit depth (they are in AC0) while parity has logarithmic circuit depth.
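(For reference, the standard carry-lookahead construction behind the AC0 claim; not from the comment:)

```latex
% Carry-lookahead addition: with generate g_j = a_j \land b_j and
% propagate p_j = a_j \lor b_j, every carry is a flat OR-of-ANDs,
% so n-bit addition has constant-depth unbounded-fan-in (AC^0) circuits.
c_{i+1} = \bigvee_{j=0}^{i} \Big( g_j \ \land \bigwedge_{k=j+1}^{i} p_k \Big)
```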