Silas, let me try to give you a little more explicit answer. This is how I think it is meant to work, although I agree that the description is rather unclear.
Each dot in the diagram is an “artificial neuron”. This is a little machine that has N inputs and one output, all of which are numbers. It also has an internal “threshold” value, which is also a number. The way it works is it computes a “weighted sum” of its N inputs. That means that each input has a “weight”, another number. It takes weight 1 times input 1, plus weight 2 times input 2, plus weight 3 times input 3, and so on, to get the weighted sum. (Note that weights can also be negative, so some inputs can lower the sum.) It then compares this with the threshold value. If the sum is greater than the threshold, it outputs 1, otherwise it outputs 0. If a neuron’s output is a 1 we say it is “firing” or “activated”.
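To make that concrete, here is a tiny sketch in Python of the weighted-sum-and-threshold machine I just described. The particular numbers are made up purely for illustration:

```python
def neuron_output(inputs, weights, threshold):
    """One artificial neuron: weighted sum of the inputs, compared against a threshold."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0   # 1 means the neuron "fires"

# Example with one negative weight: 0.5*1 + 0.8*0 + (-0.3)*1 = 0.2 > 0.1, so it fires
print(neuron_output([1, 0, 1], [0.5, 0.8, -0.3], 0.1))   # prints 1
```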
The diagram shows how the ANs are hooked up into a network, an ANN. Each neuron in Figure 1 has 5 inputs. 4 of them come from the other 4 neurons in the circuit and are represented by the lines. The 5th comes from the particular characteristic which is assigned to that neuron, e.g. color, luminance, etc. If the object has that property, that 5th input is a 1, else a 0. All of the connections in this network are bidirectional, so that neuron 1 receives input from neuron 2, while neuron 2 receives input from neuron 1, etc.
So to think about what this network does, we imagine inputting the 5 qualities which are observed about an object to the “5th” input of each of the 5 neurons. We imagine that the current output levels of all the neurons are set to something arbitrary, let’s just say zero. And perhaps initially the weights and threshold values are also quite random.
When we give the neurons this activation pattern, some of them may end up firing and some may not, depending on how the weights and thresholds are set up. And once a neuron starts firing, that feeds into one of the inputs of the other 4 neurons, which may change their own state. That feeds back through the network as well. This may lead to oscillation or an unstable state, but hopefully it will settle down into some pattern.
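If it helps, here is roughly how I picture that settling process for Network 1, sketched in Python. The specific weights, thresholds, and the choice of starting every output at zero are my own placeholder assumptions, not anything from the post:

```python
import random

N = 5
random.seed(0)

# weights[i][j]: how strongly neuron j's output feeds into neuron i (no self-connection)
weights = [[0.0 if i == j else random.uniform(-1, 1) for j in range(N)] for i in range(N)]
obs_weight = [1.0] * N      # weight each neuron puts on its "5th input" (its observed characteristic)
threshold = [0.5] * N

def settle(observed, max_steps=20):
    """Start all outputs at zero and update repeatedly until the firing pattern stops changing."""
    outputs = [0] * N
    for _ in range(max_steps):
        new_outputs = []
        for i in range(N):
            s = sum(weights[i][j] * outputs[j] for j in range(N)) + obs_weight[i] * observed[i]
            new_outputs.append(1 if s > threshold[i] else 0)
        if new_outputs == outputs:   # stable pattern reached
            return outputs
        outputs = new_outputs
    return outputs                   # may still be oscillating if we get here

print(settle([1, 1, 1, 1, 0]))       # an object showing 4 of the 5 characteristics
```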
Now, according to various rules, we will typically adjust the weights. There are different ways to do this, but I think the concept in this example is that we will try to make the output of each neuron match its “5th input”, the object characteristic assigned to that neuron. We want the luminance neuron to activate when the object is luminous, and so on. So we increase weights that will tend to move the output in that direction, decrease weights that would move it the other way, tweak the thresholds a bit. We do this repeatedly with different objects, making small changes to the weights—this is “training” the network. Eventually it hopefully settles down and does pretty much what we want it to.
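Continuing the previous sketch, here is the kind of weight adjustment I have in mind, a simple perceptron-style correction. The learning rate and the exact update rule are my guess at the intent, not a quote of any particular algorithm:

```python
learning_rate = 0.1   # arbitrary small step size

def train_step(observed):
    """Nudge each neuron's weights so its output moves toward its own observed characteristic."""
    outputs = settle(observed)
    for i in range(N):
        error = observed[i] - outputs[i]       # +1: should have fired but didn't; -1: fired but shouldn't have
        for j in range(N):
            if i != j:
                weights[i][j] += learning_rate * error * outputs[j]
        threshold[i] -= learning_rate * error  # lowering the threshold makes firing easier

# "Training": repeat small adjustments over many example objects
examples = [[1, 1, 1, 1, 1], [0, 0, 0, 0, 0], [1, 1, 0, 1, 1], [0, 0, 1, 0, 0]]
for _ in range(100):
    for obj in examples:
        train_step(obj)
```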
Now we can give it some wrong or ambiguous inputs, and ideally it will still settle into the output pattern that is supposed to go with them. If we input 4 of a blegg’s characteristics, the neuron for the missing 5th characteristic will hopefully end up showing the blegg-style output anyway. It has “learned” the characteristics of bleggs and rubes.
In the case of Network 2, the setup is simpler—each edge neuron has just 2 inputs: its unique observed characteristic, and a feedback value from the center neuron. Each one performs its weighted-sum trick and sends its output to the center one, which has its own set of weights and a threshold that determines whether it activates or not. In this case we want to teach the center one to distinguish bleggs from rubes, so we would train it that way—adjusting the weights a little bit at a time until we find it firing when it is a blegg but not when it is a rube.
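And here is a corresponding sketch of how I read Network 2, again with made-up parameter values standing in for whatever training would actually produce:

```python
# 5 edge neurons, each fed by its observed characteristic plus feedback from the center,
# and one center neuron that sums up the 5 edge outputs. All numbers are placeholders.
edge_obs_w = [1.0] * 5          # each edge neuron's weight on its observed characteristic
edge_fb_w = [0.5] * 5           # each edge neuron's weight on the center's feedback
edge_thresh = [0.5] * 5
center_w = [0.6] * 5            # center neuron's weights on the 5 edge outputs
center_thresh = 1.5

def network2(observed, steps=5):
    """Pass values back and forth a few times, then report whether the center neuron fires."""
    center = 0
    for _ in range(steps):
        edges = [1 if edge_obs_w[i] * observed[i] + edge_fb_w[i] * center > edge_thresh[i] else 0
                 for i in range(5)]
        center = 1 if sum(center_w[i] * edges[i] for i in range(5)) > center_thresh else 0
    return center

print(network2([1, 1, 1, 1, 1]))   # blegg-like observation: center fires (1)
print(network2([0, 0, 0, 0, 0]))   # rube-like observation: center stays off (0)
```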
Anyway, I know this is a long explanation but I didn’t see anyone else making it explicit. Hopefully it is mostly correct.