What do you actually think memories are? Memories are simply reconstructions of a prior state of the system. When you remember something, your brain at least partially returns to the neural activation state it was in when you originally perceived the event you are remembering.
What do you think the “pointer” or “key” to a memory in the human brain is? Generally, it involves priming. Priming is simply presenting a stimulus that has been associated with the prior state.
The “persistent change” you’re looking for is exactly how artificial neural networks learn. They change the strength of the connections between the neurons.
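To make that concrete, here is a minimal sketch in Python (numpy, with a toy Hebbian rule rather than any specific biological mechanism) in which the “persistent change” is literally a change in the stored weight values:

    import numpy as np

    # A minimal, hypothetical sketch: 4 presynaptic units feeding one unit.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=4)       # connection strengths

    def hebbian_update(w, pre, post, lr=0.1):
        # Strengthen each connection in proportion to the co-activation of
        # its presynaptic input and the postsynaptic response.
        return w + lr * post * pre

    pre = np.array([1.0, 0.0, 1.0, 0.0])    # an input activation pattern
    post = float(w @ pre)                   # the unit's response
    w = hebbian_update(w, pre, post)        # the change persists in the weights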
Symbol processing is completely possible with an associative network system. The symbol is encoded as a particular pattern of neuronal activations. The visual letter “A” is really a state of the visual cortex in which a certain combination of neurons fires in response to the brightness-contrast signals that rod and cone cells generate when we see an “A”. The sound “A” is encoded in a similar way, and our brain learns to associate the two. Eventually there is a higher-layer neuron, or pattern of neurons, that activates most strongly when we see or hear an “A”, and this “symbol” can then be combined or associated with other symbols to form words, or otherwise processed by the brain.
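As a toy illustration of that last point (the activation patterns below are made up, not measured cortical codes), a higher-layer unit whose incoming weights overlap both the “visual A” and “auditory A” patterns responds strongly to either one:

    import numpy as np

    # Hypothetical binary activation patterns over a 6-unit "visual" population
    # and a 6-unit "auditory" population (toy codes, purely illustrative).
    visual_A   = np.array([1, 0, 1, 1, 0, 0], dtype=float)
    auditory_A = np.array([0, 1, 1, 0, 1, 0], dtype=float)

    # A higher-layer "A" unit whose incoming weights cover both patterns.
    w_A = np.concatenate([visual_A, auditory_A])

    def response(visual, auditory):
        return float(w_A @ np.concatenate([visual, auditory]))

    print(response(visual_A, np.zeros(6)))      # strong: we see an "A"
    print(response(np.zeros(6), auditory_A))    # strong: we hear an "A"
    print(response(np.zeros(6), np.zeros(6)))   # silent without input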
You don’t need some special mechanism. An associative memory can store any memory input pattern completely, assuming it has enough neurons in enough layers to reconstruct most of the possible states of input.
Key- or pointer-based memory retrieval can be completely duplicated simply by associating the key or pointer with the memory state, such that priming the network with the key or pointer reconstructs the original state.
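A deliberately tiny sketch of this in Python: a Hopfield-style autoassociative memory, where patterns are stored as Hebbian changes in connection strengths and “priming” with a partial cue lets the network settle back into the stored state (the patterns and sizes here are illustrative only):

    import numpy as np

    def store(patterns):
        # Hebbian outer-product storage: every stored pattern is a persistent
        # change in the connection-strength matrix W.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W

    def recall(W, cue, steps=5):
        # Let the network settle from the cue back into a stored state.
        s = cue.astype(float).copy()
        for _ in range(steps):
            s = np.where(W @ s >= 0, 1.0, -1.0)
        return s

    # Two toy "memories" as +/-1 activation patterns (chosen to be orthogonal).
    memories = np.array([
        [ 1, -1,  1, -1,  1, -1,  1, -1],
        [ 1,  1, -1, -1,  1,  1, -1, -1],
    ], dtype=float)
    W = store(memories)

    # "Priming": present only a fragment of the first memory...
    cue = memories[0].copy()
    cue[4:] = 0
    print(recall(W, cue))          # ...and the full pattern is reconstructed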
Key- or pointer-based memory retrieval can be completely duplicated simply by associating the key or pointer with the memory state, such that priming the network with the key or pointer reconstructs the original state.
Yes, this is why I said you can implement general-purpose memory with associative memory. However, you need two additional mechanisms which the naive associative view doesn’t address. First, you need the ability to create a pointer for a newly generated memory and to bind that pointer to the memory; the basic RNN-based associative memory formulation does not have this mechanism, and we have no idea what form it takes in the brain. Second, you need the ability to operate directly on pointers and to store pointers themselves in memory locations that can in turn be pointed to, though this is more a processing constraint.
You’re assuming that a Von Neumann Architecture is a more general-purpose memory than an associative memory system, when in fact, it’s the other way around.
To get your pointer-based memory, you just have to construct a pointer as a specific compression or encoding of the memory in the associative network. For instance, you could mentally associate the number 2015 with a series of memories formed over the last six months. In the future, you could then retrieve all the memories that have been “hashed” to that number just by being primed with the number.
Remember that even on a computer, a pointer is simply a numerical value representing the “address” of the particular segment of data we want to retrieve. In that sense, it is a symbol that refers to and stands for other symbols, not unlike a variable or function name.
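To make the computer side of the analogy concrete, here is a deliberately simplified sketch that treats memory as a flat array of numbered cells and a pointer as nothing more than an integer index into it (real machines add word sizes and virtual addressing, but the principle is the same):

    # Toy model of machine memory: a flat array of numbered cells.
    memory = [0] * 16

    def store(address, value):
        memory[address] = value

    def load(address):
        return memory[address]

    store(7, "hello")       # put a value at address 7
    pointer = 7             # the "pointer" is nothing but the number 7
    print(load(pointer))    # dereferencing = looking up that numbered cell

    # A pointer can itself be stored in a cell and then pointed to:
    store(3, pointer)       # cell 3 now holds the address of the data
    print(load(load(3)))    # follow the stored pointer: prints "hello"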
We can model this easily in an associative memory without any additional mechanisms, simply by having a multi-layer model that can combine and abstract different features of the input space into what are essentially symbols or abstract representations.
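A minimal sketch of that idea, assuming toy vector codes rather than anything biologically measured: a heteroassociative memory that links a “key” pattern (standing in for the cue “2015”) to a “memory” pattern through Hebbian outer-product weights, so that priming with the key retrieves the associated content:

    import numpy as np

    rng = np.random.default_rng(1)
    dim = 32

    # Toy +/-1 codes (purely illustrative): a "key" pattern standing in for the
    # cue "2015", and a "memory" pattern standing in for an episode it indexes.
    key    = rng.choice([-1.0, 1.0], size=dim)
    memory = rng.choice([-1.0, 1.0], size=dim)

    # Heteroassociative Hebbian storage: the association is nothing more than a
    # persistent change in the weights linking key units to memory units.
    W = np.outer(memory, key)

    # "Priming" the network with the key reconstructs the associated memory.
    retrieved = np.sign(W @ key)
    print(np.array_equal(retrieved, memory))    # True: the key acts as a pointer

    # Further key/memory pairs can be superimposed on the same weight matrix:
    #   W += np.outer(another_memory, another_key)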
Von Neumann architecture digital computers are nothing more than physical symbol-processing systems, which is to say that the architecture is just one of many possible implementations of a Turing machine. According to Hava Siegelmann, a recurrent neural network with real precision weights would be, theoretically speaking, a Super Turing Machine.
If that isn’t enough, there are already models called Neural Turing Machines that combine recurrent neural networks with the Von Neumann memory model to create networks that can directly interface with pointer-based memory.
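For anyone curious what that interface looks like, here is a sketch of the content-based read operation from the Neural Turing Machine paper (Graves et al., 2014): the controller emits a key vector, memory rows are scored by cosine similarity to that key, the scores are sharpened and normalised into an addressing distribution, and the read result is the weighted blend of rows. The sizes and the sharpening parameter below are illustrative; the full model also has location-based addressing and learned write heads.

    import numpy as np

    def content_read(M, key, beta=5.0):
        # M: (slots, width) memory matrix; key: (width,) query from the controller.
        # Cosine similarity between the key and every memory row...
        sims = (M @ key) / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)
        # ...sharpened by beta and normalised into an addressing distribution...
        w = np.exp(beta * sims)
        w = w / w.sum()
        # ...and the read vector is the weight-blended content of the rows.
        return w @ M, w

    rng = np.random.default_rng(2)
    M = rng.normal(size=(8, 4))                 # 8 memory slots, 4 numbers each
    query = M[5] + 0.1 * rng.normal(size=4)     # a noisy "pointer" toward slot 5
    read_vector, weights = content_read(M, query)
    print(weights.argmax())                     # expect 5: the best-matching slot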
To get your pointer-based memory, you just have to construct a pointer as a specific compression or encoding of the memory in the associative network.
Again, that’s what I’m saying. How do you get from a memory to a pointer? We do not yet know how the brain does this. We have models that can do this, but very little experimental data. We of course know that it’s possible, we just don’t know the form this mechanism takes in the brain.
You’re assuming that a Von Neumann Architecture is a more general-purpose memory than an associative memory system, when in fact, it’s the other way around.
I’m assuming nothing of the sort. I’m not talking about which kind of memory is more general purpose (and, really, you have to take into account memory plus processing to be able to talk about generality in this sense). I’m talking about what the brain does. The usual ‘associative memory’ view says that all we have is an associative/content-addressable memory system. That’s fine, but it’s like saying the brain is made up of neurons. It lacks descriptive power. I want to know the specifics of how memory formation and recall happens, not hand-waving. Theoretical descriptions can help, but without experimental evidence they are of limited utility in understanding the brain.
That’s why the Hesslow experiment is so intriguing: It is actual experimental evidence that clearly illustrates what a single neuron is capable of learning and shows that even when it comes to such a drastically reduced and simplified system, our understanding is still very limited.
According to Hava Siegelmann, a recurrent neural network with real precision weights would be, theoretically speaking, a Super Turing Machine.
This is irrelevant as real precision weights are physically impossible.