I have seen the synapse=memristor claim before, but I don’t buy it. Perhaps a synapse that’s already connected at both ends acts like one, but consider: a synapse starts out connected at only one end, and grows, and there is computation implicit in where it ends up connecting (some sort of gradient-following most likely). And that’s without allowing for hypotheses like multichannel neurons, which increase the complexity of the synapse dramatically.
Perhaps “close synapse analogs” was a little much, I largely agree with you.
However, in the world of binary circuits it is definitely the closest analog in a relative sense. It isn't strictly necessary though; researchers were making artificial synapses out of transistors and capacitors long before memristors. I recall it usually takes a few dozen components for an analog synapse, but there are many possibilities.
You are right that the biological circuits are much more complex on closer inspection. However, this is also true of analog CMOS circuits. The digital design greatly simplifies everything but loses computational potential in proportion.
That being said, neural circuits are even more complex than analog CMOS—for starters, they can carry negative and positive action potentials. So even after a nonlinear quantization by a neuron, the long-distance signals are roughly ternary, not binary, which has some obvious advantages, such as simple signed math.
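The signed-math advantage can be sketched in a few lines of Python. This is illustrative only; the ternary {-1, 0, +1} encoding is my simplification for the sake of the example, not a claim about how real spike trains are coded:

```python
# Illustrative sketch: with a ternary {-1, 0, +1} signal, signed
# arithmetic at the integration step is free, whereas a binary
# {0, 1} code needs separate machinery to represent inhibition.

def integrate(signals, weights):
    """Weighted sum of ternary inputs; the signs handle themselves."""
    return sum(s * w for s, w in zip(signals, weights))

# Three afferents: one "positive", one silent, one "negative".
signals = [+1, 0, -1]
weights = [0.5, 0.9, 0.3]
total = integrate(signals, weights)  # 0.5*1 + 0.9*0 + 0.3*(-1) = 0.2
```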
Synapses can do somewhat more complicated operations as well; many have multiple neurotransmitter interactions. I'm not sure how, or to what extent, these affect computation.
Your link about multi-channel neurons is interesting, I’ll have to check it out. My initial impulse is “this won’t work, there’s too much noise”. I have a strong prior against models that try to significantly increase the low-level computational power of the cortical circuit simply because our computational models of the cortical circuit already work pretty well.
As for the growth, that is of course key in any model. In most I've seen, the connections grow based on variants of Hebbian-like local mechanisms. The main relevant dynamics can be replicated with a few fairly simple cellular-automata-type rules.
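A minimal sketch of what such a local rule can look like (this is generic textbook Hebbian learning with decay, with arbitrary constants, not any particular growth model from the literature):

```python
# Hedged sketch: a generic Hebbian-style local update. Connections
# between co-active units strengthen; all weights slowly decay.
# The learning rate and decay constants are arbitrary illustrations.

def hebbian_step(w, pre, post, lr=0.1, decay=0.01):
    """One local update: dw[i][j] = lr*pre[i]*post[j] - decay*w[i][j]."""
    return [[w[i][j] + lr * pre[i] * post[j] - decay * w[i][j]
             for j in range(len(post))]
            for i in range(len(pre))]

# Two presynaptic and two postsynaptic units; only pre-unit 0 is active.
w = [[0.0, 0.0], [0.0, 0.0]]
w = hebbian_step(w, pre=[1, 0], post=[1, 1])
# Connections from the active presynaptic unit grow; the rest stay flat.
```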
That being said, neural circuits are even more complex than analog CMOS—for starters, they can carry negative and positive action potentials.
Inhibitory synapses are basically equivalent to a “negative action potential”. They reduce (rather than increase) the likelihood of the next neuron firing.
From my understanding, most synapses are more like an unsigned scalar weight, and the sign component comes from the action potential itself—which can be depolarizing or hyperpolarizing.
A hyperpolarization has a negative sign (it lowers the membrane potential); a depolarization has a positive sign (it raises the membrane potential).
From what I remember neurons tend to specialize in one or the other (+polarization or -polarization based on their bias level), but some synaptic interactions may be able to reverse the sign as well (change an incoming + polarization into a -polarization, or vice versa). However, I’m less sure about how common that is.
So perhaps I do not understand what you mean when you talk about a “negative action potential”.
From my (admittedly rusty) neurobiology, synapses increase or decrease the likelihood of the next neuron reaching action potential and firing.
If the first neuron has a “positive” action potential and a depolarizing synapse… it will increase the likelihood of the next neuron firing (by the amount of weighting on the second neuron).
That should be fundamentally equivalent to the effect caused by a hypothetical “negative” action potential and a hyper-polarising synapse… and vice versa.
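That equivalence amounts to saying the postsynaptic effect is the product of the spike's sign and the synapse's sign. A toy sketch (my framing of the argument, not standard neuroscience notation):

```python
# Hedged sketch of the sign-product equivalence argued above:
# effect = spike sign * synapse sign * weight, so a (+) spike through
# a hyperpolarizing synapse matches a hypothetical (-) spike through
# a depolarizing one.

def psp(spike_sign, synapse_sign, weight):
    """Postsynaptic contribution, modeled as a signed product."""
    return spike_sign * synapse_sign * weight

a = psp(+1, -1, 0.7)  # positive spike, hyperpolarizing synapse
b = psp(-1, +1, 0.7)  # hypothetical negative spike, depolarizing synapse
assert a == b == -0.7  # identical downstream effect: depress by 0.7
```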
I think my biology was more rusty than yours; I was confusing inhibitory postsynaptic potentials with “negative action potentials”. It looks like there is only one type of action potential coming out of a neuron along an axon; the positive/negative weighting occurs at the synaptic junctions and carries over to integration on the dendrite.
That should be fundamentally equivalent to the effect caused by a hypothetical “negative” action potential and a hyper-polarising synapse… and vice versa.
Yes, that was what I was thinking when I (accidentally) made up that term.