I am going to answer this comment because it is the first to address the analysis section. Thank you.
I close the paragraph by saying that there are no functions anywhere, and that it will aggrieve some. The shift I am trying to suggest is for those who want to analyse the system using mathematics and could be dismayed by the absence of functions to work with.
Distributions can be a place to start. The quantilisers are a place to restart mathematical analysis. I gave some links to an existing field of mathematical research that is working along those lines.
Check this out: they are looking for a multi-dimensional extension of the concept. Here it is, I suggest.
Welcome aboard this IT ship, to boldly go where no one has gone before!
Indeed, I just wrote ‘when it spikes’ and, further on, ‘the low threshold’, and no more. I work in complete isolation, and some things are so obvious inside my brain that I do not realise they are not obvious to others.
It is part of the ‘when’ aspect of learning, but it uses an internal state of the neuron instead of external information from the quantilisers.
If a neuron shows little reaction to a sample (spiking happens slowly, or not at all), the sample is meaningless for that neuron and you should ignore it. If the spike comes too fast, the sample is already ‘in’ the system and there is no point in adding to it. You are right to say the first rule is more important than the second.
Originally, there was only one threshold instead of three. When learning, the update would only take place if the threshold was reached after a minimum of two cycles (or three, but then it converges unbearably slowly), and only for the connections that had been active at least twice. I ‘compacted’ it for use within one cycle (to make it look simpler), which gave a minimum of 50% of the threshold; I then adjusted (might as well) that value by scanning around it, and finally added the upper threshold, more to limit the number of updates than to improve the accuracy (although it contributes a small bit). The best result is with 30% and 120%, whatever the size or the other parameters.
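In rough code terms, the gating looks like this (a minimal sketch only: the names, the potential variable, and the exact update form are illustrative placeholders, not my actual code; only the 30% and 120% gates around the spiking threshold are the real values):

```python
# Illustrative sketch: names and the update form are placeholders;
# only the 30% / 120% gates around the spiking threshold come from the text.

SPIKE_THRESHOLD = 1.0   # the neuron's spiking threshold (normalised here)
LOW_FACTOR = 0.30       # below 30% of threshold: too little reaction, ignore
HIGH_FACTOR = 1.20      # above 120% of threshold: already 'in', do not add

def maybe_update(weights, inputs, potential, learning_rate=0.01):
    """Apply the learning update only when the neuron's end-of-cycle
    potential lies between the low and high gates."""
    if potential < LOW_FACTOR * SPIKE_THRESHOLD:
        return weights   # rule 1: the sample is meaningless for this neuron
    if potential > HIGH_FACTOR * SPIKE_THRESHOLD:
        return weights   # rule 2: nothing useful left to add
    # Update only the connections that were active during the cycle
    # (the one-cycle stand-in for 'active at least twice').
    return [w + learning_rate * x if x > 0 else w
            for w, x in zip(weights, inputs)]
```

The two gates replace the original two-cycle condition: instead of counting cycles, one cycle's accumulated potential tells you whether the reaction was too weak or too strong to be worth an update.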
Before writing this, I quickly checked on PI-F-MNIST. The run is still ongoing, but the rule seems to hold true even on that dataset (BTW: use quantUpP = 3.4 and quantUpN = 40.8 to get to 90.2% with 792 neurons and 90.5% with 7920).
Since you seem interested, feel free to contact me through private message. There is more in my bag than can fit in a post or comment, and I can provide you with more complete code (this one is triple distilled).
Thank you very much for your interest.