First of all, congrats! It’s neat that your model beats state of the art on this benchmark, and with a new method of modeling too.
I feel like this post wasn’t sufficient to convince me to use spiking models or your curriculum strategy. I think in part this is because I’m pretty jaded. The recent-ish history of machine learning includes a whole slew of comp-neuro-derived models, and they almost always come with two things:
1. SOTA on some benchmark I’ve never heard of before (but which still could be valuable/interesting! -- just unpopular enough that I didn’t know it)
2. Strong arguments that their architecture/model/algorithm/etc. is the best and will eventually beat out every other approach to AI
So it feels like I’m skeptical on priors, which is a bit useless to say out loud. I’m curious where this research goes from here, and if you do more, I hope you consider sharing it.
I do think that one of the least understood features of comp neuro models used for machine learning (of which deep neural networks are currently the top contender, but other candidates would be Boltzmann machines and reservoir computing) is the inductive bias / inductive prior they bring to machine learning problems.
I think it’s possible that spiking neural networks have better inductive priors than other models, or at least better than the models we’re using today.
The sparsity you mention is also probably a good thing.
The curriculum this induces is neat. My personal take on ML today is that learning from IID-sampled data is pretty crazy (imagine trying to teach a child math by randomly selecting problems from all of mathematics). It’s possible this is a better way to do it.
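To make the contrast concrete, here is a toy sketch in Python (the `problems` list, the difficulty scores, and both sampler functions are invented for illustration, not taken from the post; any per-example difficulty signal would do):

```python
import random

# Toy illustration of IID sampling vs. curriculum ordering.
# Each entry is (problem, hypothetical difficulty score).
problems = [
    ("1+1", 1),
    ("2+2", 1),
    ("37*12", 3),
    ("solve x^2-5x+6=0", 5),
    ("integrate x^2 dx", 7),
]

def iid_stream(data, steps, seed=0):
    """IID sampling: every problem is equally likely at every step."""
    rng = random.Random(seed)
    return [rng.choice(data) for _ in range(steps)]

def curriculum_stream(data, steps):
    """Curriculum ordering: walk through problems from easiest to hardest."""
    ordered = sorted(data, key=lambda item: item[1])  # sort by difficulty
    return [ordered[min(i * len(ordered) // steps, len(ordered) - 1)]
            for i in range(steps)]

print(iid_stream(problems, 5))         # hard problems can appear immediately
print(curriculum_stream(problems, 5))  # easy problems first, hard ones last
```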
Thank you for the congrats; it helps.
Note that I only claim to reach SOTA, not to beat it.
It would be preposterous to expect to convince anybody with this limited evidence. The goal is to raise interest so that some will spend time looking deeper into it. Most will not, of course, for many reasons, and yours is a valid one.
The advantage of this one is its simplicity: at this point, any coder can take it up and build on it. It has to be turned into a new type of construction set. I would like it to give the 15-year-olds of today the pleasure my first computer (machine language) gave me, and Legos before that.
You got the last bit right. That is what self-organisation provides: ad hoc selection.