This is interesting in that it’s biologically plausible, in ways that backpropagating neural networks aren’t. But this doesn’t have state of the art performance, and I’m going to register the prediction that, when scaled and tuned, it will still fail to match the best existing scaled-and-tuned neural networks. I could be wrong about this, and the framing of how this algorithm works is different enough from the usual backpropagation that I’d be interested in the results of the experiment, but I’m not all that excited (or scared).
You report 98.9% accuracy on PI-MNIST. For comparison, this 2015 paper on ladder networks claims (table 1) to have gotten 0.57% test error (99.43% accuracy) when using the full training set. I haven’t looked very hard for other papers (the permutation invariance makes for a weird search term), but I expect a thorough search of the literature to turn up significantly better numbers. I also have a preexisting belief that MNIST in particular has a large number of easy samples and a small number of hard samples, such that getting to ~98% is easy and then it gets progressively harder from there.
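(For anyone unfamiliar with the term: “permutation-invariant” MNIST just means the 784 pixels are fed to the model as an unordered flat vector, equivalently with a fixed random permutation applied, so spatial priors like convolutions can’t help. A rough sketch of that setup and of the error-rate/accuracy conversion quoted above, in plain NumPy with purely illustrative names:)

```python
# Illustrative sketch only: what "permutation-invariant" means for MNIST.
# A single fixed random permutation is applied to the 784 flattened pixels of
# every image, so a model cannot exploit 2-D spatial structure.
import numpy as np

rng = np.random.default_rng(0)
perm = rng.permutation(28 * 28)          # one fixed permutation for the whole dataset

def to_pi_mnist(images):
    """images: (N, 28, 28) uint8 array -> (N, 784) permuted float array."""
    flat = images.reshape(len(images), -1).astype(np.float32) / 255.0
    return flat[:, perm]                 # same column shuffle for every image

# Error rate vs. accuracy, as quoted above:
print(1.0 - 0.0057)                      # 0.57% test error -> 99.43% accuracy
print(1.0 - 0.989)                       # 98.9% accuracy   -> 1.1% test error
```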
See my answer to mlem_mlem_mlem for the second part of your comment.
You are raising another interesting point: scaling up and tuning.
As I indicated in the roadmap, nature has chosen the way of width over that of depth.
The cortical sheet is described as a 6-layer structure, but only 3 of those layers contain neurons, and only 2 contain pyramidal neurons. That is not deep. Then we see columns, functional ‘zones’, ‘regions’… There is an organisation, but it is not very deep. The number of columns in each ‘zone’, on the other hand, is very large. Also note that the neuron is deemed ‘stochastic’, so high precision is not possible. Lastly, note (sad but true) that those who got the prize worked on technical simplification for practical use.
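(To make the width-versus-depth contrast concrete, here is a toy parameter-count comparison of a wide, shallow fully connected stack against a narrow, deep one with a similar budget; the numbers are purely illustrative and not the network discussed in the post.)

```python
# Illustrative sketch only: "wide and shallow" vs "narrow and deep" MLPs with a
# comparable parameter budget, to make the width-vs-depth trade-off concrete.
def mlp_params(layer_sizes):
    """Total weight + bias count for a fully connected stack of the given sizes."""
    return sum(a * b + b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

wide_shallow = [784, 8000, 10]                 # one very wide hidden layer
narrow_deep  = [784] + [700] * 12 + [10]       # many moderate hidden layers

print(mlp_params(wide_shallow))   # ~6.4M parameters
print(mlp_params(narrow_deep))    # ~6.0M parameters
```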
There are two options at this stage:
We consider, as the symbolic school has since 1969, that the underlying substrate is unimportant and, if we can find mathematical ways to describe it, we will be able to reproduce it, or...
We consider that nature has done the work properly (we are here to attest to that), and we should look at how it did it.
1986 was an acceptable compromise, for a time. 2026 will mark one century since the 5th Solvay Conference.