I am going after pure BP/SGD, so neural networks only (no SVMs), no convolutions, ...
No pre-processing either; that would be changing the dataset.
It is just a POC, to make a point: you do not need mathematics for AGI. Our brain does not.
I will publish a follow-up post soon.
Also, no regularisation. I wrote about that in the analysis.
Without max-norm (or maxout, ladder networks, VAT: all forms of regularisation), BP/SGD only achieves 98.75% (the figure from the 2014 dropout paper).
Regularisation must come from outside the system (SO can be seen that way) or through local interactions (neighbors). Many papers clearly suggest that this should improve the result.
That is yet to be done.
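To make the comparison concrete, here is a minimal numpy sketch (an illustration only, with made-up layer sizes, not the system or code from the post) contrasting a plain BP/SGD weight update with the max-norm constraint from the dropout paper, i.e. the kind of vector-norm regularisation being left out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 784 inputs (flattened MNIST pixels), 100 hidden units.
W = rng.normal(scale=0.01, size=(784, 100))

def sgd_step(W, grad, lr=0.1):
    """Plain SGD update: no weight decay, no norm constraint."""
    return W - lr * grad

def max_norm(W, c=2.0):
    """Max-norm constraint (as in the dropout paper): if a hidden unit's
    incoming weight vector exceeds norm c, rescale it back to norm c."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    return W * np.minimum(1.0, c / (norms + 1e-12))

grad = rng.normal(scale=0.01, size=W.shape)  # stand-in for a backprop gradient
W = sgd_step(W, grad)                        # the 'pure BP/SGD' baseline stops here
# W = max_norm(W)                            # the extra vector-norm regularisation being excluded
```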
What is BP in BP/SGD?
So, as I see it, there are three possible fairness criteria which define what we can compare your model with:
1. Virtually anything goes: convolutions, CNNs, pretraining on ImageNet, …
2. Only permutation-invariant models are allowed; everything else is disallowed. For instance, MLPs are OK, CNNs are forbidden, tensor decompositions are forbidden, and SVMs are OK as long as the transformations used are permutation-invariant. Pre-processing is allowed as long as it is permutation-invariant (see the sketch after this list).
3. The restriction from criterion 2 applies, and in addition the model must be biologically plausible, or, shall we say, similar to the brain (or perhaps to how a potential brain of another creature might work; I am not sure). This rules out SGD, regularisation that uses vector norms, and so on; strengthening neuron connections based on something that happens locally is allowed.
Personally, I know basically nothing about the landscape of models satisfying criterion 3.
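For concreteness, the 'permutation-invariant' in criterion 2 can be illustrated with a short sketch (hypothetical numpy code, not from the post): PI-MNIST fixes one random shuffling of the 784 pixel positions and applies it to every image, which leaves MLP-style models with an equivalent problem but destroys the 2-D structure that convolutions rely on.

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed, random shuffling of the 784 pixel positions, shared by
# every train and test image: this is what the 'PI' in PI-MNIST means.
perm = rng.permutation(784)

def to_pi(images):
    """images: (N, 784) flattened MNIST digits -> permuted copies."""
    return images[:, perm]

# A permutation-invariant model (an MLP on the flat vector) faces an
# equivalent problem before and after to_pi(); its input weights are
# simply relabelled. A CNN does not, because the 2-D pixel
# neighbourhoods it exploits are destroyed by the shuffle.
fake_batch = rng.random((32, 784))   # stand-in for real MNIST images
pi_batch = to_pi(fake_batch)
assert pi_batch.shape == fake_batch.shape
```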
BP is Back-Propagation.
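For readers who want the mechanics spelled out, a minimal sketch (an illustration with made-up sizes, not the post's code) of a single BP/SGD step on a one-hidden-layer network: the error is propagated backwards through the layers (BP), and each weight takes a small step against its gradient (SGD).

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer network: 784 -> 100 -> 10, squared-error loss.
W1 = rng.normal(scale=0.01, size=(784, 100))
W2 = rng.normal(scale=0.01, size=(100, 10))

def bp_sgd_step(x, target, W1, W2, lr=0.1):
    # Forward pass.
    h = np.tanh(x @ W1)           # hidden activations
    y = h @ W2                    # output
    # Backward pass: back-propagation of the error.
    dy = y - target               # dLoss/dy for squared error
    dW2 = np.outer(h, dy)
    dh = (W2 @ dy) * (1 - h**2)   # chain rule through tanh
    dW1 = np.outer(x, dh)
    # Stochastic gradient descent update.
    return W1 - lr * dW1, W2 - lr * dW2

x = rng.random(784)               # stand-in for one flattened MNIST image
target = np.zeros(10); target[3] = 1.0
W1, W2 = bp_sgd_step(x, target, W1, W2)
```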
We are completely missing the point here.
I had to use a dataset for my explorations, and MNIST was simple; I used PI-MNIST to show an ‘impressive’ result so that people would have to look at it. I expected the ‘PI’ to be understood, but it is not. Note that I could readily answer the ‘F-MNIST challenge’.
If I had just expressed an opinion on how to go about AI, the way I did in the roadmap, it would, rightly, have just been ignored. The point was to show that it is not ‘ridiculous’ and that the system fits with that roadmap.
I see that your last post is about complexity science. This is an example of it. The domain of application is nature. Nature is complex, and mathematics has difficulty with complexity. The field of chaos theory petered out in the 80s for that reason. If you want to know more about it, start with Turing's morphogenesis paper (read the conclusion), then Prigogine. In neural networks, there is Kohonen.
Some things are theoretically correct but practically useless: you know how to win the lotto, but nobody does it. Better to have something simple that works and can be reasoned about, even without a mathematical theory. AI is not quantum physics.
Maybe it could be said that intelligence is cutting through all the details and then reasoning with what is left, but the devil is in those details.