Seriously, this system works ‘online’. I gave the example of the kids and the Dutch to illustrate that, in nature, things change around us and we adjust what we know to the new conditions. A learning process should not have a stopping criterion.
The system converges, on PI-MNIST, at 3-5 million steps. For comparison, recent research papers stop at 1 million, but keep in mind that we only update about 2 out of 10 groups at each step, so the total amount of work is roughly equivalent.
So you can use "for( ; b<5000000 ; b++ )" instead of "while( 1 == 1 )" in the batch() function.
After convergence, accuracy stays within a 0.1% margin indefinitely. You can design a stop test around that if you want, or around the fact that the weights stabilise, or anything of that kind.
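For instance, here is a minimal sketch of such a stop test, assuming the training loop lives inside batch() as above. step() and test_accuracy() are hypothetical stand-ins for one training update and a test-set evaluation; they are not names from the original code.

    #include <math.h>

    #define CHECK_EVERY 100000  /* evaluate the test set every N steps     */
    #define MARGIN      0.1     /* band width, in percentage points        */
    #define WINDOW      10      /* consecutive in-band checks before stop  */

    void   step(void);          /* assumed: one training update            */
    double test_accuracy(void); /* assumed: returns test accuracy in %     */

    void batch(void)
    {
        double ref = -1.0;      /* reference accuracy for the current band */
        int stable = 0;         /* consecutive checks inside the band      */

        for (long b = 0; ; b++) {   /* replaces while( 1 == 1 )            */
            step();
            if (b % CHECK_EVERY != 0)
                continue;

            double acc = test_accuracy();
            if (ref >= 0.0 && fabs(acc - ref) <= MARGIN) {
                if (++stable >= WINDOW)
                    return;     /* accuracy held within 0.1%: call it done */
            } else {
                ref = acc;      /* accuracy moved: reset the band          */
                stable = 0;
            }
        }
    }

The same pattern works for a weight-based test: replace test_accuracy() with a measure of how much the weights moved since the last check, and stop once that measure stays small.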
If you were to train on a specific selection of the dataset, wait until it stabilises, and then switch to the whole set, the system would ‘start learning again’ and adjust to that change. Forever.
We never stop learning.
To kill the program, hit Ctrl+C.
It is a feature, not a bug.