How does the batch() procedure terminate? It contains an infinite while loop, within which there is neither a return nor a break statement. Maybe there are some goings-on concerning termination of threads, but that is beyond my knowledge of C or C++.
We never stop learning.
To kill the program, hit "Ctrl+C".
Seriously, this system works "online". I gave the example of the kids and the Dutch to illustrate that, in nature, things change around us and we adjust what we know to the new conditions. A learning process should not have a stopping criterion.
The system converges, on PI-MNIST, after 3-5 million steps. For comparison, recent research papers stop at 1 million steps, but keep in mind that we only update about 2 out of 10 groups at each step, so it is roughly equivalent.
So you can use "for( ; b<5000000 ; b++ )" instead of "while( 1 == 1 )" in the batch() function.
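For concreteness, here is a minimal sketch of that change. The loop body is not reproduced here: run_one_step() is a made-up placeholder for whatever one iteration of batch() actually does, and the 5,000,000 bound simply matches the convergence figure quoted above.

    /* Sketch only: run_one_step() stands in for the work done in one
     * iteration of the original infinite loop of batch(). */
    void run_one_step(void);

    void batch(void)
    {
        long b = 0;

        /* Original, online form (never terminates):
         *     while (1 == 1) { run_one_step(); }
         * Bounded form: stop after ~5 million steps, roughly where
         * the system has converged on PI-MNIST. */
        for ( ; b < 5000000; b++)
            run_one_step();
    }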
After convergence, the accuracy stays within a 0.1% margin from then on. You can design a stopping test around that if you want, or around the fact that the weights stabilise, or anything of that kind.
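One way to build such a test, purely as an illustration (none of these names exist in the released code): track the best accuracy seen so far and stop once it has not improved by more than 0.1% over some number of consecutive evaluations. evaluate_accuracy() and the window length below are assumptions made for the example.

    /* Hypothetical stopping test built on the 0.1% margin mentioned
     * above. evaluate_accuracy() is a placeholder for whatever
     * measures test accuracy; WINDOW is an arbitrary patience. */
    #define WINDOW 20
    #define MARGIN 0.1

    extern double evaluate_accuracy(void);  /* placeholder */

    static double best_acc = 0.0;
    static int stagnant = 0;

    int should_stop(void)
    {
        double acc = evaluate_accuracy();

        if (acc > best_acc + MARGIN) {
            best_acc = acc;                 /* still improving */
            stagnant = 0;
        } else {
            stagnant++;                     /* within the 0.1% band */
        }
        return stagnant >= WINDOW;          /* stop after WINDOW flat checks */
    }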
If you were to train on a specific subset of the dataset, wait until it stabilises, and then switch to the whole set, the system would "start learning again" and adjust to that change. Forever.
It is a feature, not a bug.