The ideal gas law describes relations between macroscopic gas properties like temperature, volume, and pressure, e.g. “if you raise the temperature and keep the volume the same, the pressure will go up.” The gas is actually made up of a huge number of individual particles, each with its own position and velocity at any one time, but trying to understand the gas’s behavior by looking at a long list of particle positions/velocities is hopeless.
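For reference, the macroscopic relation behind that example is the ideal gas law:

$$PV = nRT$$

With the volume $V$ and amount of gas $n$ held fixed, the pressure $P$ is directly proportional to the temperature $T$ (here $R$ is the gas constant), which is exactly the “raise temperature, pressure goes up” behavior.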
Looking at a list of neural network weights is analogous to looking at particle positions/velocities. This post claims there are quantities analogous to pressure/volume/temperature for a neural network (AFAICT it does not offer an intuitive description of what they are).
I did not write down the list of quantities because you need to go through the math to understand most of them. One very central object is the neural tangent kernel, but there are also algorithm projectors, universality classes, etc., each of which requires a lengthy explanation that I decided was beyond the scope of this post.
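To make the central object slightly more concrete: at a fixed set of weights, the empirical neural tangent kernel is just the inner product of the network’s parameter-gradients at two inputs, $\Theta(x_1, x_2) = \langle \nabla_\theta f(x_1), \nabla_\theta f(x_2) \rangle$. Here is a minimal sketch in JAX (the tiny `mlp` architecture and its sizes are illustrative assumptions of mine, not anything from the post):

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Toy two-layer MLP with a scalar output; purely illustrative.
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return (h @ w2 + b2).squeeze()

def empirical_ntk(params, x1, x2):
    # NTK entry: inner product of parameter-gradients at two inputs,
    # Theta(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>.
    g1 = jax.grad(mlp)(params, x1)
    g2 = jax.grad(mlp)(params, x2)
    leaves1 = jax.tree_util.tree_leaves(g1)
    leaves2 = jax.tree_util.tree_leaves(g2)
    return sum(jnp.vdot(a, b) for a, b in zip(leaves1, leaves2))

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
d, h = 3, 16  # made-up input and hidden sizes
params = (jax.random.normal(k1, (d, h)), jnp.zeros(h),
          jax.random.normal(k2, (h, 1)), jnp.zeros(1))
x1, x2 = jax.random.normal(k3, (2, d))
print(empirical_ntk(params, x1, x2))
```

Roughly, the point of the theory is that in the infinite-width limit this kernel stays fixed during training, so it can serve as a macroscopic summary of the weights in the way pressure summarizes particle collisions.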
This feels important, but after the ideal gas analogy it’s a bit beyond my vocabulary. Can you (or another commenter) distill a bit for a dummy?
I think 3blue1brown’s videos give a good first introduction to neural nets (the “atomic” description):
Does this help?