“…could be made or is already conceptually general enough to learn everything there is to learn”
Universality of neural networks is a known result (in the sense: A basic fully-connected net with an input layer, hidden layer, and output layer can represent any function given sufficient hidden nodes).
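As a quick illustration of that universality claim, here is a minimal sketch (a toy example, not from the paper; the target function, width, and training setup are arbitrary choices): a single hidden layer of tanh units fitted to sin(x) by plain gradient descent. Widening the hidden layer lets the fit get arbitrarily close on the interval.

```python
# Toy sketch of the universality claim (not from the paper): one hidden layer
# of tanh units, trained by plain gradient descent to fit sin(x) on an
# interval. More hidden nodes -> a closer fit.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # inputs
y = np.sin(x)                                        # target function

n_hidden = 50
W1 = rng.normal(0.0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(10000):
    h = np.tanh(x @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # linear output layer
    err = pred - y

    # Gradients of the mean squared error, backpropagated by hand.
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)

    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

print("max abs error:", float(np.abs(pred - y).max()))
```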
The idea behind RG is to find a new coarse-grained description of the spin system where one has “integrated out” short distance fluctuations.
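To make that concrete, here is a minimal sketch of the simplest block-spin version of this idea (a toy example, not the variational scheme used in the paper): each 2x2 block of Ising spins is replaced by one spin via a majority rule, which throws away the fluctuations inside the block.

```python
# Toy block-spin coarse-graining (a sketch, not the paper's variational RG):
# each 2x2 block of +/-1 Ising spins is replaced by a single spin via majority
# rule, i.e. the short-distance detail inside the block is discarded.
import numpy as np

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(64, 64))   # a 64x64 spin configuration

def coarse_grain(s, block=2):
    n = s.shape[0] // block
    block_sums = s.reshape(n, block, n, block).sum(axis=(1, 3))
    new = np.sign(block_sums)
    new[new == 0] = 1            # break 2-vs-2 ties arbitrarily
    return new

c1 = coarse_grain(spins)         # 32x32 coarse description
c2 = coarse_grain(c1)            # 16x16: iterate the step
print(spins.shape, c1.shape, c2.shape)
```

Iterating the map is what moves the description to longer and longer length scales.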
“Universality of neural networks is a known result (in the sense: A basic fully-connected net with an input layer, hidden layer, and output layer can represent any function given sufficient hidden nodes).”
Nitpick: Any continuous function on a compact set. Still, I think this should include most real-life problems.
Universality of functions: Yes (inefficiently so). But the claim made in the paper goes deeper.
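For reference, the precise statement behind the last two comments is the classical universal approximation theorem (Cybenko, Hornik): for every continuous $f$ on a compact set $K \subset \mathbb{R}^n$, every $\varepsilon > 0$, and any fixed sigmoidal (more generally, non-polynomial) activation $\sigma$, there exist a width $N$ and parameters $w_i \in \mathbb{R}^n$, $v_i, b_i \in \mathbb{R}$ such that

$$\sup_{x \in K} \left| \, f(x) - \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon .$$

The “inefficiently so” caveat is real: the required $N$ can grow very quickly with the accuracy and the input dimension.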
Can you explain? I don’t know much about renormalization groups.
Physics has lots of structure that is local. ‘Averaging’ over local structures can reveal higher level structures. On rereading I realized that the critical choice remains in the way the RG is constructed. So the approach isn’t as general as I initially imagined it to be.
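To make the ‘critical choice’ point concrete, here is a toy comparison (again a sketch, not the paper’s construction): two reasonable coarse-graining rules applied to the same spin configuration generally produce different coarse descriptions, so how the averaging is done is itself part of the modeling.

```python
# Sketch of the "choice of RG" point (toy example, not the paper's construction):
# two reasonable coarse-graining rules applied to the same configuration
# generally disagree, so the averaging scheme is itself a modeling decision.
import numpy as np

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(64, 64))

def majority_rule(s, block=2):
    n = s.shape[0] // block
    sums = s.reshape(n, block, n, block).sum(axis=(1, 3))
    out = np.sign(sums)
    out[out == 0] = 1            # break ties arbitrarily
    return out

def decimation(s, block=2):
    # Keep one representative spin per block, drop the rest.
    return s[::block, ::block]

a = majority_rule(spins)
b = decimation(spins)
print("fraction of coarse spins where the two rules disagree:",
      float((a != b).mean()))
```

Which rule is ‘right’ depends on what structure one wants to preserve, which is exactly the non-trivial choice mentioned above.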