Interesting! I think the problem is that dense/compressed information can be represented in ways that are not easily retrievable by a given decoder. The Standard Model written in Chinese is a very compressed representation of human knowledge of the universe, and completely inscrutable to me. Or take some maximally compressed code and pass it through a permutation: the information content is obviously the same, but it is illegible until you reverse the permutation.
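To make the permutation point concrete, here is a minimal Python sketch (the message text, the random seed, and the choice of zlib are purely illustrative assumptions): it compresses a string, shuffles the compressed bytes with an invertible permutation, shows that the standard decoder can no longer read the shuffled stream, and then recovers the original exactly by inverting the permutation. Nothing about the bytes changes except their order, so the information content is untouched.

```python
import random
import zlib

# Illustrative message and compression (zlib chosen only for convenience).
message = b"The Standard Model is a very compressed description of the universe. " * 20
compressed = zlib.compress(message, 9)

# Apply a fixed but "secret" permutation to the compressed bytes.
rng = random.Random(0)
perm = list(range(len(compressed)))
rng.shuffle(perm)
scrambled = bytes(compressed[i] for i in perm)

# The standard decoder almost certainly fails on the scrambled stream...
try:
    zlib.decompress(scrambled)
except zlib.error:
    print("scrambled stream is illegible to the standard decoder")

# ...but inverting the permutation recovers the original message exactly.
inverse = [0] * len(perm)
for position, source in enumerate(perm):
    inverse[source] = position
restored = bytes(scrambled[i] for i in inverse)
assert zlib.decompress(restored) == message
print("same information content: message recovered after un-permuting")
```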
In some ways it is uniquely easy to do this to codes with maximal entropy, because by definition it will be impossible to detect a pattern and recover a readable explanation.
In some ways the compressibility of NNs is a proof that a simple model exists, without revealing an understandable explanation.
I think we can have an (almost) minimal yet readable model without the exponentially decreasing information density required by LDCs.
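For reference, here is the LDC tradeoff I have in mind, stated as a rough sketch (assuming LDC here means locally decodable code; the symbols follow the standard definition, and I only quote the well-known 2-query lower bound rather than the state of the art for larger query counts):

```latex
% A (q, \delta, \varepsilon)-locally decodable code C : \{0,1\}^n \to \{0,1\}^m
% lets any message bit x_i be recovered with probability at least 1/2 + \varepsilon
% by querying only q positions of a codeword corrupted in up to a \delta fraction
% of its positions.
%
% The "exponentially decreasing information density" refers to lower bounds such as
% the Kerenidis--de Wolf bound for q = 2:
\[
  m \;=\; 2^{\Omega(n)}
  \qquad \text{for every 2-query LDC } C : \{0,1\}^n \to \{0,1\}^m ,
\]
% so the rate n/m of such a code vanishes exponentially as n grows.
```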