I found this brainstorming interesting and nothing you suggested jumped out to me as obviously wrong.
As far as formalisations of natural abstractions go, the one I’m most sympathetic to/find most natural (pun acknowledged) is the redundant information concept.
I have a separate impression that good abstraction should allow you to compress the world better (more efficient world models). And the “redundant information” idea seems to gesture in the direction of high compressibility.
The notion of intelligence as compression is an old one; I believe Marcus Hutter was the first to formalize it back in the early 2000s (this is also where AIXI comes from). The problem with Hutter’s formalism is that his measure of compressibility (Kolmogorov complexity) is uncomputable: “find the shortest Turing machine that outputs X” requires deciding which candidate machines halt, so no algorithm can compute it in general.
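To make the asymmetry concrete: any off-the-shelf compressor gives a *computable upper bound* on Kolmogorov complexity (the decompressor plus the compressed output is a program that prints the data), but no computable lower bound exists in general, which is where the uncomputability bites. A minimal sketch using Python’s zlib (the specific compressor choice is just illustrative):

```python
import os
import zlib

def compression_upper_bound(data: bytes) -> int:
    """Compressed length in bytes. This upper-bounds Kolmogorov
    complexity up to an additive constant (the fixed decompressor),
    but tells us nothing about how far above K(data) we are."""
    return len(zlib.compress(data, level=9))

structured = b"ab" * 500       # highly regular, 1000 bytes
random_ish = os.urandom(1000)  # incompressible with high probability

print(compression_upper_bound(structured))  # tiny: the pattern is found
print(compression_upper_bound(random_ish))  # roughly 1000: no pattern found
```

The gap between the two calls illustrates the point: the compressor certifies that `structured` is simple, but it can never certify that `random_ish` is complex — a cleverer (uncomputable-to-find) program might still compress it.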
I believe that, in this paradigm, the NAH is fundamentally saying: well, for “natural” data, compressibility is computable; there’s some minimal representation to which any sufficiently powerful (yet still finite) model will converge. The problem is, therefore, to figure out what a sufficiently powerful model looks like.
Nate Soares suggested cross entropy instead as a measure of simplicity.
While I like his idea, it’s still not computable.
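My gloss on why cross entropy is attractive here (this is my reading, not necessarily Soares’s formulation): relative to a fixed probabilistic model, data has an idealized code length of −log₂ p per symbol, which is perfectly computable; the uncomputability only resurfaces when you ask for the best model over all computable models. A toy sketch:

```python
import math
from collections import Counter

def cross_entropy_bits(data: str, model: dict) -> float:
    """Average idealized code length in bits per symbol when `data`
    is encoded with a code matched to `model`: mean of -log2 p(c)."""
    return -sum(math.log2(model[c]) for c in data) / len(data)

text = "abababababababab"
# Empirical model fitted to the text itself (8 a's, 8 b's -> p = 0.5 each).
counts = Counter(text)
model = {c: n / len(text) for c, n in counts.items()}

print(cross_entropy_bits(text, model))  # 1.0 bit/symbol for this 50/50 text
```

So for any *given* model class the measure is easy to evaluate; “find the model minimizing cross entropy over all computable models” is the step that remains uncomputable, which I take to be the residual problem with the proposal.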