(My attempt at an explanation:)
In short, we care about the class of observers/agents that receive redundant information in a similar way.

I think we can look at the specific dynamics of the systems described here to get a better perspective on whether the NAH should hold:

You can think of the redundant information between you and the thing you care about as a function of all the intermediate steps that information has to pass through to reach you.
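To make that concrete, here is a minimal Python sketch (my own toy model, not anything from the post): treat each intermediate step as a binary symmetric channel that flips a bit with probability p. By the data processing inequality, the information you share with the source can only shrink as steps are added.

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def info_after_steps(p, n):
    """I(X; Y_n) for a uniform bit X sent through n BSC(p) transitions.

    Composing n binary symmetric channels gives an effective flip
    probability p_n = (1 - (1 - 2p)^n) / 2, so I(X; Y_n) = 1 - H2(p_n).
    """
    p_n = (1 - (1 - 2 * p) ** n) / 2
    return 1 - h2(p_n)

# Toy numbers: each extra step between you and the source costs information.
for n in range(1, 6):
    print(n, round(info_after_steps(0.1, n), 3))
# 1 -> 0.531, 2 -> 0.32, 3 -> 0.198, 4 -> 0.125, 5 -> 0.079
```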
Looking at the question this way, there are a couple of conditions that need to hold for the (current formulation of the) NAH to work:
1. Redundant information is rare
To see whether this is the case, you want to look at each of the individual interactions and analyse to what degree redundant information is passed on.

I’d guess the question “how brutal is the local optimisation environment?” is a good way to estimate each information redundancy (A, B, C, D in the picture). Another question is “what level of noise do I expect to be introduced at each transition?”, as that tells you to what degree the redundant information is lost in noise. (They point this out as the current hypothesis for usefulness in section 2d of the post.)
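On that second question, a quick way to see “lost in noise” in the same toy model (again my own sketch, with an assumed chain length of four transitions): sweep the per-transition noise level and watch how sharply the surviving information falls off.

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

# Surviving information after 4 noisy transitions, as a function of the
# per-transition flip probability eps (illustrative numbers only).
for eps in (0.01, 0.05, 0.1, 0.2, 0.3):
    p4 = (1 - (1 - 2 * eps) ** 4) / 2
    print(eps, round(1 - h2(p4), 3))
# 0.01 -> 0.763, 0.05 -> 0.338, 0.1 -> 0.125, 0.2 -> 0.012, 0.3 -> 0.0
```

Even modest per-transition noise compounds quickly, which is one way of cashing out whether the redundant signal survives the chain at all.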
2. The way we access said information is similar
If you can determine to what extent the information flow between two agents is similar, you can estimate the probability of natural abstractions occurring in the same way.

For example, vision versus hearing gives us two different information channels, so the abstractions will most likely differ. (The causal proximity of the individual functions changes with respect to the flow of redundant information.)
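A hedged toy illustration of this point (the objects and attributes here are made up for the example): if vision only transmits shape and hearing only transmits pitch, the same set of latent objects gets grouped, i.e. abstracted, differently depending on the channel.

```python
# Illustrative only: each channel transmits one attribute of the latent object.
objects = [
    {"name": "bell",    "shape": "round", "pitch": "high"},
    {"name": "ball",    "shape": "round", "pitch": "low"},
    {"name": "whistle", "shape": "long",  "pitch": "high"},
    {"name": "stick",   "shape": "long",  "pitch": "low"},
]

def abstract_by(channel):
    """Group objects by the only attribute this channel transmits."""
    groups = {}
    for obj in objects:
        groups.setdefault(obj[channel], []).append(obj["name"])
    return groups

print(abstract_by("shape"))  # "vision":  {'round': ['bell', 'ball'], 'long': ['whistle', 'stick']}
print(abstract_by("pitch"))  # "hearing": {'high': ['bell', 'whistle'], 'low': ['ball', 'stick']}
```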
Based on this, I would say the question isn’t really whether it is true for NNs & brains in general; it’s more helpful to ask what information gets abstracted given specific capabilities, such as vision or access to language.

So it’s more about the class of agents that operate under these constraints, which is probably a subset of both NNs & brains in specific information environments.