Interesting!
I thought of a couple of things that I was wondering if you have considered.
It seems to me like when examining mutual information between two objects, there might be a lot of mutual information that an agent cannot use. Like there is a lot of mutual information between my present self and me in 10 minutes, but most of that is in information about myself that I am not aware of, that I cannot use for decision making.
Also, if you examine an object that is fairly constant, would you not get high mutual information for the object at different times, even though it is not very agentic? Can you differentiate autonomy and a stable object?
Thank you for your reply!
“The self in 10 minutes” is a good example for revealing the difference between ACI and the traditional rational intelligence model. In the rational model, the input information is sent to an atom-like agent, where decisions are made based on the input.
But ACI holds that this is not how real-world agents work. An agent is a complex system made up of many different parts and levels: the heart receives mechanical, chemical, and electrical information from its past self and continues beating, though at a rate that varies with outside factors; a cell keeps running its metabolic and functional processes, which are determined by its past state and affected by its neighbors and by chemicals in the blood; finally, the brain outputs neural signals based on its past state and new sensory information. In other words, the brain has mutual information with its past self, the body, and the outer world, but that is only a small part of the mutual information between my present self and me in 10 minutes.
So the brain uses only a tiny part of the information the agent as a whole uses. Furthermore, when we talk about awareness, I am aware of only a tiny part of the information processing in my brain.
An agent is not like an atom but like an onion with many layers. Decisions are made in parallel across these layers, and we are aware of only a small fraction of them. It is not even possible to draw a sharp boundary between awareness and non-awareness.
On the second question: a stable object may have high mutual information with itself at different times, but it may also have high mutual information with other agents. For example, a rock may be stable in size and shape, but its position and movement may depend heavily on outside natural forces and on human behavior. The definition of agency is more complex than this, though; I will try to discuss it in future posts.
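To make the rock example concrete, here is a minimal sketch (my own illustration, not from the original posts; the function name and the toy data are mine) of why a perfectly stable object scores high mutual information between its state now and its state later, with no agency involved, while a purely random object scores near zero:

```python
# Hedged sketch: empirical mutual information (in bits) between an object's
# state now and its state one time step later, for two toy "objects".
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete sequences, in bits."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        p_indep = (px[x] / n) * (py[y] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

random.seed(0)

# A "rock": its state never changes between observations, so the state now
# fully determines the state later -- MI equals the rock's own entropy (~2 bits
# for 4 equally likely states).
rock_now = [random.randint(0, 3) for _ in range(5000)]
rock_later = rock_now[:]  # unchanged over time

# A noisy object: its later state is independent of its current state,
# so the empirical MI is close to zero.
noise_now = [random.randint(0, 3) for _ in range(5000)]
noise_later = [random.randint(0, 3) for _ in range(5000)]

print(mutual_information(rock_now, rock_later))    # high MI, yet no agency
print(mutual_information(noise_now, noise_later))  # near-zero MI
```

The point of the sketch is exactly the worry raised above: high self-mutual-information alone cannot distinguish a stable rock from an autonomous agent, which is why the definition of agency needs more than this one quantity.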