Thank you for your reply!
“The self in 10 minutes” is a good example for revealing the difference between ACI and the traditional rational-agent model. In the rational model, input information is sent to an atom-like agent, where decisions are made based on that input.
But ACI holds that this is not how real-world agents work. An agent is a complex system made up of many different parts and levels: the heart receives mechanical, chemical, and electrical information from its past self and continues beating, though at different rates due to outside influences; a cell keeps running its metabolic and functional processes, which are determined by its past state and affected by its neighbors and by chemicals in the blood; finally, the brain outputs neural signals based on its past state and new sensory information. In other words, the brain has mutual information with its past self, the body, and the outside world, but that is only a small part of the mutual information between my present self and me in 10 minutes.
In other words, the brain uses only a tiny part of the information the agent as a whole uses. Furthermore, when we talk about awareness, I am aware of only a tiny part of the information processing in my brain.
An agent is not like an atom but like an onion with many layers. Decisions are made in parallel across these layers, and we are aware of only a few of them. It is not even possible to draw a solid boundary between awareness and non-awareness.
On the second question: a stable object may have high mutual information with itself at different times, but it may also have high mutual information with other agents. For example, a rock may be stable in size and shape, but its position and movement may depend heavily on outside natural forces and human behavior. However, the definition of agency is more complex than this; I will try to discuss it in future posts.
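To make the intuition concrete, here is a minimal sketch (the helper function and the toy data are my own illustration, not part of ACI): the rock's shape carries mutual information with its own past, while its position carries mutual information only with the outside force.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Eight hypothetical rocks observed at two times.
shape_t0 = ["round", "flat", "round", "flat", "round", "flat", "round", "flat"]
shape_t1 = list(shape_t0)                      # shape persists through time
pos_t0   = ["still"] * 8                       # every rock starts at rest
force    = ["push", "none", "push", "none", "none", "push", "none", "push"]
pos_t1   = ["moved" if f == "push" else "still" for f in force]

print(mutual_information(shape_t0, shape_t1))  # 1.0 bit: the past self predicts shape
print(mutual_information(pos_t0, pos_t1))      # 0.0 bits: the past self says nothing about position
print(mutual_information(force, pos_t1))       # 1.0 bit: the outside force predicts position
```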
Thank you for introducing Richard Jeffrey's theory! I just read some articles about his system, and I think it is great. His utility theory, built on propositions, is just what I want to describe. However, his theory still starts from given preferences without showing how we can obtain them (although these preferences must satisfy certain conditions), and my article argues that these preferences cannot be estimated using the Monte Carlo method.
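For readers who have not seen it, the heart of Jeffrey's system (as I understand it from *The Logic of Decision*) is the desirability axiom: for incompatible propositions $X$ and $Y$ with $P(X \lor Y) > 0$,

$$\operatorname{des}(X \lor Y) = \frac{\operatorname{des}(X)\,P(X) + \operatorname{des}(Y)\,P(Y)}{P(X) + P(Y)}.$$

Note that this only constrains how the desirabilities of propositions combine; the desirabilities themselves are taken as given, which is exactly the gap my article points at.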
Actually, ACI is an approach that can assign a utility (preference) to every proposition, by estimating its probability of “being the same as the examples of right things”. In other words, as long as we have examples of doing the right things, we can estimate the utility of any proposition using algorithmic information theory. And that is actually how organisms learn from their evolutionary history.
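Here is a minimal sketch of that idea, using the normalized compression distance with zlib as a crude stand-in for Kolmogorov complexity (the function names, the scoring rule, and the example strings are my own illustration, not a finished ACI implementation):

```python
import zlib

def C(s: bytes) -> int:
    """Compressed length as a crude stand-in for Kolmogorov complexity K(s)."""
    return len(zlib.compress(s, 9))

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: near 0 for similar strings, near 1 for unrelated ones."""
    return (C(a + b) - min(C(a), C(b))) / max(C(a), C(b))

def utility(proposition: str, right_examples: list[str]) -> float:
    """Score a proposition by its algorithmic similarity to examples of right things.

    Returns 1 minus the mean NCD: a toy proxy for the probability of
    "being the same as the examples of right things".
    """
    p = proposition.encode()
    return 1.0 - sum(ncd(p, ex.encode()) for ex in right_examples) / len(right_examples)

examples = ["share food with the hungry", "help an injured stranger"]
print(utility("share water with the thirsty", examples))  # expected to score higher
print(utility("kick over the sandcastle", examples))       # expected to score lower
```

Real compressors only roughly approximate algorithmic information, and short strings are especially noisy, so this is meant only to show the shape of the computation, not to produce reliable scores.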
I tentatively call this approach Algorithmic Common Intelligence (ACI) because its mechanism is similar to the common-law system. I am still refining it by reading more theories and writing programs based on it, which is why my older articles about ACI may contain many errors.
Again, thank you for your comment! I hope you can give me more advice.