Having just stumbled across LW yesterday, I’ve been gorging myself on rationality and discovering that I have a lot of cruft in my thought process, but I have to disagree with you on this.
“Meaning” and “mysterious” don’t apply to reality; they only apply to maps of the territory that is reality. Self-awareness is what allows a pattern/agent/model to preserve itself in the face of entropy and competitors, making it “meaningful” to an observer of the agent/model who is trying to understand how it will operate. Being aware of that self-awareness (i.e. mapping the map, or recursively refining the super-model to understand itself better) can also affect our ability to preserve ourselves, making it “meaningful” to the agent/model itself. Being aware of others’ self-awareness (i.e. mapping a different agent/map and realizing that it will act to preserve itself) is probably one of the most critical developments in the evolution of humans.
What “I am” is a super-agent, and that super-agent is a stack of component agents.
At each layer, a shared belief among a system of agents (that each agent is working towards the common utility of all of them) results in a super-agent with more complex goals that has no belief that it is composed of distinct sub-agents. Like the 7-layer network model or the transistor-gate-chip-computer stack, each layer is just an emergent property of its components. But each layer has meaning because it gives us a predictive model of the system’s behavior, one we don’t get by just looking at a more complex version of the layer below it.
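To make the layering concrete, here is a minimal sketch in Python. Everything in it (the names SubAgent, SuperAgent, act, and the toy utilities) is my own illustration, not anything from the post I’m replying to: the point is only that a composite built from simple sub-goals presents, one layer up, as a single agent with a more complex preference, and nothing in its interface reveals the composition.

```python
# Minimal sketch: a super-agent built from sub-agents, exposing only its
# own (more complex) goal, not its parts. All names are illustrative.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SubAgent:
    name: str
    # Each sub-agent scores a proposed action by its own simple utility.
    utility: Callable[[str], float]


class SuperAgent:
    """One layer up: behaves as a single agent with a composite goal."""

    def __init__(self, parts: List[SubAgent]):
        self._parts = parts  # hidden composition; the layer below

    def act(self, options: List[str]) -> str:
        # The emergent goal: pick the option with the best summed utility.
        # From outside, this looks like one agent with one preference.
        return max(options, key=lambda o: sum(p.utility(o) for p in self._parts))


# Two simple sub-goals combine into behavior neither has alone.
hunger = SubAgent("hunger", lambda o: 1.0 if o == "eat" else 0.0)
safety = SubAgent("safety", lambda o: 1.0 if o != "fight" else -1.0)
me = SuperAgent([hunger, safety])
print(me.act(["eat", "fight", "sleep"]))  # -> "eat"
```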
My super-agent has a super-model of reality, similarly composed. Some parts of that super-model are tagged, weakly or strongly, with a “this is me” attribute. The collection of cells that makes up a fatty lump on my head is weakly marked with that attribute. The parts of reality where my super-agent/super-model exists are very strongly tagged.
My super-agent survives because it has marked the area of its model corresponding to where it exists, and it has a goal of continually re-marking that area. If it has an accurate model but marks a different region of reality (or marks the correct region but doesn’t protect it), it will eventually be destroyed by entropy. If it has an inaccurate model, it won’t be able to interact with reality effectively enough to protect the region where it resides. And if it has an accurate model but marks only where it originally was, it won’t be able to adapt to environmental changes and challenges while still maintaining itself.
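As a toy illustration of that maintenance loop (the grid, tag strengths, and decay rate below are all invented for the sake of the example): the model carries a self-tag of varying strength over regions of the world, entropy erodes the tags, and the agent only defends what it keeps strongly re-marked.

```python
# Toy sketch of the tagged self-region: cells carry a "this is me" tag of
# varying strength, and a loop keeps re-marking and defending that region.

WORLD = [0.0] * 10          # strength of the "this is me" tag per cell
SELF = {3, 4, 5}            # cells where the agent currently resides
WORLD[7] = 0.1              # e.g. the fatty lump: weakly tagged

def re_mark(world, self_cells, decay=0.2):
    """One maintenance step: tags fade (entropy) unless actively renewed."""
    for i in range(len(world)):
        world[i] = max(0.0, world[i] - decay)   # everything decays
    for i in self_cells:
        world[i] = 1.0                          # strongly re-mark the self-region

def step(world, self_cells):
    re_mark(world, self_cells)
    # Protect only what is strongly tagged; weak or stale tags get no defense.
    return {i for i, s in enumerate(world) if s > 0.5}

protected = step(WORLD, SELF)
print(protected)  # -> {3, 4, 5}: an accurately marked, re-marked region survives
```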