Could the “correct” bridge hypothesis change if part of the agent is destroyed, or, if it can’t, would it instead require a more complex bridge hypothesis (one that is never verified in practice)?
For an agent that can die or become fully unconscious, a complete and accurate bridge hypothesis should include conditions under which a physical state of the world corresponds to the absence of any introspection or data. I’ll talk about a problem along these lines for AIXI in my next post.
In that respect it’s similar to a physical hypothesis: you might update the hypothesis when you learn something new about death, but you of course can’t update after dying, so any correct physical, mental, or bridging belief about death will have to be prospective.
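As a toy illustration of the ‘no data’ clause (my own sketch, with made-up world-state names, not anything from the formalism): you can picture a bridge hypothesis as a map from physical world-states to an optional sensory datum, where a destroyed-agent state maps to no datum at all.

```python
from typing import Optional

# Toy bridge hypothesis over a made-up miniature world with three
# physical states. The state names are illustrative only.
WorldState = str

def bridge_hypothesis(world: WorldState) -> Optional[int]:
    """Map a physical world-state to the agent's sensory datum, or to None
    for states that correspond to no introspection or data at all."""
    percepts = {
        "camera_sees_0": 0,
        "camera_sees_1": 1,
        "agent_destroyed": None,  # death/unconsciousness: no datum received
    }
    if world not in percepts:
        raise ValueError(f"unmodeled world state: {world}")
    return percepts[world]

# The None branch can only ever be held prospectively: the agent can assign
# it probability, but can never confirm it from the inside.
```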
Is it supposed to be possible to define a single “correct” bridge mapping for an agent other than oneself?
I’m not sure about the ‘single correct’ part, but yes, you can have hypotheses about the link between an experience in another agent and the physical world. In some cases it may be hard to decide whether you’re hypothesizing about a different agent’s phenomenology, or about the phenomenology of a future self.
You can also hypothesize about the link between unconscious computational states and physical states, in yourself or others. For instance, in humans we seem to be able to have beliefs even when we aren’t experiencing having them. So a fully general hypothesis linking human belief to physics wouldn’t be a ‘phenomenological bridge hypothesis’. But it might still be a ‘computational bridge hypothesis’ or a ‘functional bridge hypothesis’.
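A rough way to picture that distinction (purely illustrative types, not part of any actual formalism): the two kinds of bridge hypotheses differ in what the physical states get mapped to.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experience:
    """Something the agent is currently experiencing."""
    content: str

@dataclass
class ComputationalState:
    """A stored belief or other functional state; it may or may not be
    currently experienced."""
    content: str
    currently_experienced: bool

# A phenomenological bridge hypothesis maps physical states only to
# experiences (or to nothing, for states with no experience at all).
def phenomenological_bridge(world: str) -> Optional[Experience]:
    return Experience("seeing 1") if world == "camera_sees_1" else None

# A computational/functional bridge hypothesis can also cover beliefs the
# agent isn't experiencing having, e.g. a belief stored while asleep.
def computational_bridge(world: str) -> Optional[ComputationalState]:
    if world == "asleep_with_stored_belief":
        return ComputationalState("the sky is blue", currently_experienced=False)
    if world == "camera_sees_1":
        return ComputationalState("seeing 1", currently_experienced=True)
    return None
```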
Is the agent’s location within a world part of the bridge hypothesis, or is it a given?
I’ll talk about this a few posts down the line. Indexical knowledge (including anthropics) doesn’t seem to be a solved problem yet.