But what if p represents an indexical uncertainty, which is uncertainty about where (or when) you are in the world?
Didn’t someone pose this exact question here a few months ago?
If you construct your world states A, B, and C using an indexical representation, there is no uncertainty about where, who, or when you are in that representation. Representations without indexicals turn out to have major problems in artificial intelligence (although they are very popular; mainly, I think, because it doesn't seem possible for a single knowledge-representation-and-reasoning system to focus both on getting the representation right and on getting the implementation right).
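To make the contrast concrete, here is a minimal sketch in Python (the class names and fields are mine, purely illustrative, not anyone's actual system): in a non-indexical state the agent's own location is just one more fact about the world, which can therefore be unknown, whereas in an indexical state everything is relative to the agent, so there is no "where am I?" slot left to be uncertain about.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class AllocentricState:
    """Non-indexical: coordinates are given in a world frame."""
    object_positions: Dict[str, Tuple[float, float]]
    # The agent's own location is just another fact about the world,
    # so it can be unknown -- this is where indexical uncertainty lives.
    agent_position: Optional[Tuple[float, float]]

@dataclass
class EgocentricState:
    """Indexical: coordinates are relative to the agent, which sits at the
    origin facing +x. There is no agent_position field to be unsure about."""
    object_offsets: Dict[str, Tuple[float, float]]
```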
The brain uses indexical spatial representations most, although not all, of the time (by which I mean that in many cases it represents coordinates relative to a part of the agent, so, e.g., turning your head to the left will cause everything in some of your brain's representations to shift to the right). This also turns out, at least for me, to be the easiest kind of representation to use for controlling autonomous agents.
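As a rough sketch of that head-turn behaviour (again illustrative: I'm assuming a 2D agent-centred frame with the agent at the origin, facing +x, left = +y, and the function name is made up):

```python
import math
from typing import Dict, Tuple

def turn_head_left(object_offsets: Dict[str, Tuple[float, float]],
                   angle: float) -> Dict[str, Tuple[float, float]]:
    """Offsets are agent-relative. Turning the head left by `angle` radians
    rotates the agent's frame counterclockwise, so every represented object's
    offset rotates by -angle: everything shifts to the right."""
    c, s = math.cos(-angle), math.sin(-angle)
    return {name: (c * x - s * y, s * x + c * y)
            for name, (x, y) in object_offsets.items()}

# An object dead ahead ends up on the agent's right after a 90-degree left turn.
print(turn_head_left({"cup": (1.0, 0.0)}, math.pi / 2))  # ~{'cup': (0.0, -1.0)}
```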
I don't think you can have indexical uncertainty left over if you're constructing complete world-state representations. I mean, even if you're not using an indexical representation, if you give me world state A and say, “but I'm not sure where I am in A”, I'm going to hand A back, give you an “incomplete”, and tell you to give it to me again when it's finished.