An extensional grounding is one in which things in the representation map one-to-one to things in the world. This doesn’t work for intelligent agents. They often have inaccurate information about the world: they may not know that an object exists, may not realize that two things they think are different are really the same object (or that two things they think are the same are really different), or may be unable to distinguish between objects at all (such as go stones). They can also reason about hypothetical entities, which have no extension in the world.
In an intensional grounding, you can’t have separate representations for different extensional things (like go stones) that you think about in exactly the same way. Some people claim that such things are distinguished only contextually: you can have a representation of “the stone at G7” or “the leftmost stone”, but these collapse into a single mental object when you put the stones back in the bag.
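The collapse behavior can be sketched as a data structure. This is a minimal illustration, not any established implementation: the names (`Description`, `IntensionalStore`, `refer`) are hypothetical. The idea is that a mental object is keyed by its description (how the agent thinks about it), not by world identity, so two stones described identically become one mental object.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Description:
    """An intensional description: the set of properties the agent ascribes."""
    properties: frozenset

class IntensionalStore:
    """Hypothetical store mapping descriptions to mental objects."""

    def __init__(self):
        self._objects = {}  # Description -> mental object id

    def refer(self, *properties):
        """Return the mental object for a description, creating one if new."""
        desc = Description(frozenset(properties))
        return self._objects.setdefault(desc, len(self._objects))

store = IntensionalStore()

# Contextually distinct stones get distinct mental objects:
g7 = store.refer("go stone", "at G7")
leftmost = store.refer("go stone", "leftmost")
assert g7 != leftmost

# Back in the bag, both are just "a go stone" -- identical descriptions
# collapse into a single mental object:
a = store.refer("go stone")
b = store.refer("go stone")
assert a == b
```

Note that the stones at G7 and leftmost stay distinct only as long as the agent keeps describing them differently; nothing in the store tracks which world object is which, matching the contextual-distinction claim above.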