What you’re saying is essentially correct, but I didn’t deal with it in the short version. I haven’t worked out how to incorporate this into the math. It may change the results drastically.
In the particular case of the go stones, you have this difficulty only because you’re using an extensional grounding rather than an intensional grounding. I didn’t want to get into the extensional/intensional debate, but my opinion is that extensional grounding is simply wrong. And I don’t think I have enough space to get into the issue of representations of indistinguishable objects.
Can you explain this?
An extensional grounding is one in which things in the representation map 1-1 to things in the world. This doesn’t work for intelligent agents. They often have inaccurate information about the world: they may not know about the existence of objects, may not know that 2 things they think are different are really the same object, or may be unable to distinguish between objects at all (such as go stones). They can also reason about hypothetical entities, which have nothing in the world to map to.
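To make that concrete, here’s a minimal sketch (Python, with names I made up purely for illustration) of what a strict 1-1 extensional grounding would look like as a data structure, and where its assumptions break:

```python
# Hypothetical illustration only: a strict extensional grounding is an
# injective (1-1) map from mental symbols to objects in the world.

world = {"stone_1", "stone_2", "stone_3"}      # objects that actually exist

grounding = {
    "the_stone_I_saw_placed": "stone_1",
    "the_stone_at_G7":        "stone_1",       # really the same stone; the agent
                                               # doesn't know that, so the map
                                               # is not injective
}

# Other failure modes described above:
# - Unknown objects: nothing maps to "stone_2" or "stone_3"; the agent may not
#   even know they exist.
# - Indistinguishable objects: if the stones are identical, there is no fact the
#   agent can use to decide which of "stone_2" / "stone_3" a new symbol denotes.
# - Hypothetical entities: a symbol like "the_stone_I_might_play_next" has no
#   world object at all to map to.
unknown = world - set(grounding.values())
print("objects the agent has no symbol for:", unknown)
```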
In an intensional grounding, you can’t have separate representations for different extensional things (like go stones) that you think about in exactly the same way. Some people claim that the stones are distinguished only contextually, so that you can have a representation of “the stone at G7” or “the leftmost stone”, but those representations all collapse into a single mental object when you put the stones back in the bag.
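And a matching sketch, under the same made-up assumptions, of the intensional picture: a representation is a description rather than a pointer to a particular stone, contextual descriptions like “the stone at G7” exist only while the context holds, and indistinguishable stones share one mental object:

```python
# Hypothetical illustration only: in an intensional grounding, a mental object
# is identified by how the agent thinks about it, not by which physical object
# it points at.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MentalObject:
    description: str                 # the intension: how the agent thinks of it
    context: Optional[str] = None    # e.g. "current board position", or None

# Two physically distinct stones thought about in exactly the same way are a
# single mental object, not two:
generic = MentalObject("a white go stone from the bag")

# Contextual descriptions can pick out individual stones while the context holds:
reps = {
    generic,
    MentalObject("the stone at G7", context="current board position"),
    MentalObject("the leftmost stone", context="current board position"),
}

def put_back_in_bag(reps):
    """Drop representations tied to the board context; only the generic one survives."""
    return {r for r in reps if r.context is None}

print(put_back_in_bag(reps))   # only the single generic stone representation remains
```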