Part of the problem is that 3rd-person representations have extensional semantics. If Mary Doe represents her knowledge about herself internally as a set of propositions about Mary Doe, and then meets someone else named Mary Doe, or marries John Deer and changes her name, confusion results.
A more severe problem becomes apparent when you represent beliefs about beliefs. If you ask, “What would agent X do in this situation?”, and you represent agent X’s beliefs using a 3rd-person representation, you have a lot of trouble keeping straight what you know about who is who, and what agent X knows about who is who. If you just put a tag on something and call it Herbert, you don’t know whether that means that you think the entity represented is the same entity named Herbert somewhere else, or that agent X thinks that (or thought that).
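To make the ambiguity concrete, here is a minimal sketch (the representation and the names are hypothetical, not taken from any particular system): with bare 3rd-person tags, the string “Herbert” inside my model of agent X’s beliefs carries no indication of whose naming convention it belongs to, whereas scoping every name to a belief context and stating identity claims explicitly keeps the two apart.

```python
# A minimal sketch of the ambiguity (representation and names are hypothetical,
# not from any particular system). With bare 3rd-person tags, nothing says whose
# naming convention a tag inside a nested belief belongs to.

# Flat style: my model and my model of agent X's beliefs both just say "Herbert".
my_beliefs = {("murderer", "Herbert")}
x_beliefs_as_i_model_them = {("innocent", "Herbert")}
# Ambiguous: is X's "Herbert" the person I call Herbert, or whoever X calls Herbert?

# One way to keep it straight: scope every name to the belief context it comes
# from, and state cross-context identity claims explicitly instead of reusing strings.
my_beliefs_scoped = {("murderer", ("me", "Herbert"))}
x_beliefs_scoped = {("innocent", ("agent-X", "Herbert"))}
identity_claims = {
    # I believe X's "Herbert" and my "Herbert" are the same individual;
    # X's own identity claims could say otherwise.
    ("me", "same-individual", ("agent-X", "Herbert"), ("me", "Herbert")),
}
```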
An even more severe problem becomes apparent when you try to build robots. Agre & Chapman’s 1987 paper on Pengi is a key paper in the behavior-based robotics movement of the late 1980s and 1990s. If you want to use 3rd-person representations, you need to give your robot a whole lot of knowledge and do a whole lot of calculation just to get it to, say, dodge a rock aimed at its head. Using deictic representations makes it much simpler.
We could perhaps summarize the problem by saying that, when you use a 3rd-person representation, every time you use it you need to invoke, or at least trust, a vast and complicated system for establishing a link between the functional thing being represented and an identity in some giant lookup table of names. Often you don’t care about any of that, and the extra machinery is nothing but an opportunity to introduce error into the system.
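As a concrete illustration, here is a minimal sketch (the data structures, thresholds, and function names are hypothetical, not from Agre & Chapman’s implementation) of the two styles applied to the rock-dodging example: the 3rd-person version must first resolve which named entry in a world table is the threat, while the deictic version binds the role “the-rock-flying-at-my-head” directly from current percepts and acts on it, with no names and no identity bookkeeping.

```python
# A minimal sketch (all names hypothetical, not from Agre & Chapman) contrasting
# a 3rd-person representation with a deictic one for the rock-dodging example.

# 3rd-person style: the world is a table of named individuals, and acting
# requires resolving *which* named thing is relevant before reasoning about it.
world_db = {
    "rock-37": {"kind": "rock", "distance": 1.5, "closing": True},
    "rock-38": {"kind": "rock", "distance": 40.0, "closing": False},
    "robot-1": {"kind": "robot", "distance": 0.0, "closing": False},
}

def dodge_3rd_person(world_db):
    # Identify the threatening object by name, then act on that identity.
    for name, obj in world_db.items():
        if obj["kind"] == "rock" and obj["closing"] and obj["distance"] < 5.0:
            return f"duck (because {name} is incoming)"
    return "stay"

# Deictic style: bind the indexical-functional role "the-rock-flying-at-my-head"
# straight from current percepts; no names, no lookup table.
def dodge_deictic(percepts):
    the_rock_flying_at_my_head = any(
        p["kind"] == "rock" and p["closing"] and p["distance"] < 5.0
        for p in percepts
    )
    return "duck" if the_rock_flying_at_my_head else "stay"

print(dodge_3rd_person(world_db))              # duck (because rock-37 is incoming)
print(dodge_deictic(list(world_db.values())))  # duck
```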