This is from the Friendly AI document:

"Unity of will occurs when deixis is eliminated; that is, when speaker-dependent variables are eliminated from cognition. If a human simultaneously suppresses her adversarial attitude, and also suppresses her expectations that the AI will make observer-biased decisions, the result is unity of will. Thinking in the third person is natural to AIs and very hard for humans; thus, the task for a Friendship programmer is to suppress her belief that the AI will think about verself in the first person (and, to a lesser extent, think about herself in the third person)."
Actually, thinking in the third person is unnatural to humans and computers. It’s just that writing logic programs in the third person is natural to programmers. Many difficult representational problems, however, become much simpler when you use deictic representations. There’s an overview of this literature in the book Deixis in Narrative: A cognitive science perspective (Duchan et al. 1995). For a shorter introduction, see A logic of arbitrary and indefinite objects.
Part of the problem is that 3rd-person representations have extensional semantics. If Mary Doe represents her knowledge about herself internally as a set of propositions about Mary Doe, and then meets someone else named Mary Doe, or marries John Deer and changes her name, confusion results.
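To make the failure concrete, here is a minimal Python sketch (the names and facts are invented for illustration) of how a name-keyed, extensional store of self-knowledge goes wrong, and how an indexical handle sidesteps it:

    # Hypothetical illustration: Mary's self-knowledge keyed by her name.
    beliefs = {
        "Mary Doe": {"field": "astronomy", "height_cm": 170},
    }

    # A second person named Mary Doe turns up; her facts silently merge
    # into the same bucket, because the key is just a shared string.
    beliefs.setdefault("Mary Doe", {})["height_cm"] = 155  # whose height now?

    # Mary marries John Deer and takes his name; lookups under the new
    # name find nothing, and the old entry is orphaned.
    print(beliefs.get("Mary Deer"))  # -> None

    # Deictic alternative: an indexical token for "me" that no renaming touches.
    SELF = object()                           # a stable handle, not a name
    indexical_beliefs = {SELF: {"field": "astronomy", "height_cm": 170}}
    current_name = {SELF: "Mary Doe"}
    current_name[SELF] = "Mary Deer"          # the name changes; the facts stay put

The point of the contrast is that in the second version the agent's name is just one more mutable fact about the referent, rather than the key that holds all the other facts together.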
A more severe problem becomes apparent when you represent beliefs about beliefs. If you ask, “What would agent X do in this situation?”, and you represent agent X’s beliefs using a 3rd-person representation, you have a lot of trouble keeping straight what you know about who is who, and what agent X knows about who is who. If you just put a tag on something and call it Herbert, you don’t know whether that means that you think the entity represented is the same entity named Herbert somewhere else, or that agent X thinks that (or thought that).
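As a rough sketch (the agent and the name “Herbert” are made up here), the difference between sharing one global tag and giving each belief context its own referents might look like this:

    from itertools import count

    # 3rd-person style: the same global tag appears in my beliefs and in my
    # model of agent X's beliefs, so the representation cannot say whether
    # we are talking about the same Herbert.
    my_beliefs = {("spy", "Herbert"): False}
    model_of_x = {("spy", "Herbert"): True}

    # Scoped alternative: each context mints its own discourse referent, and
    # cross-context identity becomes an explicit, revisable claim.
    _fresh = count()

    def referent(label):
        """A fresh, context-local stand-in for 'whoever this label picks out'."""
        return (next(_fresh), label)

    herbert_known_to_me = referent("Herbert")   # the Herbert I have in mind
    herbert_known_to_x = referent("Herbert")    # whoever X calls Herbert

    my_beliefs = {("spy", herbert_known_to_me): False}
    model_of_x = {("spy", herbert_known_to_x): True}

    # If I come to think they are the same individual, I say so explicitly,
    # rather than letting a shared string decide it for me.
    coreference = {(herbert_known_to_me, herbert_known_to_x)}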
An even more severe problem becomes apparent when you try to build robots. Agre & Chapman’s Pengi paper was a key contribution to the behavior-based robotics movement of the 1990s. If you want to use 3rd-person representations, you need to give your robot a whole lot of knowledge and do a whole lot of calculation just to get it to, say, dodge a rock aimed at its head. Using deictic representations makes the job much simpler.
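Here is a toy contrast, loosely in the spirit of Pengi-style indexical-functional entities (the data, thresholds, and function names are all invented): a global world model keyed by object identities versus a single deictic register bound directly by perception.

    # 3rd-person style: track every object by identity, then search the model
    # to work out which one is currently relevant.
    world = {
        "rock-17": {"pos": (4.0, 1.0), "vel": (-1.0, 0.0)},
        "rock-18": {"pos": (9.0, 7.0), "vel": (0.0, -0.5)},
        "robot-1": {"pos": (0.0, 1.0), "vel": (0.0, 0.0)},
    }

    def incoming_object(world, me="robot-1"):
        """Scan all tracked objects and return the id of one heading at 'me'."""
        mx, my = world[me]["pos"]
        for oid, obj in world.items():
            if oid == me:
                continue
            (x, y), (vx, _) = obj["pos"], obj["vel"]
            closing = (x - mx) * vx < 0          # moving toward my x position
            same_line = abs(y - my) < 0.5        # roughly at head height
            if closing and same_line:
                return oid
        return None

    # Deictic style: perception binds one register, "the rock heading at me";
    # the policy acts on whatever fills that role, not on which rock it is.
    def react(the_rock_heading_at_me):
        return "dodge" if the_rock_heading_at_me else "carry-on"

    print(react(incoming_object(world)))  # -> "dodge"

The deictic routine never needs the identity bookkeeping at all; whatever currently fills the role “the rock heading at me” is what gets dodged.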
We could perhaps summarize the problem this way: every time you use a 3rd-person representation, you have to invoke, or at least trust, a vast and complicated system for establishing a link between the functional thing being represented and an identity in some giant lookup table of names. Often you don’t care about any of that, and the extra machinery is nothing but an opportunity to introduce error into the system.