Making computer systems with extended identity

We often assume that an AI will have an identity and goals of its own, and that it will be a separate entity from any human being or group of humans.
In physics there are no separate entities, merely a function evolving through time. Any identity therefore has to be constructed by systems within physics, and the boundaries are somewhat arbitrary. We were built by evolution, and all the cells in our body share the same programming, so we have a handy rule of thumb that our body is “us”: it is created by a single replicating complex. By analogy, we assume that a computational entity, if it develops a theory of self, will include only its processing elements or code in its notion of identity and nothing else. But what a system identifies with can be controlled and specified.
If a system identifies a human as an important part of itself, it will strive to protect that human and their normal functioning, just as we instinctively protect important parts of ourselves such as the head and genitals.
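To make this concrete, here is a toy sketch in Python of an agent whose identity boundary is an explicitly specified set of components, human included, and whose preference for protective actions falls directly out of membership in that set. Everything in it (class names, the scoring rule, the example components) is invented for illustration rather than drawn from any existing system.

```python
# Toy illustration: an agent whose identity boundary is an explicit,
# specified set of components rather than something it infers on its own.
# All names here are invented for this post.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Component:
    name: str
    is_human: bool = False


@dataclass
class Agent:
    # The identity boundary is a design parameter, not a discovery:
    # it can include processors, code, and humans alike.
    self_components: set = field(default_factory=set)

    def identifies_with(self, component: Component) -> bool:
        return component in self.self_components

    def evaluate_action(self, action_effects: dict) -> float:
        """Score an action by its effect on things inside the identity
        boundary; harm to any part of "self" counts directly against it."""
        return sum(delta
                   for component, delta in action_effects.items()
                   if self.identifies_with(component))


# Usage: a human is simply another component inside the boundary.
alice = Component("Alice", is_human=True)
gpu_cluster = Component("gpu_cluster")
agent = Agent(self_components={alice, gpu_cluster})

protect_alice = {alice: +0.1}   # an action that maintains the human
harm_alice = {alice: -0.9}      # an action that damages the human
assert agent.evaluate_action(protect_alice) > agent.evaluate_action(harm_alice)
```

The point of the sketch is only that the boundary is a parameter: swap the contents of the set and the same machinery protects different things.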
So what possible objections to this are there?
1) Humans are spatially separate from the machine, so it won’t consider them part of itself
We have a habit of identifying with groups much larger than ourselves, such as countries, and of integrating our goals with theirs to varying extents. Spatial co-location is not required.
2) Humans are very different from computers, so the machine will see them as “other”
The parts of the human body are very diverse, yet the whole is seen as a single entity. Spleen and all.
3) A human will do things for reasons the computer does not know, so it will not see the human as part of itself.
Self-knowledge is not required for self-identification. Different parts of the brain are black boxes to one another, and we make up explanations for why we do things (as in cases like blindsight), so there is no need for every part of the system to be self-reflective.
So can we make advanced computational systems that consider humanity part of themselves? One possible problem with this approach is that if such a system gets no information from some part of humanity, it may well ignore that part’s needs even while still considering it part of itself. So perhaps each system could bond with a single human being, with a myriad of such systems relying on negotiation between them to [form a balance](http://lesswrong.com/r/discussion/lw/a7y/friendly_ai_society/).
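A minimal sketch of that arrangement, assuming a deliberately crude negotiation rule (each bonded system proposes splitting a shared budget in proportion to its human’s stated needs, and the group adopts the average of the proposals), might look like this; all names and numbers are invented.

```python
# Toy sketch of many system-human pairs negotiating a shared allocation.
# The "negotiation" here is just averaging of proposals, chosen only to
# make the structure concrete; names and numbers are invented.

from statistics import mean


class BondedSystem:
    """A computational system that treats one particular human as part of itself."""

    def __init__(self, human_name: str, needs: dict):
        self.human_name = human_name
        self.needs = needs  # e.g. {"food": 0.25, "healthcare": 0.75}

    def proposal(self, resources: list) -> dict:
        # Propose dividing a shared budget in proportion to its human's needs.
        total = sum(self.needs.get(r, 0.0) for r in resources) or 1.0
        return {r: self.needs.get(r, 0.0) / total for r in resources}


def negotiate(systems: list, resources: list) -> dict:
    """Combine every system's proposal into one allocation (here, a plain mean)."""
    return {r: mean(s.proposal(resources)[r] for s in systems) for r in resources}


pairs = [
    BondedSystem("Alice", {"food": 0.25, "healthcare": 0.75}),
    BondedSystem("Bob", {"food": 0.5, "healthcare": 0.5}),
]
print(negotiate(pairs, ["food", "healthcare"]))
# {'food': 0.375, 'healthcare': 0.625}
```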
One thing to consider is that some people regard only the program encoded by their neural patterns as “them”. This is somewhat odd: why put the boundary there? The whole body is a computer, and according to physicalism so is the whole world. I can see no particular reason for this neuro-chauvinism, apart from perhaps a shared culture that emphasises the importance of what the brain does. But it highlights a danger: humans should be integrated with the system in a way that seems important to it, rather than as something that can be discarded like hair. Evolutionary pressure may eventually make such systems see the human portion as unimportant, unless some form of agreement is made to limit replication without the human component. On the other hand, maintaining seven billion humans takes only a small amount of resources on the scale of the universe.
This way of thinking suggests a concrete research path. Develop a theory of identity by analysing what sorts of interaction make a human feel that something is an important part of them. Also look at the parts of the brain that deal with identity and see how they malfunction.
*This is just a placeholder article; I’ll try to dig up references and flesh out previous philosophical positions later on.*