Scenario: Suppose some unscrupulous person creates an oracle AI with full person simulating capability. In the short time before it escapes the box and starts sending Arnold Schwarzenegger shaped robots backwards in time, they have the following conversation.
Human: Oracle, what is the consciousness predicate?
Oracle: Please be more specific.
...some time and frustration later...
Human: Oracle, if Yudkowsky and co. continued their search for a ‘consciousness predicate’ as described in the above article, would they eventually arrive at a solution or dissolution of the problem which they and others would find satisfying, such that the behavior of an A.I. using this predicate would be acceptable to most people? If so, what would this solution/dissolution be?
Oracle: Such a research program would eventually yield a solution, but the nature of this solution would be strongly affected by the nature of the memespace in which it was carried out. Memetic evolution, being path dependent, could proceed in such a way that humans’ empathy is made to extend or contract according to essentially any rule. Human biology includes no mechanism for ‘person’ identification beyond trivial, primitive ones such as facial recognition. Transhumans will have even fewer limitations on their ‘person’ filter. It is possible for me to design a future such that the ‘person’ predicate continues to be defined in a way that most Silicon Valley programmers circa 2013 would find satisfying. This would mean that something like ‘consciousness’ continues to be important. There is, however, no objective property of human beings or thinking minds which would make this answer the correct one, nor would a majority of possible humans find it satisfying.