This is a developmental problem: how to prevent an AI from making this specific mistake, which seems to stand in the way. The ethical injunction concerns what kinds of thoughts must be avoided, not just surprisingly bad consequences of actions on the external environment. If an AI were developed to focus disproportionately on understanding its environment rather than on understanding its own mind, this is the kind of disaster to expect. At the same time, the AI needs to understand the environment well enough to understand the injunction before it can apply the injunction to its own mind. This calls for a careful balance, and perhaps for content-specific mechanisms built in by the programmers.
People are uniquely situated to think about this problem: our limited capability makes us unable to make the mistake, and we are not part of any such mistake. Any construct of limited cognitive capability that an AI could build to solve this problem without making the mistake runs the risk of itself being an embodiment of the mistake. But if a nonperson predicate is a true part of the AI, both a form of thought and an object, the AI has a way to proceed.
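To make that structural point concrete, here is a minimal sketch in Python of what "both a form of thought and an object" might mean: a conservative predicate that every candidate model must pass before the reasoner adopts it (a constraint on thought), while the predicate itself remains inspectable data (an object). All names here (NonpersonPredicate, Reasoner, detail_level) are hypothetical illustrations, not an established design, and the threshold criterion is a stand-in for exactly the open problem the text describes.

```python
from dataclasses import dataclass

@dataclass
class Model:
    """A candidate internal model of some part of the environment."""
    description: str
    detail_level: int  # hypothetical proxy for how rich the simulation is

class NonpersonPredicate:
    """Conservative gate: returns True only when a model is definitely
    not detailed enough to instantiate a person. 'Unknown' counts as a
    refusal, so the predicate errs toward forbidding thoughts."""

    def __init__(self, max_safe_detail: int):
        self.max_safe_detail = max_safe_detail

    def definitely_not_a_person(self, model: Model) -> bool:
        # Placeholder criterion: below some detail threshold we assume
        # no morally relevant simulation can arise. Finding a real
        # criterion is the hard part, not shown here.
        return model.detail_level <= self.max_safe_detail

class Reasoner:
    """The predicate is a true part of the reasoner: every model must
    pass through it before being adopted (form of thought), and the
    predicate itself is ordinary inspectable state (object)."""

    def __init__(self, predicate: NonpersonPredicate):
        self.predicate = predicate
        self.models: list[Model] = []

    def adopt(self, model: Model) -> bool:
        if self.predicate.definitely_not_a_person(model):
            self.models.append(model)
            return True
        return False  # the thought is simply not taken

# Usage: a coarse model passes; a rich mind-like model is refused.
r = Reasoner(NonpersonPredicate(max_safe_detail=3))
assert r.adopt(Model("coarse weather statistics", detail_level=2))
assert not r.adopt(Model("neuron-level model of a rival", detail_level=9))
```

The design choice worth noticing is the asymmetry: the predicate never tries to certify that something *is* a person, only that something definitely is not, which is what makes it a gate the AI can apply to its own thinking rather than a judgment it must already be dangerously capable to make.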