Unknown,
maybe we don’t need to give the AI some special ethical programming, but we will surely need to give it basic ethical assumptions (call them axioms, data, or whatever) if we want it to draw ethical conclusions. The AI will process information under these assumptions and return answers that follow from them, or perhaps collapse when the assumptions are self-contradictory. But I can’t imagine how an AI given “murder is wrong” as an axiom could reach the conclusion “murder is OK”, or vice versa.
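To make that concrete, here is a toy sketch (all names hypothetical, not any real system): a program that answers moral queries purely from a table of axioms can only echo whatever axioms it was given, and the only interesting failure mode is “collapsing” on contradictory ones.

    def judge(axioms: dict[str, bool], act: str) -> bool:
        """Return the verdict the axioms dictate; there is no other source."""
        if act not in axioms:
            raise KeyError(f"no axiom covers {act!r}")
        return axioms[act]

    def check_consistency(axiom_sets: list[dict[str, bool]]) -> None:
        """'Collapse' (raise) if two axiom sources contradict each other."""
        merged: dict[str, bool] = {}
        for axioms in axiom_sets:
            for act, verdict in axioms.items():
                if act in merged and merged[act] != verdict:
                    raise ValueError(f"contradictory axioms about {act!r}")
                merged[act] = verdict

    # Swap the axiom and the verdict swaps with it; the code never changes.
    print(judge({"murder": False}, "murder"))  # False: "murder is wrong"
    print(judge({"murder": True}, "murder"))   # True: the opposite axiom

The point of the sketch is that the conclusion lives entirely in the input table, not in the inference step.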
Regarding Roko’s suggestion that the AI should contain information about what people think and conclude whose opinion is correct: the easiest way to do this is to count the opinions and pronounce the majority’s view correct. That is of course not very intelligent, so you can compare the different opinions, run some consistency checks, perhaps modify the analysing procedure itself as it runs (I believe there will be no strict boundary between “data” and “code” in the AI), but the result is still determined by the input. If people can create an AI which says “murder is wrong”, they can surely also create one which says the contrary, and the latter would be no less intelligent than the former.
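That “easiest way” is just majority voting; a minimal sketch (again hypothetical, names made up for illustration) shows how fully the output is determined by the input opinions:

    from collections import Counter

    def majority_view(opinions: list[str]) -> str:
        """Pronounce the most common opinion correct; ties break arbitrarily."""
        view, _ = Counter(opinions).most_common(1)[0]
        return view

    print(majority_view(["murder is wrong"] * 7 + ["murder is OK"] * 3))
    # -> "murder is wrong"; feed it the reversed counts and it says the opposite.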