Yes, you are correct. And if image recognition software started doing some kind of unethical recognition (I can’t be bothered to find it, but something happened where image recognition software started recognising gorillas as humans of African ethnicity, or vice versa), then I would still say that it doesn’t really give us much new information about unfriendliness in superintelligent AGIs.
And if image recognition software started doing some kind of unethical recognition (I can’t be bothered to find it, but something happened where image recognition software started recognising gorillas as humans of African ethnicity, or vice versa)
The fact that this kind of mistake is considered more “unethical” than other types of mistakes tells us more about the quirks of the early 21st-century Americans doing the considering than about AI safety.