antagonism in humans is responsible for a large portion of the harm humans cause, even if it can on occasion be consequentially 'good' within that cursed context. implementing a mimicry of human antagonism as a fundamental trait of an AI seems like an s-risk that would trigger whenever such an AI becomes powerful.