We win, because anything less would not be maximally wonderful.
What will you do if, having programmed an AI to be unable to act against its extrapolation of human preferences, it begins herding people into tiny pens in preparation for their slaughter?
Some of you seem to be making some rather large assumptions about what such an AI would actually do.