I’ve had an online conversation in which someone argued that an AI pursuing goals other than those its programmers intended would be evidence of a faulty AI—and hence that such an AI wouldn’t be a dangerous one. This post was a direct response to that.
Ah, I see. Fair enough, I agree.