Wouldn’t an AI following that procedure be really easy to spot? (Because it’s not deceptive, and it just starts trying to destroy things it can’t predict as it encounters them.)