We should remember that we aren't talking about true Friendly AI here, but AI in charge of lesser tasks such as, in the example, running a factory. There will be many things the AI doesn't know because it doesn't need to, including how to defend itself against being shut down (I see no logical reason why that would be necessary for running a paperclip factory). Combine that with the limits on intelligence needed for such lesser tasks, and failure modes become far less likely.
Would a human be bound to "at least reasonable judgement" if given superintelligent ability?