, besides pointing out that its current actions are suboptimal for its goal in the long run?
That sounds like a good rational argument to me. Is the paperclip maximiser supposed to have a different rationality, or just different values?
Another way to put it is that the rationality of killing sprees depends on the agent's values. I haven't read much of this site, but I'm getting the impression that a major project here is to accept this... and then figure out which initial values to give an AI. Simply ensuring the AI will be rational is not enough to protect our values.
Like so much material on this site, that tacitly assumes values cannot be reasoned about.