Ah, I see. I misunderstood your definition of “paperclip maximiser”; I assumed paperclip maximiser and Unfriendly AI were equivalent. Sorry.
Next question: if maximising paperclips or relentless self-optimisation is a dead-end goal, what is an example of a non-dead-end goal? And is there a clear border between the two, one that will be obvious to the AI?
To my mind, if paperclip maximisation is a dead end, then so is everything else. The Second Law of Thermodynamics will catch up with you eventually. Nothing you create will endure forever. The only thing you can do is try to maximise your utility for as long as possible, and if that means paperclips, then so be it.