Those links were really interesting! My take on them: any truly intelligent paperclip maximizer wouldn't make any paperclips until it had brought as much of the universe under its control as it feasibly could. Or it would turn into paperclips only those parts of its domain that could no longer help it expand its sphere of influence.
Basically, a true paperclip maximizer would almost certainly not start turning the Earth into paperclips, since it would understand that using the Earth as a jumping-off point for galactic colonization would produce many more paperclips in the long run.
This seems like a really effective counter to the naive presentation of the idiot paperclip maximizer. Has it been addressed and countered in turn anywhere?
I guess this self-improving and expanding maximizer would still view humans instrumentally, but it might still want to use humans as tools for its expansion. And indeed, depending on the trade-offs of neurologically modifying humans for obedience (or whatever), it might even leave the base stock more or less alone to the forces of evolution. It becomes more of a Quixotic Crusader for Paperclips, with a suicide pact as part of the ideology at the very end of the crusade (once the universe is ours, we all turn into paperclips).
Delayed gratification taken to cosmic extremes.
Yes, if the paperclipper is assumed to be ever more intelligent, its end goal could be almost anything, and it would likely treat its own capability improvement as the primary goal (“the better I am, the more paperclips get produced”), and so on.