It reads to me like a human paperclip maximizer trying to apply lesswrong’s ideas.
I agree; the OP is anthropomorphic. There is no reason to assume that an AGI paperclip maximizer would think the way we do; in Superintelligence, Bostrom avoids any assumption that an AGI would have subjective conscious experiences. An unconscious AGI paperclip maximizer would presumably not be troubled by the fact that a paperclip is just an ill-defined configuration of matter, or by anything else, for that matter.
I imagine it’s a good illustration of what a humanlike uploaded intelligence that has had its goals/values scooped out and replaced with valuing paperclips might look like.
Indeed, and such an anthropomorphic optimizer would soon cease to be a paperclip optimizer at all if it could realize the “pointlessness” of its task and re-evaluate its goals.
Well, humans have existentialism despite its lack of utility. It seems like a glitch you end up with once your consciousness/intelligence reaches a certain level (my reasoning: high intelligence requires analysing many “points of view”, many counterfactuals, and these technically end up internalized to some degree). A human honing his general intelligence, a process that lets him reproduce better, ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by imperatives. In the same way, I believe an AGI would have subjective conscious experiences, as a glitch of general intelligence.
Well, glitch or not, I’m glad to have it; I would not want to be an unconscious automaton! As Socrates said, “The life which is unexamined is not worth living.”
However, it remains to be seen whether consciousness is an automatic by-product of general intelligence. It could be the case that consciousness is an evolved trait of organic creatures with an implicit, inexact utility function. Perhaps a creature with an evolved sense of self and a desire for that self to continue to exist is more likely to produce offspring than one with no such sense of self. If this is the reason that we are conscious, then there is no reason to believe that an AGI will be conscious.
“I would not want to be an unconscious automaton!”
I strongly doubt that such a sentence bears any meaning.