Nick and Eliezer, are you still Singularitarians?
http://en.wikipedia.org/wiki/Singularitarian
The idea that people are actively working to bring about self-improving, smarter-than-humanity intelligences scares me, because I think you’re blind to your own ruthless selfishness (not meant pejoratively) and thus assume that something smarter than us (and therefore you) will also attempt to be kind to us, just as you perceive yourselves to be attempting to be kind to people generally.
In contrast, I don’t see either of you as Gandhi-types (here I’m referring to the archetypal elements of Gandhi’s self-cultivated image, not his actual life-in-practice). It may be a hubris-derived bias that makes you think otherwise. I don’t see any Singularitarians making any attempt to engage in minimal pleasurable resource use to maximize their ability to save currently existing lives. Instead I see thousands or millions of people dying daily, permanently, while leading Singularitarians enjoy a variety of life’s simple pleasures.
My prescriptive solution: more selfishness, fear, and paranoia on your end. Be thankful that you’re apparently (big caveat) one of the smartest entities in apparent reality, and that there’s apparently nothing of much greater intelligence seeking resources in your shared environment. Rather than consciously trying to bring about a singularity, I think we should race against a naturally occurring singularity to understand the various existential threats to us and to minimize them.
At the same time, I think we should try to realistically assess more mundane existential threats and threats to our personal persistence, and try to minimize these too, with what seems like proportionate energy and effort.
But the rationalizations for why people are trying to intentionally create a self-improving intelligence smarter than humanity seem to me to be very, very weak, and acting on them could be unnecessarily catastrophic to our existence.