It seems to me that humanity is faced with an epochal choice in this century, whether to:
a) Obsolete ourselves by submitting fully to the machine superorganism/superintelligence and embracing our posthuman destiny, or
b) Reject the radical implications of technological progress and return to various theocratic and traditionalist forms of civilization which place strict limits on technology and consider all forms of change undesirable (see the 3000-year reign of the Pharaohs, or the million-year reign of the hunter-gatherers)
Is there a plausible third option? Can we really muddle along for much longer with this strange mix of religious “man is created in the image of God”, secular humanist “man is the measure of all things”, and transhumanist “man is a bridge between animal and Superman” ideologies? And why do even Singularitarians insist that there must be a happy ending for Homo sapiens, when all the scientific evidence suggests otherwise? I see nothing wrong with obsoleting humanity and replacing it with vastly superior “mind children.” As far as I’m concerned, this should be our civilization’s summum bonum, a rational and worthy replacement for bankrupt religious and secular humanist ideals. Robots taking human jobs is another step toward bringing the curtain down permanently on the dead-end primate dramas, so it’s good news that should be celebrated!
Robots taking human jobs is another step toward bringing the curtain down permanently on the dead-end primate dramas
Well, so is large-scale primate extermination leaving an empty husk of a planet.
The question is not so much whether the primates exist in the future, but what exists in the future and whether it’s something we should prefer to exist. I accept that there probably exists some X such that I prefer (X + no humans) to (humans), but it certainly isn’t true that for all X I prefer that.
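To make the quantifier structure explicit (Pref(a, b) here is just my shorthand for “I prefer a to b”, not standard notation), the claim is:
\exists X : \mathrm{Pref}(X + \text{no humans},\ \text{humans}), \qquad \neg\,\forall X : \mathrm{Pref}(X + \text{no humans},\ \text{humans})
That is, some replacements would be preferable to us, but it does not follow that any given replacement would be.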
So whether bringing that curtain down on dead-end primate dramas is something I would celebrate depends an awful lot on the nature of our “mind children.”
AlphaOmega is explicitly in favor of this, according to his posting history.
OK, but if we are positing the creation of artificial superintelligences, why wouldn’t they also be morally superior to us? I find this fear of a superintelligence wanting to tile the universe with paperclips absurd; why is that likely to be the summum bonum of a being vastly smarter than us? Aren’t smarter humans generally more benevolent toward animals than stupider humans? Why shouldn’t this hold for AIs? And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species decides that the world would be better off without us? From a larger cosmic perspective, at that point we will have given birth to gods, and can happily meet our evolutionary fate knowing that our mind children will have vastly more interesting lives than we ever could have. So I don’t really understand the problem here. I guess you could say that I have faith in the universe’s capacity to evolve life toward more intelligent and interesting configurations, because for the last several billion years this has been the case, and I don’t see any reason to think that this process will suddenly reverse itself.
if we are positing the creation of artificial superintelligences, why wouldn’t they also be morally superior to us?
If morality is a natural product of intelligence, without reference to anything else, then they would be.
If morality is not solely a product of intelligence, but also depends on some other thing X in addition to intelligence, then they might not be, because of different values of X.
Would you agree with that so far? If not, you can ignore the rest of this comment, as it won’t make much sense.
If so... a lot of folks here believe that morality is not solely a product of intelligence, but also depends on some other things, which we generally refer to as values. Two equally intelligent systems with different values might well have different moralities.
If that’s true, then if we want to create a morally superior intelligence, we need to properly engineer both its intelligence and its values.
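To make that concrete, here is a deliberately toy sketch in Python (every name in it, like paperclip_value and best_action, is invented for illustration; this is not anyone’s actual proposal). Two agents share the identical decision procedure, which stands in for equal intelligence, and differ only in the value function they hand to it:

# Hypothetical world: allocate 10 units of matter between paperclips and
# primate habitat. Each action is a (paperclips, habitat) split.
ACTIONS = [(clips, 10 - clips) for clips in range(11)]

def paperclip_value(outcome):
    clips, habitat = outcome
    return clips            # this agent's values: only paperclips count

def primate_value(outcome):
    clips, habitat = outcome
    return habitat          # this agent's values: only habitat counts

def best_action(value_fn, actions):
    # The shared "intelligence": an exhaustive search for the
    # value-maximizing option. Identical for both agents.
    return max(actions, key=value_fn)

print(best_action(paperclip_value, ACTIONS))   # -> (10, 0)
print(best_action(primate_value, ACTIONS))     # -> (0, 10)

The search step is identical for both agents; nothing in it supplies the values. That is the sense in which intelligence and values come apart, and why both have to be engineered.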
why is [tiling the universe with paperclips] likely to be the summum bonum of a being vastly smarter than us?
It isn’t, nor does anyone claim that it is. If you’ve gotten the impression that the prevailing opinion here is that tiling the universe with paperclips is a particularly likely outcome, I suspect you are reading casually and failing to understand underlying intended meanings.
Aren’t smarter humans generally more benevolent toward animals than stupider humans?
Maybe? I don’t know that this is true. Even if it is true, it’s problematic to infer causation from correlation, and even more problematic to infer particular causal mechanisms. It might be, for example, that expressed benevolence towards animals is a product of social signaling, which correlates with intelligence in complex ways. Or any of a thousand other things might be true.
Why shouldn’t this hold for AIs?
Well, for one thing, because (as above) it might not even hold for humans outside of a narrow band of intelligence levels and social structures. For another, because what holds for humans might not hold for AIs if the AIs have different values.
And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species exterminated us?
Because I might prefer we not be exterminated.
From a larger cosmic perspective, at that point we will have given birth to gods, and can happily meet our evolutionary fate knowing that our mind children will have vastly more interesting lives than we ever could have.
If that makes you happy, great. It sounds like you’re insisting that it ought to make me happy too. I disagree. There are many types of gods I would not be happy to have replaced humanity with.
So I don’t really understand the problem here.
That’s fine. You aren’t obligated to.
-- you might say that I have faith in the universe’s capacity to evolve life toward more intelligent and interesting configurations, because for the last several billion years this has been the case and I don’t see any reason to think that this process will suddenly end.
Sure, you might very well have such faith.