OK, but if we are positing the creation of artificial superintelligences, why wouldn’t they also be morally superior to us? I find this fear of a superintelligence wanting to tile the universe with paperclips absurd; why is that likely to be the summum bonum to a being vastly smarter than us? Aren’t smarter humans generally more benevolent toward animals than stupider humans? Why shouldn’t this hold for AIs? And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species decides that the world would be better off without us? From a larger cosmic perspective, at that point we will have given birth to gods, and can happily meet our evolutionary fate knowing that our mind children will have vastly more interesting lives than we ever could have. So I don’t really understand the problem here. I guess you could say that I have faith in the universe’s capacity to evolve life toward more intelligent and interesting configurations, because for the last several billion years this has been the case, and I don’t see any reason to think that this process will suddenly reverse itself.
if we are positing the creation of artificial superintelligences, why wouldn’t they also be morally superior to us?
If morality is a natural product of intelligence, without reference to anything else, then they would be.
If morality is not solely a product of intelligence, but also depends on some other thing X in addition to intelligence, then they might not be, because of different values of X.
Would you agree with that so far? If not, you can ignore the rest of this comment, as it won’t make much sense.
If so... a lot of folks here believe that morality is not solely a product of intelligence, but also depends on some other things, which we generally refer to as values. Two equally intelligent systems with different values might well have different moralities.
If that’s true, then if we want to create a morally superior intelligence, we need to properly engineer both its intelligence and its values.
why is [tiling the universe with paperclips] likely to be the summum bonum of a being vastly smarter than us?
It isn’t, nor does anyone claim that it is. If you’ve gotten the impression that the prevailing opinion here is that tiling the universe with paperclips is a particularly likely outcome, I suspect you are reading casually and failing to understand underlying intended meanings.
Aren’t smarter humans generally more benevolent toward animals than stupider humans?
Maybe? I don’t know that this is true. Even if it is true, it’s problematic to infer causation from correlation, and even more problematic to infer particular causal mechanisms. It might be, for example, that expressed benevolence towards animals is a product of social signaling, which correlates with intelligence in complex ways. Or any of a thousand other things might be true.
Why shouldn’t this hold for AIs?
Well, for one thing, because (as above) it might not even hold for humans outside of a narrow band of intelligence levels and social structures. For another, because what holds for humans might not hold for AIs if the AIs have different values.
And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species exterminated us?
Because I might prefer we not be exterminated.
From a larger cosmic perspective, at that point we will have given birth to gods, and can happily meet our evolutionary fate knowing that our mind children will have vastly more interesting lives than we ever could have.
If that makes you happy, great. It sounds like you’re insisting that it ought to make me happy too. I disagree. There are many types of gods I would not be happy to have replaced humanity with.
So I don’t really understand the problem here.
That’s fine. You aren’t obligated to.
I guess you could say that I have faith in the universe’s capacity to evolve life toward more intelligent and interesting configurations, because for the last several billion years this has been the case, and I don’t see any reason to think that this process will suddenly reverse itself.
Sure, you might very well have such faith.