Shane, I think you’re underestimating the idiosyncrasy of morality. Suppose that I show you the sentence “This sentence is false.” Do you convert it to ASCII, add up the numbers, factorize the result, and check if there are two square factors? No; it would be easy enough for you to do so, but why bother? The concept “sentences whose ASCII conversion of their English serialization sums to a number with two square factors” is not, to you, an interesting way to carve up reality.
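Just to make the arbitrariness vivid, here is a minimal Python sketch of that check, under one assumed reading of "two square factors" (the sum has at least two distinct perfect-square divisors greater than 1). The point is not the result but that nothing interesting hinges on computing it.

```python
# Illustrative only: the deliberately uninteresting check described above,
# under one assumed reading of "two square factors" (at least two distinct
# perfect-square divisors greater than 1).

def ascii_sum(sentence: str) -> int:
    """Sum the character codes of the sentence's ASCII serialization."""
    return sum(ord(c) for c in sentence)

def square_factors(n: int) -> list:
    """Perfect-square divisors of n that are greater than 1."""
    return [d * d for d in range(2, int(n ** 0.5) + 1) if n % (d * d) == 0]

total = ascii_sum("This sentence is false.")
print(total, square_factors(total), len(square_factors(total)) >= 2)
```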
Suppose that, driving along the highway, I see someone riding a motorcycle at high speed, zipping between cars. It’s too late for me to call out to them, but I might think something along the lines of, “Now there goes someone who needs to be wearing a helmet.” Why do I think this? Why is it a natural concept to me? It doesn’t play a role in my predictions—so far as prediction is concerned, I predict that this guy will continue not wearing a helmet, and has a high probability of ending up as a smear on the asphalt. No, the reason why this is a natural thought to me is that human life is something I care about; that is, it plays a direct role as one of my current terminal values.
A moderately strong and unFriendly intelligence, operating in the current world without yet having replaced that world with paperclips, would certainly find it natural to form the category of “Things that (some) humans approve of”, and contrast it to “Things that will trigger a nuclear attack against me before I’m done creating my own nanotechnology.” But this category is not what we call “morality”. It naturally—from the AI’s perspective—includes things like bribes and deception, not just the particular class of human-approval-eliciting phenomena that we call “moral”.
Is it worth factoring out phenomena that elicit human feelings of righteousness, and working out how (various) humans reason about them? Yes, because this is an important subset of ways to persuade the humans to leave you alone until it’s too late; but again, that natural category is going to include persuasive techniques like references to religious authority and nationalism.
But what if the AI encounters some more humanistic, atheistic types? Then the AI will predict which of several available actions is most likely to make an atheistic humanist human show sympathy for the AI. This naturally leads the AI to model and predict the human’s internal moral reasoning—but that model isn’t going to distinguish anything along the lines of moral reasoning the human would approve of under long-term reflection, or moral reasoning the human would approve of knowing the true facts. That’s just not a natural category to the AI, because the human isn’t going to get a chance for long-term reflection, and the human doesn’t know the true facts.
The natural, predictive, manipulative question is not “What would this human want knowing the true facts?”, but “What will various behaviors make this human believe, and what will the human do on the basis of these various (false) beliefs?”
In short, all models that an unFriendly AI forms of human moral reasoning, while we can expect them to be highly empirically accurate and well-calibrated to the extent that the AI is highly intelligent, would be formed for the purpose of predicting human reactions to different behaviors and events, so that these behaviors and events can be chosen manipulatively.
But what we regard as morality is an idealized form of such reasoning—the idealized abstracted dynamic built out of such intuitions. The unFriendly AI has no reason to think about anything we would call “moral progress” unless it is naturally occurring on a timescale short enough to matter before the AI wipes out the human species. It has no reason to ask the question “What would humanity want in a thousand years?” any more than you have reason to add up the ASCII letters in a sentence.
Now it might be only a short step from a strictly predictive model of human reasoning to the idealized abstracted dynamic of morality. If you think about the point of CEV, it’s that you can get an AI to learn most of the information it needs to model morality by looking at humans—and that the step from these empirical models to idealization is relatively short, traversable by the programmers directly or with the aid of manageable amounts of inductive learning. Granted, CEV’s current description is not precise, and any realistic description of idealization would probably be more complicated.
But regardless, if the idealized computation we would think of as describing “what is right” is even a short distance of idealization away from strictly predictive and manipulative models of what humans can be made to think is right, then “actually right” is still something that an unFriendly AI would literally never think about, since humans have no direct access to “actually right” (the idealized result of their own thought processes) and hence it plays no role in their behavior and hence is not needed to model or manipulate them.
Which is to say, an unFriendly AI would never once think about morality—only about a certain psychological problem in manipulating humans, where the only thing that matters is what you can make them believe or do. There is no natural motive to think about anything else, and no natural empirical category corresponding to it.