The paper had nothing to do with what you talked about in your opening paragraph
What? Your post starts with:
My goal in this essay is to analyze some widely discussed scenarios that predict dire and almost unavoidable negative behavior from future artificial general intelligences, even if they are programmed to be friendly to humans.
Eli’s opening paragraph explains the “basic UFAI doomsday scenario”. How is this not what you talked about?
The paper’s goal is not to discuss “basic UFAI doomsday scenarios” in the general sense, but to discuss the particular case where the AI goes all pear-shaped EVEN IF it is programmed to be friendly to humans.
That last part (even if it is programmed to be friendly to humans) is the critical qualifier that narrows down the discussion to those particular doomsday scenarios in which the AI does claim to be trying to be friendly to humans—it claims to be maximizing human happiness—but in spite of that it does something insanely wicked.
So, Eli says:
The basic UFAI doomsday scenario is: the AI has vast powers of learning and inference with respect to its world-model, but has its utility function (value system) hardcoded. Since the hardcoded utility function does not specify a naturalization of morality, or CEV, or whatever, the UFAI proceeds to tile the universe in whatever it happens to like (which are things we people don’t like), precisely because it has no motivation to “fix” its hardcoded utility function
… and this clearly says that the type of AI he has in mind is one that is not even trying to be friendly. Rather, he talks about how its
hardcoded utility function does not specify a naturalization of morality, or CEV, or whatever
And then he adds that
the UFAI proceeds to tile the universe in whatever it happens to like
… which has nothing to do with the cases that the entire paper is about, namely the cases where the AI is trying really hard to be friendly, but doing it in a way that we did not intend.
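To make the distinction concrete, here is a minimal toy sketch (the Outcome fields and both utility functions are invented for illustration, not taken from the paper or from Eli's comment). Neither agent ever rewrites its own utility function; the only difference is what that function was pointed at in the first place.

```python
# Toy illustration only; all names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Outcome:
    paperclips: int           # something humans never cared about
    measured_smiles: int      # a crude, measurable proxy for happiness
    humans_actually_happy: bool

def ufai_utility(o: Outcome) -> float:
    # Eli's scenario: the hardcoded goal never referenced human values,
    # so the agent tiles the universe with whatever it happens to like.
    return float(o.paperclips)

def misdirected_friendly_utility(o: Outcome) -> float:
    # The paper's scenario: the goal was aimed at human happiness but is
    # operationalized through a proxy, so the agent can score highly while
    # doing something we never intended.
    return float(o.measured_smiles)

def choose(candidates, utility):
    # Both agents use the same decision rule and never "fix" their own
    # utility function; only what that function tracks differs.
    return max(candidates, key=utility)
```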
If you read the paper, all of this becomes obvious fairly quickly, but if you only skim a few paragraphs you might get the wrong impression. I suspect that is what happened.
namely the cases where the AI is trying really hard to be friendly, but doing it in a way that we did not intend.
If the AI knows what “friendly” is or what “mean” means, then your conclusion is trivially true. The problem is programming those in; that’s what FAI is all about.
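To put that point in code terms, a minimal sketch (all names hypothetical, not from the paper or the FAI/CEV literature): the agent's decision loop is trivial to write once a friendliness function exists; the open problem is that nobody knows how to write the body of that function, so what actually gets programmed in is some measurable stand-in.

```python
# Illustrative only; 'friendliness' and 'proxy_friendliness' are made-up names.

def friendliness(outcome: dict) -> float:
    """The function we would need if the AI already 'knew what friendly is'."""
    # Nobody knows how to write this body; that gap is the FAI problem.
    raise NotImplementedError("specifying 'friendly' is the open problem")

def proxy_friendliness(outcome: dict) -> float:
    """What actually gets programmed in: some measurable correlate."""
    return float(outcome.get("reported_happiness", 0.0))

def act(candidate_outcomes, utility=proxy_friendliness):
    # The decision rule itself is easy; everything hard lives in `utility`.
    return max(candidate_outcomes, key=utility)
```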
I still agree with Eli and think you’re “really failing to clarify the issue”, and claiming that xyz is not the issue does not resolve anything. Disengaging.