I think I’ve found the source of what’s been bugging me about “Friendly AI”
In the comments on this post (which in retrospect I feel was not very clearly written), someone linked me to a post Eliezer wrote five years ago, “The Hidden Complexity of Wishes.” After reading it, I think I’ve figured out why the term “Friendly AI” is used so inconsistently.
This post explicitly lays out a view that seems to be implicit in, but not entirely clear from, many of Eliezer’s other writings. That view is this:
There are three kinds of genies: Genies to whom you can safely say “I wish for you to do what I should wish for”; genies for which no wish is safe; and genies that aren’t very powerful or intelligent.
Even if Eliezer is right about that, I think that view of his has led to confusing usage of the term “Friendly AI.” If you accept Eliezer’s view, it may seem to make sense not to worry too much about whether by “Friendly AI” you mean:
(1) A utopia-making machine (the AI “to whom you can safely say, ‘I wish for you to do what I should wish for.’”), or
(2) A non-doomsday machine (a doomsday machine being the AI “for which no wish is safe.”)
And it would make sense not to worry too much about that distinction if you were talking only to people who also believe those two concepts are very nearly coextensive for powerful AI. But failing to make that distinction is obviously going to be confusing when you’re talking to people who don’t think that, and it will make it harder to communicate to them both your ideas and your reasons for holding them.
One solution would be to more frequently link people back to “The Hidden Complexity of Wishes” (or other writing by Eliezer that makes similar points—what else would be suitable?). But while it’s a good post, and Eliezer makes some very good points with the “Outcome Pump” thought experiment, the argument isn’t entirely convincing.
As Eliezer himself has argued at great length (see also section 6.1 of this paper), humans’ own understanding of our values is far from perfect. None of us are, right now, qualified to design a utopia. But we do have some understanding of our own values; we can identify some things that would be improvements over our current situation while marking other scenarios as “this would be a disaster.” It seems like there might be a point in the future where we can design an AI whose understanding of human values is similarly serviceable, but no better than our own.
Maybe I’m wrong about that. But even if I am, until there’s a better, easy-to-read explanation of why I’m wrong for everybody to link to, it would be helpful to have different terms for (1) and (2) above. Perhaps call them “utopia AI” and “safe AI,” respectively?