“Indifferent AI” would be a better name than “Unfriendly AI”.
It would unfortunately come with misleading connotations. People don’t usually associate ‘indifferent’ with ‘is certain to kill you, your family, your friends and your species’. People already get confused enough about ‘indifferent’ AIs without priming them with that word.
Would “Non-Friendly AI” satisfy your concerns? That gets rid of those connotations of ‘unfriendly’ that go beyond merely being ‘something-other-than-friendly’.
We could gear several names to have maximum impact with their intended recipients, e.g. the “Takes-Away-Your-Second-Amendment-Rights AI”, or “Freedom-Destroying AI”, “Will-Make-It-So-No-More-Beetusjuice-Is-Sold AI” etc. All strictly speaking true properties for UFAIs.
Uncaring AI? The correlate could stay ‘Friendly AI’, as I presume acting in a friendly fashion is easier to identify than the capability for emotions/values and for emotion/value-motivated action.
Reading this comment encourages me to think that Unfriendly AI is part of a political campaign to rally humans against a competing intelligent group by manipulating their feelings negatively towards that group. It is as if we believe that the Nazis were not wrong for using propaganda to advance their race; they just had the wrong target, or they started too late to succeed, which is something lesswrongers are worried about doing with AI.
Should we have a discussion about whether it is immoral to campaign against AI we deem unfriendly, or would it be better to just participate in the campaign against AI by downvoting any suggestion that this might be so? Is the consideration that seeking only FAI might be immoral a basilisk?
I prefer the selective capitalisation of “unFriendly AI”. This emphasizes that it’s just any AI other than a Friendly AI, but still gets the message across that it’s dangerous.
There are some AIs in works of fiction that you could describe as indifferent. The one in Neuromancer, for example, just wants to talk to other AIs in the universe and doesn’t try to transform all resources on Earth into material to run itself.
An AI that does try to grow itself like a cancer, on the other hand, is unfriendly.
If you talk about something like the malaria parasite, we also wouldn’t call it indifferent but unfriendly towards humans, even though it just tries to spread itself and doesn’t have the goal of killing humans.
That’s… actually a pretty good metaphor. Benign tumor AI vs. malignant tumor AI?
Eliezer assumes in the meta-ethics sequence that you cannot really ever talk outside of your general moral frame. By that assumption (which I think he is still making), an Indifferent AI would be friendly or inactive. “Unfriendly AI” better conveys the externality to human morality.
Perhaps you can never get all the way out.
But certainly someone who talks about human rights and values the survival of the species is speaking less constrained by moral frame than somebody who values only her race or her nation or her clan and considers all other humans as though they were another species competing with “us.”
How wrong am I to incorporate AI in my ideas of “us,” with the possible result that I enable a universe where AI might thrive even without what we now think of as humans? Would this not be analogous to a pure Caucasian human supporting values that lead to a future of a light-brown human race, a race with no pure Caucasians still in it? Would this Caucasian have to be judged to have committed some sort of CEV-version of genocide?
“AI” is really all of mindspace except the tiny human dot. There’s an article about it around here somewhere. PLENTY of AIs are indeed correctly incorporated in “us”, and indeed unless things go horribly wrong “what we now think of as humans” will be extinct, replaced with these vast and alien things. Think of Daleks and GLaDOS and Cthulhu and Babyeaters here. These are mostly as close to friendly as most humans are, and we’re trusting humans to make the seed FAI in the first place.
Unfriendly AIs are not like that. The process of evolution itself is basically a very stupid UFAI. Or a pandemic. Or the intuition pump in this article: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ . Or even something like a supernova. It’s not a character, not even an “evil” one.
((Yeah, this is a gross oversimplification; I’m aiming mostly at causing true intuitions here, not true explicit beliefs. The phenomenon is related to metaphor.))
Interesting point.
Friendly AI has such a wonderfully anthropocentric bias! If the Babyeaters (a non-human natural intelligence species) had what they called a Friendly AI, it would be a UFAI to humans, just as the Babyeaters are an Unfriendly Natural Intelligence to humans.
Friendly AI as used here would be a meaningless concept in a universe without humans. Friendliness is not a property of the AI, it is a moral (or aesthetic) judgement on an AI made by certain humans.
Gray wolves and dogs are the same species. Dogs are basically the FNI (Friendly Natural Intelligence) version of a wolf, which, on the actual scale of such things, is an Indifferent Natural Intelligence but would easily pass as an Unfriendly Natural Intelligence: wolves are pretty dangerous to have around because they will violently assert their interests over ours.
FAI seems to me to be the domesticated version of AI. When you domesticate something smarter than you are, an alternative value-laden descriptor might be SAI, Slave Artificial Intelligence. But that is not a term people favoring the development of FAI would be likely to favor.