“Indifferent AI” would be a better name than “Unfriendly AI”.
It would unfortunately come with misleading connotations. People don’t usually associate ‘indifferent’ with ‘is certain to kill you, your family, your friends and your species’. People already get confused enough about ‘indifferent’ AIs without priming them with that word.
Would “Non-Friendly AI” satisfy your concerns? That gets rid of those connotations of ‘unfriendly’ that go beyond merely being ‘something other than friendly’.
We could gear several names to have maximum impact on their intended recipients, e.g. the “Takes-Away-Your-Second-Amendment-Rights AI”, the “Freedom-Destroying AI”, the “Will-Make-It-So-No-More-Beetusjuice-Is-Sold AI”, etc. All of these are, strictly speaking, true properties of UFAIs.
Uncaring AI? The correlate could stay ‘Friendly AI’, as I presume that acting in a friendly fashion is easier to identify than a capacity for emotions/values and emotion/value-motivated action.
Reading this comment encourages me to think that ‘Unfriendly AI’ is part of a political campaign to rally humans against a competing intelligent group by manipulating their feelings negatively toward that group. It is as if we believe the Nazis were not wrong to use propaganda to advance their race; they merely had the wrong target, or started too late to succeed, which is something LessWrongers worry about doing with AI.
Should we have a discussion about whether it is immoral to campaign against AI we deem unfriendly, or would it be better to simply join the campaign against AI by downvoting any suggestion that this might be so? Is the consideration that seeking only FAI might be immoral itself a basilisk?