Or, you’re using “friendly” in the colloquial rather than strictly technical sense.
No, you’re wrong about the dichotomy there. The words were used legitimately with respect to a subjectively objective concept. But never mind that.
Of all the terms in “Unfriendly Artificial Intelligence”, I’d say ‘unfriendly’ is the most straightforward. I encourage folks to go ahead and use it, elaborating on what specifically they are referring to as the context makes necessary.
This implies I’m discouraging use of the term, which I’m not. I raised the issue to point out that, on this subject, specificity is often not supplied by context alone and needs to be made explicit.
What is confusing is when people describe a scenario in which it is central that an AI has human suffering as a positive terminal value, and then use “unfriendly” alone as the label to discuss it. The vast majority of possible minds, and the ones most overlooked, are the indifferent ones. If something applies to malicious minds but not to indifferent or benevolent ones, one can do better than describing the malicious minds as “either indifferent or malicious”, i.e. “unfriendly”.
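To make the set relationship concrete, here is a minimal sketch; the Disposition taxonomy and the function names are my own illustration, not established terminology. The point it shows: if “unfriendly” is defined purely as the complement of “friendly”, the label cannot distinguish an indifferent mind from a malicious one.

```python
from enum import Enum, auto

class Disposition(Enum):
    """Illustrative taxonomy of a mind's stance toward human values."""
    BENEVOLENT = auto()   # human welfare is a positive terminal value
    INDIFFERENT = auto()  # human welfare simply does not appear in its goals
    MALICIOUS = auto()    # human suffering is a positive terminal value

def is_friendly(d: Disposition) -> bool:
    return d is Disposition.BENEVOLENT

def is_unfriendly(d: Disposition) -> bool:
    # "Unfriendly" is defined only as the complement of "friendly",
    # so it lumps together the indifferent and the malicious cases.
    return not is_friendly(d)

assert is_unfriendly(Disposition.INDIFFERENT)
assert is_unfriendly(Disposition.MALICIOUS)
# The label alone carries no information that separates the two:
assert is_unfriendly(Disposition.INDIFFERENT) == is_unfriendly(Disposition.MALICIOUS)
```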
I would also discourage calling blenders “non-apples” when specifically referring to machines that make apple sauce. Obviously, calling a blender a “non-apple” will never be wrong. There’s nothing wrong with talking about non-apples in general, nor with distinguishing them from apples, nor with saying that a blender is an example of a non-apple, nor with saying that a blender is a special kind of non-apple that, unlike other non-apples, is an anti-apple.
But when someone describes a blender and just calls it a “non-apple”, and someone else starts talking about how almost nothing is a non-apple because most things don’t pulverize apples, and every few times the subject is raised someone assumes a “non-apple” is something that pulverizes apples, it’s time for the first person to implement low-cost clarifications to his or her communication in certain contexts.
I would use “malicious” in that context (an AI with human suffering as a terminal value). A specific kind of uFAI requires a more specific word if you expect people to distinguish it from all other uFAIs.