In the real world, if I believe that “anyone who isn’t my enemy is my friend” and you believe that “anyone who isn’t my friend is my enemy”, we believe different things.
Can you explain what those things are? I can’t see the distinction. The first follows necessarily from the second, and vice-versa.
Consider three people: Sam, Ethel, and Doug.
I’ve known Sam since we were kids together; we enjoy each other’s company and act in one another’s interests. I’ve known Doug since we were kids together; we can’t stand one another and act against one another’s interests. I’ve never met Ethel in my life and know nothing about her; she lives on the other side of the planet and has never heard of me.
It seems fair to say that Sam is my friend, and Doug is my enemy. But what about Ethel?
If I believe “anyone who isn’t my enemy is my friend,” then I can evaluate Ethel for enemyhood. Do we dislike one another? Do we act against one another’s interests? No, we do not. Thus we aren’t enemies… and it follows from my belief that Ethel is my friend.
If I believe “anyone who isn’t my friend is my enemy,” then I can evaluate Ethel for friendhood. Do we like one another? Do we act in one another’s interests? No, we do not. Thus we aren’t friends… and it follows from my belief that Ethel is my enemy.
I think it more correct to say that Ethel is neither my friend nor my enemy. Thus, I consider Ethel an example of someone who isn’t my friend, and isn’t my enemy. Thus I think both of those beliefs are false. But even if I’m wrong, it seems clear that they are different beliefs, since they make different predictions about Ethel.
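To make the difference concrete, here is a minimal sketch in Python of the point above, assuming a third “neutral” category is allowed alongside “friend” and “enemy”; the names are just the ones from the example:

```python
# The example relationships: Sam is a friend, Doug is an enemy,
# and Ethel is neither (a "neutral" third category).
relationships = {"Sam": "friend", "Doug": "enemy", "Ethel": "neutral"}

def not_enemy_is_friend(person):
    """Rule 1: anyone who isn't my enemy is my friend."""
    return "enemy" if relationships[person] == "enemy" else "friend"

def not_friend_is_enemy(person):
    """Rule 2: anyone who isn't my friend is my enemy."""
    return "friend" if relationships[person] == "friend" else "enemy"

for person in relationships:
    print(f"{person}: rule 1 says {not_enemy_is_friend(person)}, "
          f"rule 2 says {not_friend_is_enemy(person)}")
```

The two rules agree about Sam and Doug but give opposite verdicts about Ethel, which is the sense in which they make different predictions.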
Thanks—that’s interesting.
It seems to me that this analysis only makes sense if you actually allow the non-excluded middle of “neither my friend nor my enemy”. Once you’ve accepted that the world is neatly carved up into “friends” and “enemies”, it seems you’d just say “I don’t know whether Ethel is my friend or my enemy”; I don’t see why the person in the first case doesn’t just as well evaluate Ethel for friendhood, and thus conclude she isn’t an enemy. Note that one who believes “anyone who isn’t my enemy is my friend” should thus also believe “anyone who isn’t my friend is my enemy” as a (logically equivalent) corollary.
Am I missing something here about the way people talk / reason? I can’t really imagine thinking that way.
Edit: In case it wasn’t clear enough that they’re logically equivalent:
Edit: long proof was long.
¬Fx → Ex ≡ Fx ∨ Ex ≡ ¬Ex → Fx
I’m guessing that the difference in the way language is actually used is a matter of which we are being pickier about, and which happens “by default”.
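If it helps, here is a quick brute-force check of that equivalence in plain Python, treating “x is my friend” and “x is my enemy” as ordinary two-valued propositions (the “neither” case is just the assignment where both are false):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Enumerate every truth assignment to "x is my friend" (F) and "x is my enemy" (E).
for friend, enemy in product([True, False], repeat=2):
    not_enemy_is_friend = implies(not enemy, friend)   # ¬Ex → Fx
    not_friend_is_enemy = implies(not friend, enemy)   # ¬Fx → Ex
    assert not_enemy_is_friend == not_friend_is_enemy == (friend or enemy)

print("¬Ex → Fx, ¬Fx → Ex, and Fx ∨ Ex agree on every assignment")
```

The assertion never fails, which is just the truth-table version of the one-line proof above.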
Yes, I agree that if everyone in the world is either my friend or my enemy, then “anyone who isn’t my enemy is my friend” is equivalent to “anyone who isn’t my friend is my enemy.”
But there do, in fact, exist people who are neither my friend nor my enemy.
If “everyone who is not my friend is my enemy”, then there does not exist anyone who is neither my friend nor my enemy. You can therefore say that the statement is wrong, but the statements are equivalent without any extra assumptions.
ISTM that the two statements are equivalent denotationally (they both mean “each person is either my friend or my enemy”) but not connotationally (the first suggests that most people are my friends, the second suggests that most people are my enemies).
It’s an equivocation fallacy.
In other words, there are things that are friends. There are things that are enemies. It takes a separate assertion that those are the only two categories (as opposed to believing something like “some people are indifferent to me”).
In relation to AI, there are malicious AIs (the Straumli Perversion), indifferent AIs (the Accelerando AIs), and FAI. When EY says uFAI, he means both malicious and indifferent. But it is a distinct insight to say that an indifferent AI is practically as dangerous as a malicious AI. For example, it is not obvious that an AI whose only goal is to leave the Milky Way galaxy (and is capable of trying without directly harming humanity) is too dangerous to turn on. Leaving aside the motivation for creating such an entity, I certainly would agree with EY that such an entity has a substantial chance of being an existential risk to humanity.