In other words, there are things that are friends. There are things that are enemies. It takes a separate assertion that those are the only two categories (as opposed to believing something like “some people are indifferent to me”).
In relation to AI, there are three categories: malicious AI (the Straumli Perversion), indifferent AI (the AI of Accelerando), and FAI. When EY says uFAI, he means both the malicious and the indifferent kinds. But it is a distinct insight to say that indifferent AI is practically as dangerous as malicious AI. For example, it is not obvious that an AI whose only goal is to leave the Milky Way galaxy (and which is capable of pursuing that goal without directly harming humanity) is too dangerous to turn on. Leaving aside the motivation for creating such an entity, I would certainly agree with EY that it has a substantial chance of posing an existential risk to humanity.
It’s an equivocation fallacy: “uFAI” lumps together the malicious and the merely indifferent.