As the article points out, shared biological needs do little to deter the bear or chimpanzee from killing you. An AI could be perfectly human—the very opposite of alien—and far more dangerous than Hitler or Dahmer.
The article is well written but dangerously wrong in its core point. AI will be far more human than alien. But alignment/altruism is mostly orthogonal to the human-vs-alien axis.
Shared biological needs aren’t a guarantee of friendliness, but they do restrict the space of possibilities significantly—enough, IMO, that hopes of peaceful contact aren’t entirely moot. And more constraints come with them: if we ever meet aliens, they will probably be social organisms like us, able to coordinate and cooperate like us, and thus can probably be reasoned with somehow. Note that we can coexist with bears and chimpanzees; we just need to not be really fucking stupid about it. Bears aren’t going to be friendly with us, but that doesn’t mean they kill for kicks or have no sense of self-preservation. The communication barrier is a huge issue too: if you could tell the bear “don’t eat me and I’ll bring you tastier food,” I bet things would smooth out.
AI is not subject to those constraints. “Being optimised to produce human-like text” is a property of LLMs specifically, not of all AI, and even then the mapping to “being human-like” is mostly superficial; LLMs can still fail in weird, alien ways. I also don’t expect AGI to be just a souped-up LLM. I expect it to contain a core long-term reasoning/strategizing RL model, more akin to AlphaGo than to GPT-4, and that can be far more alien.