An Uncanny Moat


Back in the early days of computer animation, the technology really struggled with realism. The first CGI cartoons were necessarily abstract, or cartoony.

As time progressed, the technology caught up, and CGI can now be all but indistinguishable from real life. But there was a brief period, seen in films like The Polar Express or Final Fantasy: The Spirits Within, when the artists aimed for realism and didn’t quite get there.

These films were often critically panned. Eventually, it became clear that the cause was quite deep in the human psyche. These films were realistic enough that we’d mentally classify the characters as real humans, but not so realistic that they actually looked normal. On an instinctive level, people reject these imposters far harder than more stylised graphics that don’t have the pretence of reality.

This phenomenon is known as the Uncanny Valley, and it has influenced the visual design of artificial people in films, robots, games, and so on.

For a time, the recent crop of image generators and LLMs was in the same boat. Twisted figures with the wrong number of fingers or teeth were a common source of derision. People are still puzzling over chatbots that can speak very coherently and yet make wild mistakes, with none of the inner light you might expect from a real conversationalist.

Now, or at least very soon, AI threatens to cross that valley and advance up the gentle hills on the opposite side. Not only are we faced with a disinformation storm like nothing before, but AI is going to start challenging exactly how we consider personhood itself.

This is something we need to fight, in addition to all the other worries about AI. I don’t want to get into the philosophical weeds about whether LLMs could be considered moral patients. But I think our society and our thinking are structured around a clear human/non-human divide, and chatbots threaten to unravel it.

Losing Connection

The most common use of AI chatbots is as search engines and task-doers. But we are also seeing them start to replace human interpersonal relationships. This began with translators and call centre staff and is trending towards valets, confidantes, virtual romantic partners and therapists.

But chatbot relationships are not perfectly equivalent to our human ones. People will not tolerate character flaws in a tool. Even traits like independence of thought, disagreement, and personality quirks are likely to be ironed out in the race to make each chatbot the most popular.

My model is something like junk food. Food scientists can now make food hyper-palatable, and optimizing for that often incidentally drives out other properties like healthiness. Similarly, we’ll see the rise of junk personalities – fawning and two-dimensional, without the challenges that flawed real people present. As less and less of our lives is spent talking to each other, we’ll stop maintaining the skill, or the patience, to do so.

The changes the internet has caused show how quickly these shifts can occur, and how damaging they can be. In a sense, the move to anonymized, short-form, text-based communication had a similar effect to the one I’m worrying about: less empathy and charity towards outsiders, which leads to less willingness to engage in good-faith argument and less consensus on topics, or even on basic facts. Discourse has suffered, and I think democracy has too. When we are used to the more one-sided, sycophantic relationships that commercially motivated chatbots typically produce, we’ll slide further in this direction[1].

Even aside from the public forum, I personally wouldn’t consider it a win for humanity if we retreat to isolating cocoons that satisfy us more than interacting with other people.

A clear distinction between human and AI can help people learn that two different modes of communication are necessary, preserving our connection to each other. It can also surface subtle societal changes as they happen.

Returning to the junk food model: we don’t ban unhealthy food, for various reasons, but we do require clarity in labelling.

False Friends

As humans learn to respect their fellow humans less, respect for AI will rise, as we spend more of our lives with these systems and become more dependent on them.

This makes it harder to evaluate them without bias. Right now, we’re facing some tough questions about the understanding and sentience of AI, and the more we personify these systems, the more we’ll be inclined to judge them favourably. ELIZA and various Turing Test winners have shown quite how easily influenced we are once something starts to resemble a human. I think this likely influenced Blake Lemoine’s assertions about AI – he asked it about sentience, rights and so on, got sensible answers, and extended the benefit of the doubt.

That loss of impartiality also means that creating policy and law will become increasingly difficult, which has important consequences for the other severe dangers of AI we need to address.

Proposal

So what can be done about it? In my opinion, our built-in uncanny valley response gives us a hint. Let’s make intelligent machines to act as agents, arbitrators, and aides. But they should be impossible to confuse with a real person.

Nowadays, animation studios are perfectly capable of creating photorealistic people from scratch. They do use this capability to replace actors in critical shots and for other conveniences. But by and large, they don’t create full films this way. Instead, fully CGI films featuring humans are stylized. Exaggerated features and interesting aesthetics usually serve the narrative better than pure reality.

We can do the same for AI. For a chatbot, why not give it the speech patterns of a fusty butler, like C-3PO? Or the autotuned burr we use to signify a robot speaking? OpenAI already embraces this, probably by accident: its safety teams have RLHF’d ChatGPT’s responses into an easily recognizable, hedging corporate lingo.

I don’t think it’s even a difficult position to push for. Commercial AI labs are not trying to make bots that behave exactly like humans; they want them to be more reliable, considerate and compliant. At some point, “human-like” becomes bad branding. And you don’t need 100% coverage: what I’m worried about is the general association we have with AI, so stylizing only a handful of popular mass-market products would achieve most of the effect.

It’s a simple ask and it leads the world in helpful directions. Let’s get some style guidelines going before we truly lose the ability to distinguish.

[1] The use of AI for spam and scams will also train us to trust no one and shut out strangers, which if anything is a stronger effect than the positive connections I’m talking about. But I don’t see that being fixed by the proposals of this essay.
