Uncanny Valley
"We're all dwelling in uncanny valley now," writes Maureen Dowd in a 4 October New York Times piece referencing AI-generated actor Tilly Norwood. So, what's "uncanny" – and why is it a "valley"?
Coined in 1970 by Japanese roboticist Masahiro Mori (森 政弘), the term “uncanny valley” describes the dip in emotional comfort we feel when something non-human comes very close to looking human without quite getting there.

So, reading the graph that accompanied his thinking from left to right: a robot that looks like (say) WALL-E is cute; a robot that looks vaguely human is fascinating; a robot that looks nearly human but has dead eyes, strange skin or six fingers is eerie and disturbing; then comes a real human being, and we’re back to comfort.
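If you haven’t seen Mori’s figure, it plots “affinity” (shinwakan) against human likeness, and the valley is the sharp dip just before full humanness. As a rough illustration – with entirely made-up values, since Mori’s 1970 graph was conceptual rather than empirical – the shape looks something like this in Python:

```python
# Illustrative sketch of the uncanny valley curve.
# Values are hypothetical; Mori's original graph carried no data.
import numpy as np
import matplotlib.pyplot as plt

# Human likeness from 0 (industrial robot) to 1 (healthy human)
likeness = np.array([0.0, 0.2, 0.4, 0.6, 0.75, 0.85, 0.95, 1.0])
# Affinity rises with likeness, plunges when "almost human", then recovers
affinity = np.array([0.0, 0.3, 0.6, 0.8, -0.4, -0.6, 0.5, 1.0])

plt.plot(likeness, affinity, marker="o")
plt.axhline(0, color="grey", linewidth=0.5)  # neutral-affinity baseline
plt.xlabel("Human likeness")
plt.ylabel("Affinity (shinwakan)")
plt.title("Uncanny valley (illustrative)")
plt.annotate("the valley", xy=(0.85, -0.6), xytext=(0.55, -0.35),
             arrowprops=dict(arrowstyle="->"))
plt.show()
```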
Why the discomfort? There are several overlapping theories.
One is evolutionary psychology, which suggests that we may be hardwired to detect subtle abnormalities in other humans as a survival mechanism – helping us avoid disease, death or deception. Another is category confusion, where our brains struggle to classify a figure as either human or object, and that mismatch causes mental discomfort. A third theory involves an empathy breakdown: when a figure mimics human appearance but lacks natural emotion, fluidity or warmth, it can feel soulless – like a zombie. Finally, there’s the violation of expectations theory, which posits that we instinctively expect human-like faces to move and express themselves in familiar ways, and when they don’t, we feel disturbed.

To avoid plunging into the valley, creators and designers often deliberately limit realism. In the 2009 film Avatar, for example, the Na’vi were humanoid, but their stylised blue features kept them well clear of the valley: James Cameron’s team chose exaggerated proportions and alien aesthetics to invite empathy without edging too close to human.
Traditionally, then, the uncanny valley referred to humanoid robots or animated characters whose realism fell just short, causing emotional unease – the sense that something’s not quite right here. But with today’s rapid advances in AI-generated imagery, video and voice, we’re entering more complex territory – one in which the uncanny valley is less about appearance and more about authenticity.

AI models can now generate photorealistic faces, seamless video and lifelike speech with astonishing accuracy. What once looked robotic and slightly wrong is now often indistinguishable from reality. We’ve effectively crossed the original uncanny valley in many domains. But instead of resolving the discomfort, this new realism has created a fresh kind of unease: not because something looks off, but because we can’t be sure whether it’s real or fake. This uncertainty is giving rise to what some are calling a “second uncanny valley”.

This erosion of trust has serious implications. In the realm of politics, a single convincing fake video can cause panic or discredit a leader. [Ed: Cue the 2025 film Mountainhead.] In online relationships and scams, fabricated personas can manipulate others emotionally or financially. In art, journalism and history, the boundaries of truth are blurring. The human brain, which evolved to trust what it could see and hear, is now confronted by content that can be entirely fabricated yet visually flawless, e.g. is that really a chimpanzee evading the police on that motor scooter?
At the same time, the rise of synthetic humans forces us to reconsider how we respond to what looks like a person. Should we empathise with a virtual being who displays distress or affection? What happens when we become emotionally attached to something that has no consciousness? And conversely, what happens to our empathy for real people when we grow accustomed to treating lifelike imposters as disposable simulations? All modern questions.
PS: The New York Times piece by Maureen Dowd is HERE.
Video: Emily Blunt – “Good Lord, we’re screwed.”
Wonderful post, Remo, thank you. Well thought out, clearly explained, and spot-on choices of images to illustrate your points. Yesterday I led a panel discussion entitled "Generative AI: Friend or Foe to Writers?" within an online writers' conference, where we were wrestling with similar kinds of questions. How do we navigate through, or with, this new technology? And what is it going to cost us, both individually and collectively? No easy answers available just yet – but thank you for at least wading into this territory!