A new study reveals we can no longer tell AI-generated faces from real human faces

The faces people rate as most "human-like" are now AI-generated, not real people.

There is a website called This Person Does Not Exist where you can see AI-generated photos of people who have never lived. Their faces are as textured and complex as any real face. Their portrait tells the story of a life. But behind the picture, there is no life. Their nose has never smelled wood smoke, and their lips have never been kissed. The wrinkles marking their faces are not time-earned, and those eyes betray no wisdom. Their face is a simulacrum — an imitation of existence.

The problem, as French philosopher Jean Baudrillard understood, is that over time, a simulacrum may replace the reality it supposedly represents. So, too, with AI faces. We know these people do not exist only because they occupy a website called This Person Does Not Exist. If their faces appeared on Facebook profiles, in Tinder messages, or alongside sports journalists’ bylines, we’d be none the wiser.

For a long time, computer-generated faces have been stuck in the uncanny valley. They look passably human, but not quite right. Millions of years of evolution have trained us to recognize the unnatural and the deviant, and AI faces are wrong in ways we often can’t quite spell out.

According to a new paper published in Psychological Science, that is no longer the case.

Dubious eyes

It has been known for a while that AI-generated faces are often judged to be more human-like than actual human faces. What interested Miller et al. was that this is not true across all types of faces. Notably, white AI-generated faces tend to be perceived as less uncanny, and more realistic, than AI-generated faces of other racial backgrounds. In other words, existing AI software tends to be better at producing hyperrealistic white faces than Black, Asian, or Hispanic ones. It isn’t entirely clear why.

This prompted Miller et al. to investigate what aspects of AI photos, in general, lead us to perceive them as realistic or not. If we are moving into a world of increasing reliance on AI, not least in sectors like entertainment or law enforcement, then understanding these biases will be more crucial than ever. The study aimed to explore not only AI’s technological capabilities but also its societal and psychological impact on everyday life.

Rate this face

The experiment involved 610 participants, with an average age of 35 years, who rated AI-generated and real human faces on one of 14 attributes. They were asked, for example, to say how happy a face seemed or how attractive it was, and whether its eyes looked “alive” or more “uncanny valley.” And, of course, they had to say whether they thought each portrait was AI-generated or real. Each participant was shown 100 faces, one at a time, in random order, and then rated their confidence in each guess on a scale from 0 (not at all confident) to 100 (completely confident).

The results led to three notable conclusions. The first is that, in general and on average, participants could not tell the difference between real faces and AI-generated faces. In fact, one of the most striking findings was that the top three “most humanlike” faces, as judged by the participants, were actually AI-generated. Second, white AI faces were more likely to be judged as human-like than real white human faces. Faces from other racial groups showed no similar bias toward AI.

Finally, in an observation repeated often in these kinds of studies, those who were worst at distinguishing between AI and real people were the most confident in their judgments. Self-confidence often correlates with incompetence.
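To make that last point concrete, here is a minimal, purely illustrative sketch of how one might check whether confidence tracks accuracy in a task like this. It is not the authors’ code or data: the participant and trial counts echo the study, but the skill levels, the confidence model, and any numbers it prints are invented assumptions for demonstration only.

```python
# Hypothetical sketch (not the study's code or data): does confidence track accuracy?
import random
import statistics

random.seed(42)

N_PARTICIPANTS = 610   # sample size reported in the article
N_TRIALS = 100         # each participant judged 100 faces

participants = []
for _ in range(N_PARTICIPANTS):
    # Assume each participant has some unknown skill at spotting AI faces.
    skill = random.uniform(0.45, 0.75)  # probability of a correct AI/real call
    correct = [random.random() < skill for _ in range(N_TRIALS)]
    accuracy = sum(correct) / N_TRIALS
    # Toy assumption: confidence is barely tied to skill, so poor performers
    # can still report high confidence (the pattern the study describes).
    confidence = min(100, max(0, random.gauss(70, 15) - 20 * (skill - 0.6)))
    participants.append((accuracy, confidence))

accuracies = [a for a, _ in participants]
confidences = [c for _, c in participants]

def pearson(xs, ys):
    """Plain Pearson correlation, written out to stay dependency-free."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"mean accuracy:   {statistics.fmean(accuracies):.2f}")
print(f"mean confidence: {statistics.fmean(confidences):.1f}")
print(f"accuracy-confidence correlation: {pearson(accuracies, confidences):+.2f}")
```

In a toy setup like this, a correlation near zero or below zero is what “confidence is a poor guide to accuracy” looks like in the data; the study’s actual analysis and effect sizes are in the paper itself.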

Biases reflecting biases

The study by Miller et al. has far-reaching implications that extend beyond technology and psychology into the real world. The fact that there are still racial biases in AI-generated faces underscores the need for more diverse training data in the development of AI. If AI is trained on human data, and human data is biased, then of course AI will be biased. The problem is that, by reflecting existing inequalities, AI risks exacerbating them if left unaddressed. In fields heavily reliant on AI, such as facial recognition for security purposes, it matters that white faces are recognized differently from faces of other racial groups.

The study also sheds light on the psychology of how we interact with AI and, especially, the overconfidence we have in our ability to tell real from artificial. If our news feeds and social media pages are saturated with deepfakes and AI-generated content, then telling what is true or not becomes nearly impossible. We can no longer so easily say, “Oh, I can just tell it’s AI.” Most of the time, you won’t be able to, and the more confident you are in your ability to do so, the worse you are likely to be.
