People trust AI fake faces more than real ones, research suggests 

Researchers behind the study are calling for safeguards to prevent deep fakes.

Fake faces created by artificial intelligence (AI) are considered more trustworthy than images of real people, a study has found.

The results highlight the need for safeguards to prevent deep fakes, which have already been used for revenge porn, fraud and propaganda, the researchers behind the report say.

Real (R) and synthetic (S) faces were rated for trustworthiness with ‘statistically significant’ results. (Image: PNAS)

Deep fake fears

The study – by Dr Sophie Nightingale from Lancaster University in the UK and Professor Hany Farid from the University of California, Berkeley, in the US – asked participants to identify a selection of 800 faces as real or fake, and to rate their trustworthiness.

Across three separate experiments, the researchers found that the AI-created synthetic faces were rated, on average, 7.7% more trustworthy than the real faces, a difference they describe as “statistically significant”. The three faces rated most trustworthy were fake, while the four faces rated most untrustworthy were real, according to the magazine New Scientist.

AI learns the faces we like

The fake faces were created using generative adversarial networks (GANs), AI programmes in which two neural networks are pitted against each other: a “generator” produces candidate faces, while a “discriminator” tries to tell them apart from photographs of real people. Through repeated rounds of this contest, the generator learns to produce increasingly realistic faces.
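The adversarial loop can be illustrated with a deliberately tiny sketch. This is not the study’s actual face-generating model; it is a toy in which a one-parameter “generator” learns to mimic a simple number distribution (samples centred on 4.0) by trying to fool a logistic-regression “discriminator”:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real data": numbers drawn from N(4, 1). The generator must learn to mimic them.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: a single affine map g(z) = gw*z + gb applied to noise z ~ N(0, 1).
gw, gb = 1.0, 0.0
# Discriminator: logistic regression d(x) = sigmoid(da*x + dc).
da, dc = 0.0, 0.0

lr = 0.01
for step in range(2000):
    n = 64
    z = rng.normal(0.0, 1.0, n)
    fake = gw * z + gb
    real = real_batch(n)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0
    # (gradients of the binary cross-entropy loss, computed by hand).
    pr, pf = sigmoid(da * real + dc), sigmoid(da * fake + dc)
    grad_a = np.mean((pr - 1) * real) + np.mean(pf * fake)
    grad_c = np.mean(pr - 1) + np.mean(pf)
    da -= lr * grad_a
    dc -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    pf = sigmoid(da * fake + dc)
    g_grad = (pf - 1) * da          # gradient at each fake sample
    gw -= lr * np.mean(g_grad * z)  # chain rule through g's parameters
    gb -= lr * np.mean(g_grad)

# The generator's output mean drifts toward the real-data mean of 4.0.
print(f"generator output mean after training: {gb:.2f}")
```

Real GANs play the same game with deep networks over millions of pixels, which is what makes the resulting faces photorealistic.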

The study, “AI-synthesized faces are indistinguishable from real faces and more trustworthy”, is published in the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

It urges that safeguards be put in place, which could include incorporating “robust watermarks” into synthesized images to protect the public from deep fakes.
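As a toy illustration of the watermarking idea, an identifying bit pattern can be hidden in an image’s least significant bits. This is not the robust scheme the researchers call for (a robust watermark must survive compression, resizing and editing, which typically requires transform-domain techniques), but it shows the basic embed/extract round trip:

```python
import numpy as np

def embed_watermark(pixels, bits):
    """Hide a bit string in the least significant bits of the first len(bits) pixels."""
    out = pixels.copy()
    flat = out.ravel()  # a view into the copy, so writes propagate
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the LSB, then set it to the watermark bit
    return out

def extract_watermark(pixels, n_bits):
    """Read the watermark back out of the least significant bits."""
    return [int(p & 1) for p in pixels.ravel()[:n_bits]]

# Demo: tag a random 8x8 "image" with an 8-bit pattern.
img = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
tagged = embed_watermark(img, mark)
assert extract_watermark(tagged, len(mark)) == mark
```

Each tagged pixel changes by at most one intensity level, so the mark is invisible to the eye, which is also why this naive version is trivially destroyed by any re-encoding.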

The researchers also say that “ethical guidelines for researchers, publishers, and media distributors” should govern the creation and distribution of synthesized images.

The four most (top row) and four least (bottom row) trustworthy faces, according to the study.
(Image: PNAS)

Ethical AI tools

Using AI responsibly is the “immediate challenge” facing the field of AI governance, the World Economic Forum says.

In its report, The AI Governance Journey: Development and Opportunities, the Forum says AI has been vital in progressing areas like innovation, environmental sustainability and the fight against COVID-19. But the technology is also “challenging us with new and complex ethical issues” and “racing ahead of our ability to govern it”.

The report looks at a range of practices, tools and systems for building and using AI.

These include labelling and certification schemes; external auditing of algorithms to reduce risk; regulation of AI applications; and greater collaboration between industry, government, academia and civil society to develop AI governance frameworks.

Republished with permission of the World Economic Forum under a Creative Commons license. Read the original article.
