Humans Perceive AI-Generated Faces To Be More Trustworthy Than Real Ones
By Mikelle Leow, 16 Feb 2022
The notion that looks can be deceiving has never rung truer than now, when synthetic faces that look just like someone’s aunt exist.
A startling new study of over 700 participants shows that people are beginning to have trouble telling real faces apart from machine-generated ones, and they’re even rating the non-existent figures as “more trustworthy” than their human counterparts.
The research comprised three groups: 315 participants were asked to pick out real faces from synthetic ones, 219 were given training in telling such faces apart, and 223 were asked to rate the faces’ perceived trustworthiness. The images shown were generated by artificial intelligence trained on a dataset of real photos portraying Black, East Asian, South Asian, and white men and women.
The first set of subjects identified the AI-generated pictures with only 48.2% accuracy, whereas the second group, which received training, performed slightly better at 59%.
The results from the final group—which was asked to judge faces on trustworthiness—might explain why the other subjects fared so poorly: while real faces scored an average trustworthiness rating of 4.48, AI-generated faces somehow appeared more “trustworthy,” yielding a marginally higher score of 4.82.
To be clear, artificial intelligence hasn’t fully mastered the art of faking faces. The researchers, who published their findings in the journal Proceedings of the National Academy of Sciences USA, highlighted that participants were still able to confidently identify some of the photos as computer-generated.
Still, the scientists were taken aback by how the subjects fared during their tests.
“We initially thought that the synthetic faces would be less trustworthy than the real faces,” comments Sophie Nightingale, one of the study’s co-authors.
As technology progresses, there’s a deep fear that synthetic faces could increasingly be “used for nefarious purposes,” warns co-author Hany Farid of the University of California, Berkeley. Given the response from the study, he wouldn’t be surprised if deceptions turn out to be “highly effective.”
The internet is no stranger to disinformation ploys and deepfaked porn fueled by such imagery, and experts are hoping that more can be done to help the general public separate truth from fiction.
Sam Gregory, director of programs strategy and innovation at deepfake-focused human rights organization WITNESS, who was not involved in the experiment, told Scientific American that detection tools are imperative as AI-generated imagery can be very convincing and “the public always has to understand when they’re being used maliciously.”
[via Scientific American and Gadgets 360, cover photo 194752647 © Max Mahey | Dreamstime.com]