When critics forecast the terrible future of artificial intelligence, they generally focus on the concept of AI “slop” that will overwhelm us with low-quality facsimiles of real human work. But what about when AI can do things so well that they’re indistinguishable from the real thing?
A new paper published in the journal Royal Society Open Science reveals where we are with AI-generated stills of human faces: They are now essentially indistinguishable from real photos. Worryingly, the paper shows that this can be true even for people who have been explicitly trained to find the telltale signs of AI generation.
The study was simple enough. Researchers in the United Kingdom gathered a control group of normal people and an experimental group of “super-recognizers” who have shown a high aptitude for facial recognition, but crucially have not received training in finding AI faces.
The control group correctly identified only 30% of photos as real or fake, which is significantly below chance. The super-recognizers also performed below chance, with a success rate of 41%.
Example stimuli from the experiment. Female synthetic faces (a), male synthetic faces (b), female real faces (c), and male real faces (d). The final row (e) contains some of the synthetic images used in the training task, each of which has rendering artefacts that were highlighted to participants. For example, the first image has poorly rendered hair, and the second image has three, rather than four, incisors.
Credit: Katie L. H. Gray, et al. / University of Reading
Those aren’t great scores, especially for people who have previously demonstrated an exceptional aptitude for recognizing faces. But then the researchers gave both groups just five minutes of training in spotting AI fakes. They highlighted features such as a tooth in the center of the mouth, oddly shaped hairlines, and unnatural skin texture, providing visual examples to guide participants on what to look for. Participants also received feedback on their initial guesses, enabling them to adjust their strategy during the real test.
On the second run, both post-training cohorts improved markedly. Regular participants correctly classified about 51% of faces, while the super-recognizers jumped to 64%.
These participants were primed to question the provenance of each photo, which raises the question: How likely would these same people be to detect a fake without having been told in advance to suspect it?
Another interesting wrinkle is how often participants incorrectly flagged real faces as fake. This suggests that AI can not only pass off fake faces as real but also erode our ability to trust genuine images of one another.
Worse, some research suggests that AI-generated faces could appear more trustworthy than real ones, which the researchers attribute to the fact that “synthesized faces tend to look more like average faces, which themselves are deemed more trustworthy.”
Of course, this is only a start, since AI still can’t put these faces into motion without tipping off the viewer. AI facial animation tools usually start with real photos, yet still can’t generate motion that looks convincing.