As if OpenAI’s claim this week that artificial intelligence (AI) may already be gaining consciousness wasn’t futuristic enough, researchers from Lancaster University now say that a generative adversarial network dubbed StyleGAN2 can create faces that are not only indistinguishable from the real thing, but so convincing that many people judge them more trustworthy than actual human faces.
“Our evaluation of the photo realism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable — and more trustworthy — than real faces,” Dr Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, said in a press release.
As part of the study, the researchers asked 223 participants to rate 128 faces, drawn from a set of 800 real and synthesized images, for trustworthiness on a scale of one to seven, with one being least trustworthy. On average, the synthetic faces were rated 7.7% more trustworthy than the real faces, a difference the researchers say is statistically significant.
Of course, given this realism and the ability of deepfakes to scramble our understanding of truth in multiple ways, the privacy implications and the potential harm this technology poses to individuals and society are certainly worrisome.
“Perhaps most pernicious is the consequence that in a digital world in which any image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question,” the researchers said, noting that to protect the public from deepfakes, safeguards such as “incorporating robust watermarks into the image- and video-synthesis networks that would provide a downstream mechanism for reliable identification” must be implemented.
h/t Futurism