Facebook is rolling out an artificial intelligence that it claims can spot deepfake images and even reverse-engineer them to figure out how they were made, and perhaps trace their creators.
Deepfakes are wholly artificial images created by an AI. Facebook’s new AI looks for similarities among a collection of deepfakes to see if they have a shared origin, seeking out telltale patterns such as small speckles of noise or slight oddities in the colour spectrum of an image.
By identifying these subtle fingerprints within an image, Facebook’s AI can discern details of how the neural network that created the image was designed, such as how large the model is or how it was trained.
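The basic idea of noise fingerprinting can be sketched in a few lines. This is a heavily simplified illustration, not Facebook’s method: the box-blur residual and the correlation measure below are assumptions standing in for the learned components a real system would use.

```python
import numpy as np

def noise_residual(image, kernel=3):
    """Crude noise residual: the image minus a local mean (box blur).
    A stand-in for the learned denoiser a real fingerprinting
    system would use; what remains is high-frequency detail,
    where generator artefacts tend to live."""
    h, w = image.shape
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel
    return image - blurred

def fingerprint_similarity(a, b):
    """Normalised correlation between two noise residuals; higher
    values suggest the images share a generative source."""
    ra = noise_residual(a).ravel()
    rb = noise_residual(b).ravel()
    ra -= ra.mean()
    rb -= rb.mean()
    denom = np.linalg.norm(ra) * np.linalg.norm(rb)
    return float(ra @ rb / denom) if denom else 0.0
```

Two images carrying the same hidden noise pattern would score a markedly higher similarity than images from unrelated sources, which is the signal a fingerprinting model learns to exploit.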
“I thought there’s no way this is going to work,” says Tal Hassner at Facebook. “How would we, just by looking at an image, be able to tell how many layers a deep neural network had, or what loss function it was trained with?”
Hassner and his colleagues tested the AI on a database of 100,000 deepfake images produced by 100 different generative models, each generating 1000 images. Some of those images were used to train the model, while others were held back and presented to it as images of unknown origin.
That helped test the AI on its ultimate goal. “What we’re doing is looking at an image and trying to estimate the design of the generative model that created it, even if we’ve never seen that model before,” says Hassner. He declined to share how accurate the AI’s estimates were, but says “we’re way better than random”.
“It’s a big step of progress for fingerprinting,” says Nina Schick, author of Deep Fakes and the Infocalypse. But she points out – as do Hassner and his colleagues – that the AI only works on images that have been fully artificially generated, whereas many deepfakes are videos created by pasting one face on to someone else’s body.
Schick also wonders how effective the AI would be outside lab environments, encountering deepfakes in the “wild”. “The sort of face detection models we see are broadly based on academic data sets and are deployed in controlled environments,” she says.
Hassner declined to say exactly how Facebook will use its new AI, but notes that this kind of work is a cat-and-mouse game against the people creating deepfakes. “We’re developing better identifying models while others are developing better and better generative models,” he says. “I don’t doubt that at some point there’ll be a method that will fool us completely.”