Facebook Is Working Actively on Deepfake Detection
Deepfakes aren’t a major issue on Facebook right now, but the company is continuing to invest in research to protect against future dangers. Its most recent project is a collaboration with Michigan State University (MSU) researchers, with the joint team developing a method for reverse-engineering deepfakes: analyzing AI-generated images to uncover distinguishing traits of the machine learning model that created them.
Track Criminal Actors
The research is valuable because it could aid Facebook in tracking down criminal actors who are distributing deepfakes across its many social media platforms. This content could include misinformation as well as non-consensual pornography, which is an all-too-common use of deepfake technology. The work is currently in the research stage and is not yet ready for deployment.
Hyperparameters of Models
Previous research in this area has been able to establish which known AI model generated a deepfake, but the new study, led by MSU’s Vishal Asnani, goes a step further by identifying the architectural characteristics of unknown models. These characteristics, referred to as hyperparameters, have to be tuned in each machine learning model, like parts in an engine. Together they leave a unique fingerprint on the finished image, which can then be used to identify the model that produced it.
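To make the idea concrete, here is a minimal sketch, in PyTorch, of what a fingerprint-then-hyperparameters pipeline could look like. This is not the Facebook/MSU code; the network shapes, head sizes, and hyperparameter counts below are invented purely for illustration.

```python
import torch
import torch.nn as nn

class FingerprintEstimator(nn.Module):
    """Hypothetical sketch: estimate the residual 'fingerprint' a generative
    model leaves on an image, then predict hyperparameters of that model
    from the fingerprint. All layer sizes are illustrative assumptions."""

    def __init__(self, num_arch_choices: int = 15, num_loss_choices: int = 10):
        super().__init__()
        # Predicts the fingerprint as an image-sized residual.
        self.fingerprint_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
        # Encodes the fingerprint into a compact feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Two heads: one for architecture choices (e.g. number of blocks,
        # normalization type), one for the loss functions used in training.
        self.arch_head = nn.Linear(128, num_arch_choices)
        self.loss_head = nn.Linear(128, num_loss_choices)

    def forward(self, image: torch.Tensor):
        fingerprint = self.fingerprint_net(image)  # model-specific residual
        features = self.encoder(fingerprint)
        return fingerprint, self.arch_head(features), self.loss_head(features)
```

The key design idea, as the article describes it, is the two-stage split: first isolate the fingerprint, then read the model's hyperparameters off that fingerprint rather than off the raw image.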
Identifying the properties of unknown models is critical, Facebook research lead Tal Hassner tells The Verge, because deepfake software is relatively easy to customize, which could let bad actors cover their tracks if investigators were trying to trace them.
How Does This Work?
“Assume a bad actor creates a large number of different deepfakes and distributes them to different users on various platforms,” Hassner explains. “If this is a brand-new AI model that no one has seen before, there isn’t much we could have said about it in the past. Now we can say, ‘Look, the photo that was posted here came from the same model as the picture that was uploaded there.’ And if we can confiscate the laptop or computer, we’ll be able to say, ‘This is the culprit.’”
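In code, linking uploads this way could be as simple as comparing fingerprint vectors pairwise. The following is a minimal sketch under the assumption that each uploaded image has already been reduced to a fingerprint vector (for instance, by a network like the one above); the similarity threshold is an invented placeholder.

```python
import torch
import torch.nn.functional as F

def group_by_source_model(fingerprints: torch.Tensor, threshold: float = 0.9):
    """Hypothetical sketch: given one fingerprint vector per uploaded image
    (shape [n, d]), link pairs of images whose fingerprints are similar
    enough to plausibly come from the same generative model."""
    n = fingerprints.size(0)
    normed = F.normalize(fingerprints, dim=1)
    similarity = normed @ normed.T  # pairwise cosine similarity
    links = []
    for i in range(n):
        for j in range(i + 1, n):
            if similarity[i, j] > threshold:
                links.append((i, j))  # images i and j likely share a source
    return links
```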
Hassner compares the work to forensic techniques that look for patterns in an image to determine which kind of camera was used to take it. “However, not everyone can build their own camera,” he argues. “On the other hand, anyone with a fair amount of skill and a normal computer can create their own deepfake model.”
Deepfake or Not?
The resulting algorithm can not only infer the characteristics of an unknown generative model, but also tell which known model made an image and whether the image is a deepfake in the first place. “We get state-of-the-art outcomes on standard benchmarks,” Hassner explains.
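Those two extra capabilities can be pictured as additional heads on a shared image encoder. The sketch below is an illustrative assumption, not the researchers' architecture; `backbone` stands in for any encoder producing a 128-dimensional feature vector, such as the one from the earlier sketch.

```python
import torch
import torch.nn as nn

class DeepfakeAnalyzer(nn.Module):
    """Hypothetical sketch of a multi-task head: one output says whether an
    image is AI-generated at all, another says which known generator (if
    any) produced it. Sizes are illustrative assumptions."""

    def __init__(self, backbone: nn.Module, num_known_models: int = 100):
        super().__init__()
        self.backbone = backbone                                  # image -> 128-d features
        self.detect_head = nn.Linear(128, 1)                      # real vs. deepfake
        self.attribute_head = nn.Linear(128, num_known_models)    # known-model ID

    def forward(self, image: torch.Tensor):
        features = self.backbone(image)
        fake_prob = torch.sigmoid(self.detect_head(features))
        model_logits = self.attribute_head(features)
        return fake_prob, model_logits
```

Sharing one backbone across detection, attribution, and hyperparameter estimation is a common multi-task design, since all three tasks plausibly rely on the same low-level generation artifacts.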
It’s crucial to remember, though, that even these cutting-edge results are far from foolproof. Last year, the winning algorithm in a Facebook deepfake detection competition was only able to recognize AI-manipulated videos 65.18 percent of the time. Algorithmic deepfake detection, according to the researchers, is still an “unsolved problem.”
Problems
Part of the reason for this is that generative AI is an extremely active field. New techniques are released every day, making it practically impossible for any filter to keep up.
Those in the field are well aware of this dynamic. When asked whether publishing this new fingerprinting method will lead to deepfakes that current methods cannot detect, Hassner agrees. “I would expect so,” he says. “It’s been a cat and mouse game for a long time, and it’s still a cat and mouse game.”