Facebook Can Track the Source of Deepfakes by Reverse Engineering

By Sudhanshu Sharma · June 18, 2021

Facebook Working Actively on Deepfakes

Deepfakes aren’t a major issue on Facebook right now, but the company is continuing to invest in research to protect against future dangers. Its most recent project is a collaboration with academics from Michigan State University (MSU): the joint team has developed a method for reverse-engineering deepfakes, analyzing AI-generated images to uncover distinguishing traits of the machine learning model that produced them.

Track Criminal Actors

The research is valuable because it could help Facebook track down bad actors who spread deepfakes across its social media platforms. That content can include misinformation as well as non-consensual pornography, an all-too-common application of deepfake technology. The work is still at the research stage and is not yet ready to be deployed.

Hyperparameters of Models


Previous research in this area has been able to determine which known AI model generated a given deepfake, but the new study, led by MSU’s Vishal Asnani, goes a step further by identifying the architectural characteristics of unknown models. These characteristics, referred to as hyperparameters, have to be tuned in every machine learning model, much like the parts of an engine, and they leave a unique fingerprint on the finished image that can then be used to identify its source.
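
To make the idea concrete, here is a minimal, hypothetical sketch of the two-stage approach described above: one network estimates the fingerprint left on an image, and a second maps that fingerprint to an estimate of the generator’s hyperparameters. This is not Facebook or MSU’s actual code; every class name, layer size, and the count of 15 hyperparameters below are assumptions for illustration only.

# Conceptual sketch only: estimate a fingerprint from an image, then predict
# hyperparameters of the generator that produced it. All names are hypothetical.
import torch
import torch.nn as nn

class FingerprintEstimator(nn.Module):
    """Extracts a residual 'fingerprint' the generator leaves on the image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, image):
        # The fingerprint is modeled as a low-magnitude residual image.
        return self.net(image)

class HyperparameterHead(nn.Module):
    """Maps a fingerprint to a vector describing the generator's architecture."""
    def __init__(self, num_hyperparams=15):  # 15 is an assumed count
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_hyperparams),
        )

    def forward(self, fingerprint):
        return self.encoder(fingerprint)

# Usage: run a suspect image through both stages.
estimator, head = FingerprintEstimator(), HyperparameterHead()
fake_image = torch.rand(1, 3, 128, 128)   # stand-in for a deepfake
hparams = head(estimator(fake_image))
print(hparams.shape)                      # -> torch.Size([1, 15])

In a system like the one described, hyperparameter estimates for images made by the same unseen generator should land close together, which is what makes attribution possible.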

Identifying the properties of unknown models is critical, Facebook research lead Tal Hassner tells The Verge, because deepfake software is relatively easy to adapt; that flexibility could let bad actors cover their tracks if investigators tried to trace them.

How does this work?

“Assume a bad actor creates a large number of different deepfakes and distributes them to different users on various platforms,” Hassner explains. If it is a brand-new AI model that no one has seen before, there was not much that could have been said about it in the past. Now the team can say, “Look, the photo that was posted here came from the same model as the picture that was uploaded there,” and, if the laptop or computer used to generate them can be seized, “This is the culprit.”

Hassner compares the work to forensic techniques that look for patterns in an image to determine which kind of camera was used to take it. “However, not everyone can build their own camera,” he argues. “On the other hand, anyone with a fair amount of skill and a normal computer can create their own deepfake model.”
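
Hassner’s “same model” comparison can be pictured with a toy grouping step: if the estimated hyperparameter vectors for two uploaded images are very similar, they probably came from the same generator. The numbers and the similarity threshold below are made up purely for illustration and are not from the paper.

# Toy illustration of grouping deepfakes by estimated hyperparameter vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these vectors were predicted for three deepfakes found on
# different platforms (values are invented).
image_a = np.array([0.9, 0.1, 0.8, 0.3])
image_b = np.array([0.88, 0.12, 0.79, 0.31])   # very close to image_a
image_c = np.array([0.1, 0.9, 0.2, 0.7])       # clearly different

THRESHOLD = 0.99  # assumed cutoff for "likely same source"
for name, other in [("B", image_b), ("C", image_c)]:
    sim = cosine_similarity(image_a, other)
    verdict = "same" if sim > THRESHOLD else "different"
    print(f"A vs {name}: similarity={sim:.3f} -> likely {verdict} source model")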

Is it a Deepfake or not?


Not only can the resulting algorithm detect the characteristics of a generative model, but it can also tell which known model made an image and whether it is a deepfake in the first place. “We get state-of-the-art outcomes on standard benchmarks,” Hassner explains.
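
One plausible way to read that claim is as a multi-task setup, where a single fingerprint representation feeds three outputs: a real-versus-fake score, attribution to a known generator, and hyperparameter estimates for unseen ones. The sketch below is an assumption about how such a head could look, not the published architecture, and all sizes are invented.

# Hypothetical multi-task head sharing one fingerprint representation.
import torch
import torch.nn as nn

class MultiTaskAttributionHead(nn.Module):
    def __init__(self, feat_dim=128, num_known_models=10, num_hyperparams=15):
        super().__init__()
        self.is_deepfake = nn.Linear(feat_dim, 1)                 # real vs. fake
        self.known_model = nn.Linear(feat_dim, num_known_models)  # which known generator
        self.hyperparams = nn.Linear(feat_dim, num_hyperparams)   # traits of unseen generators

    def forward(self, features):
        return {
            "deepfake_score": torch.sigmoid(self.is_deepfake(features)),
            "known_model_logits": self.known_model(features),
            "hyperparam_estimate": self.hyperparams(features),
        }

head = MultiTaskAttributionHead()
features = torch.rand(4, 128)   # stand-in fingerprint features for 4 images
outputs = head(features)
print({k: v.shape for k, v in outputs.items()})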

It’s crucial to remember, though, that even these cutting-edge results aren’t always accurate. Last year, the winning algorithm in Facebook’s deepfake detection competition was able to recognize AI-manipulated videos only 65.18 percent of the time. According to the researchers, detecting deepfakes with algorithms is still an “unsolved problem.”

Problems

Part of the reason for this is that generative AI is an extremely active field. New techniques are published every day, making it practically impossible for any filter to keep up.

Those in the field are well aware of this dynamic, and when asked whether publishing this new fingerprinting work will lead to deepfakes that current methods cannot detect, Hassner agrees. He says, “I would expect so. It’s been a cat and mouse game for a long time, and it’s still a cat and mouse game.”
