Intel and Intel Labs Develop New AI Methods to Restore Trust in Media

Highlights:

  • Intel Labs’ trusted media research team investigates new approaches to help determine the authenticity of media content.

  • Research areas include using AI and other methods for deepfake detection, deepfake source detection, and media authentication.

  • Intel joins industry leaders to develop standards and combat the rise of media deception and disinformation.

Almost everything we do today is digital, from working and learning at home to interacting with family and friends. At the same time, our trust in media has eroded due to deliberate deception via disinformation and manipulated content. Intel Labs’ trusted media research team is stepping in with several initiatives to help users restore trust in media by making it possible to distinguish real content from fake.

In partnership with Intel’s security research team and internal business units, Intel Labs is leading and coordinating its Trusted Media research efforts. The team is exploring how to incorporate detection technology and media provenance into Intel products and how customers can integrate these new technologies into their platforms. Two initial research areas, among other solutions, are deepfake detection, which identifies media that has been fabricated or modified using machine learning and AI, and media authentication technology, which confirms the validity of content.

Intel’s Innovation in Deepfake Detection 

As tools for creating video and animation become more sophisticated and easier to use, users can create increasingly realistic fake media. Take, for example, the deepfake “deeptomcruise” channel on TikTok, which millions of viewers visit to watch fake videos of Tom Cruise. While this may seem harmless, deepfakes are already causing harm in several ways, including illegal activity, identity theft, forgery, and propaganda.

The popular CBS program “60 Minutes” aired a story on deepfakes entitled “Synthetic Media: How deepfakes could soon change our world.” The report highlights the malicious use of deepfakes for criminal purposes and growing governmental concern.

Intel is working to combat this issue and build trust by developing algorithms and architectures that determine whether content has been manipulated using AI techniques. Intel has already incorporated deepfake detection technology, called FakeCatcher, on the Intel® Xeon® Scalable processor. FakeCatcher uses algorithms designed by Intel Research Scientist Ilke Demir and Binghamton University (State University of New York) Professor Umur Ciftci.

The technology uses remote photoplethysmography (rPPG) techniques to look for the subtle “blood flow” signal in the pixels of a video, collects these signals across multiple frames, and then runs the resulting signatures through a classifier, which determines whether the video in question is real or fake.
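To make the pipeline concrete, below is a minimal sketch of an rPPG-style check in Python. It is not Intel’s FakeCatcher implementation: face cropping is assumed to have happened upstream, and the frame rate, filter band, feature size, and toy logistic-regression classifier are all illustrative assumptions.

```python
# Minimal sketch of an rPPG-style deepfake check (illustrative only; not
# Intel's FakeCatcher, which uses far richer spatial-temporal signal maps).
import numpy as np
from scipy.signal import butter, filtfilt, periodogram
from sklearn.linear_model import LogisticRegression

FPS = 30  # assumed frame rate of the input video

def rppg_signal(frames):
    """Average the green channel over an (already face-cropped) region per
    frame, giving a temporal signal that tracks subtle blood-flow color shifts.
    frames: uint8 array of shape (num_frames, height, width, 3), RGB."""
    return frames[:, :, :, 1].astype(np.float64).mean(axis=(1, 2))

def heart_band_features(signal, fps=FPS, n_bins=32):
    """Band-pass to the human heart-rate band (~0.7-4 Hz) and summarize the
    spectrum as a fixed-length, normalized feature vector."""
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal - signal.mean())
    freqs, power = periodogram(filtered, fs=fps)
    band = power[(freqs >= 0.7) & (freqs <= 4.0)]
    idx = np.linspace(0, len(band) - 1, n_bins).astype(int)
    return band[idx] / (band.sum() + 1e-9)

# Toy demonstration: clips of real faces carry a faint periodic brightness
# change (a pulse); synthesized faces tend not to, which even a simple
# classifier can learn to separate on these features.
rng = np.random.default_rng(0)

def toy_clip(pulse_hz=None, n=300):
    frames = rng.integers(80, 120, size=(n, 8, 8, 3)).astype(np.float64)
    if pulse_hz:  # inject a synthetic "pulse" for the real-face case
        t = np.arange(n) / FPS
        frames += (2.0 * np.sin(2 * np.pi * pulse_hz * t))[:, None, None, None]
    return frames.astype(np.uint8)

X = [heart_band_features(rppg_signal(toy_clip(pulse_hz=1.2))) for _ in range(20)]
X += [heart_band_features(rppg_signal(toy_clip())) for _ in range(20)]
y = [1] * 20 + [0] * 20  # 1 = real (pulse present), 0 = fake
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The point here is only the shape of the pipeline: signal extraction, heart-band analysis, then classification.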

Further Research in Deepfake Detection

Additional novel deepfake research evaluated how to separate deepfakes from real videos and to discover how they were generated in the first place. Using deep learning (DL) approaches, the same researchers classify the generators behind deepfakes using convolutional neural networks (CNNs) that learn the residuals of each generator. The premise is that these residuals contain information about the source that can be disentangled with biological signals. The results indicate that this approach detects fake videos with 97.29 percent accuracy and identifies the source model behind fake videos with 93.39 percent accuracy.
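As a rough illustration of the residual idea (not the published architecture), the sketch below classifies a face crop among a handful of hypothetical candidate sources. The residual here is simply the image minus a blurred copy, which keeps the high-frequency fingerprint a generator tends to imprint; the layer sizes and NUM_SOURCES are assumptions.

```python
# Hedged sketch of source attribution on generator "residuals"; the exact
# residual definition and network in the research differ from this toy.
import torch
import torch.nn as nn

NUM_SOURCES = 5  # hypothetical: real + four known deepfake generators

class ResidualSourceCNN(nn.Module):
    """Small CNN that classifies which generator produced a face crop,
    operating on a high-frequency residual rather than raw pixels."""
    def __init__(self, num_sources=NUM_SOURCES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_sources)

    @staticmethod
    def residual(x):
        # Crude residual: image minus a blurred copy, leaving the
        # high-frequency traces a generator tends to imprint.
        blur = nn.functional.avg_pool2d(x, 5, stride=1, padding=2)
        return x - blur

    def forward(self, x):
        r = self.residual(x)
        return self.head(self.features(r).flatten(1))

model = ResidualSourceCNN()
logits = model(torch.randn(4, 3, 128, 128))  # batch of 4 face crops
print(logits.shape)  # torch.Size([4, 5]): one score per candidate source
```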

Additional research includes determining video authenticity using other biological priors. In addition to the biological signals behind FakeCatcher, the team also looks at several prominent eye and gaze features that deepfakes exhibit differently, analyzing both real and fake videos. Geometric, visual, metric, temporal, and spectral variations are aggregated into a gaze signature that a deep neural network (NN) uses to classify videos. This approach achieved 92.48 percent accuracy on the FaceForensics++ dataset and 99.27 percent accuracy on the DeeperForensics dataset, outperforming most deep and biological fake detectors.
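The sketch below illustrates, under simplified assumptions, how per-frame gaze estimates might be condensed into a fixed-length signature for a classifier. The gaze input format, the specific statistics, and the small network are hypothetical; the published work aggregates a much richer set of geometric, visual, metric, temporal, and spectral cues.

```python
# Illustrative "gaze signature" sketch (simplified; not the published method).
import numpy as np
import torch
import torch.nn as nn

def gaze_signature(gaze, n_spec=16):
    """gaze: array (num_frames, 4) of per-frame [left_yaw, left_pitch,
    right_yaw, right_pitch] angles from an upstream gaze estimator (assumed
    given). Returns a fixed-length signature mixing geometric, temporal,
    and spectral statistics."""
    left, right = gaze[:, :2], gaze[:, 2:]
    # Geometric cue: the two eyes of a real face should roughly agree.
    divergence = np.linalg.norm(left - right, axis=1)
    # Temporal cue: frame-to-frame jitter vs. smooth natural saccades.
    velocity = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    # Spectral cue: frequency content of the divergence signal.
    spec = np.abs(np.fft.rfft(divergence - divergence.mean()))[:n_spec]
    stats = [divergence.mean(), divergence.std(),
             velocity.mean(), velocity.std()]
    return np.concatenate([stats, spec / (spec.sum() + 1e-9)]).astype(np.float32)

# A small NN maps the 20-dim signature to real/fake logits.
classifier = nn.Sequential(nn.Linear(4 + 16, 32), nn.ReLU(), nn.Linear(32, 2))
sig = torch.from_numpy(gaze_signature(np.random.randn(300, 4)))
print(classifier(sig))  # two logits: real vs. fake
```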

Industry Efforts in Deepfake Detection

The industry is also working to develop technologies and approaches to meet these challenges. Intel’s Ilke Demir joined industry and academic researchers in contributing to a collaborative white paper presented at an exploratory workshop at the Institute for Pure and Applied Mathematics at the University of California, Los Angeles. The paper, entitled “DEEP FAKERY – An Action Plan,” was written by members of the mathematics, machine learning, cryptography, philosophy, social science, legal, and policy communities, and discusses the impact of deep fakery and how to respond to it.

Intel is also a member of the Coalition for Content Provenance and Authenticity (C2PA), which is focused on addressing misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content. The standards body is an alliance among Adobe, Arm, Intel, Microsoft, and Truepic that aims to give publishers, creators, and consumers the ability to trace the origin of different types of media.

A specification, informed by work conducted through industry organizations including the Project Origin Alliance and the Content Authenticity Initiative (CAI), has been published to enable global adoption of digital provenance techniques through secure, provenance-enabled applications. The specification tackles the problem at the point of media creation; however, since not all media goes through this certification process, other forms of end-to-end authentication are still needed to determine the validity of content.
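As a conceptual illustration only (not the C2PA manifest format or its signing scheme, which uses standardized manifests and asymmetric certificates), the toy sketch below shows the core provenance idea: each edit appends a signed record binding the new content hash to the previous record, so a consumer can verify an unbroken chain back to capture.

```python
# Toy provenance chain (conceptual only; NOT the C2PA specification).
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # stand-in for real asymmetric signing

def record(content: bytes, prev_sig: bytes) -> bytes:
    """Sign a record binding this version's hash to the previous record."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET, prev_sig + digest, "sha256").digest()

def verify(history, sigs) -> bool:
    """Recompute the chain; any tampered version breaks every later link."""
    prev = b""
    for content, sig in zip(history, sigs):
        if not hmac.compare_digest(record(content, prev), sig):
            return False
        prev = sig
    return True

versions = [b"raw photo", b"raw photo + crop", b"raw photo + crop + caption"]
sigs, prev = [], b""
for v in versions:
    prev = record(v, prev)
    sigs.append(prev)
print(verify(versions, sigs))   # True: intact provenance chain
versions[1] = b"tampered edit"
print(verify(versions, sigs))   # False: the chain breaks at the edit
```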

Conclusion

While a great deal of disinformation today draws on relatively low-fidelity misuses such as mislabeling and mischaracterization, a growing share of media disinformation relies on the widespread availability of video editing tools and AI to manipulate and create believable, lifelike media. Left unchecked, manipulated media can have a tangibly detrimental impact on today's society.

Intel is looking to leverage its strengths in algorithm and architecture design to offer solutions that can detect fraudulent media and bolster faith in news media as a reliable source of fact-based reporting. Be on the lookout for new developments in trusted media as Intel continues to innovate and find ways to provide critical value to our customers and end users.