Grant / February 2023

Digital Fingerprinting to Protect Against Deepfakes

As multimedia content on the internet continues to grow explosively, its veracity and authenticity are increasingly called into question. Deepfakes are synthetically generated multimedia content that replaces a person’s face or voice in ways that are becoming imperceptible to humans. The use of deepfakes for malicious purposes such as identity fraud, spreading misinformation, and creating social unrest poses a significant cybersecurity threat. This project aims to develop a method to establish whether a video containing a person of interest has been digitally manipulated or faked, using an “identity fingerprint” that incorporates multimodal authenticity checks. The fingerprint will include features that characterize the idiosyncratic facial, gestural, and vocal mannerisms of the person and can be used in models that detect deepfakes generated by a variety of techniques. We expect this work to have a positive impact by increasing the trust and safety of multimedia content.
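To make the idea of a multimodal identity fingerprint concrete, the following is a minimal sketch, not the project’s actual method: it assumes hypothetical per-modality embeddings (facial, gestural, vocal) from upstream encoders, concatenates and L2-normalises them into a single fingerprint, and flags a probe video whose fingerprint drifts too far from a reference fingerprint of the person of interest. The function names and the 0.8 similarity threshold are illustrative assumptions only.

```python
import numpy as np

def extract_fingerprint(facial: np.ndarray,
                        gestural: np.ndarray,
                        vocal: np.ndarray) -> np.ndarray:
    """Fuse per-modality embeddings into one identity fingerprint.

    Each argument stands in for a fixed-length embedding from a
    modality-specific encoder (hypothetical here); the real encoders
    and fusion strategy are described in the paper.
    """
    fingerprint = np.concatenate([facial, gestural, vocal])
    # L2-normalise so fingerprints can be compared with cosine similarity.
    return fingerprint / np.linalg.norm(fingerprint)

def is_likely_fake(reference: np.ndarray, probe: np.ndarray,
                   threshold: float = 0.8) -> bool:
    """Flag a probe video as manipulated if its fingerprint is too far
    from the person-of-interest's reference fingerprint."""
    similarity = float(np.dot(reference, probe))  # cosine similarity of unit vectors
    return similarity < threshold

# Illustrative usage with random stand-in embeddings.
rng = np.random.default_rng(0)
ref = extract_fingerprint(rng.normal(size=128), rng.normal(size=64), rng.normal(size=64))
probe = extract_fingerprint(rng.normal(size=128), rng.normal(size=64), rng.normal(size=64))
print("probe flagged as fake:", is_likely_fake(ref, probe))
```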

The researchers submitted a paper on this project to the IEEE Workshop on Information Forensics and Security (WIFS) 2023. A preprint of the paper and the supporting codebase are available below.