/ˈdiːpfeɪk/
The term "deepfake" combines "deep learning" and "fake," referring to AI-generated synthetic media that manipulates or fabricates audio and visual content. Deepfakes typically use neural networks to swap faces in videos or images, or to generate entirely fictional but photorealistic people.
While the technology has legitimate uses in entertainment and research, it is increasingly abused for political misinformation, financial fraud, non-consensual intimate imagery, identity theft, and harassment. Detection technology is therefore crucial for platforms seeking to protect users from synthetic media abuse.
Most deepfakes use Generative Adversarial Networks (GANs) or autoencoders. In the common autoencoder approach, a single shared encoder learns a compact representation of faces, while a separate decoder is trained for each identity; swapping decoders at inference time renders one person's appearance with another person's pose and expression. These systems are trained on images and video of the target person, and as the technology improves, their output becomes increasingly difficult to distinguish from authentic media.
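To make the autoencoder approach concrete, here is a minimal sketch of the shared-encoder, dual-decoder face-swap setup in PyTorch. The layer sizes, training loop, and random placeholder tensors are illustrative assumptions, not a production pipeline.

```python
# Minimal sketch of the shared-encoder / dual-decoder autoencoder behind
# classic face-swap deepfakes. Shapes, layer sizes, and the training data
# are illustrative placeholders, not a real architecture or dataset.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector (shared)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained only on faces of person A
decoder_b = Decoder()  # trained only on faces of person B

params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-ins for real training crops
faces_b = torch.rand(8, 3, 64, 64)

for step in range(3):  # real training runs for many thousands of steps
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode frames of person A, decode with B's decoder, so B's
# appearance is rendered with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```

Because the encoder is shared across both identities, it is forced to capture identity-agnostic attributes like pose, expression, and lighting, which is what allows the decoder swap to work.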
Detection methods analyze artifacts like unnatural blinking, lighting inconsistencies, audio-visual mismatches, and compression anomalies. AI-based detectors are trained to identify telltale signs of synthetic generation that may be imperceptible to humans.
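As a hedged illustration of such an AI-based detector, the sketch below trains a small binary CNN to score face crops as real or synthetic. The architecture, labels, and random placeholder data are assumptions for demonstration; real detectors train on large labeled corpora of authentic and manipulated footage.

```python
# Illustrative sketch of a learned deepfake detector: a small CNN that
# outputs a logit for "this face crop is synthetic". The network and the
# random placeholder batch are stand-ins, not a state-of-the-art model.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 16 -> 8
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 1),  # logit: > 0 means "likely synthetic"
)

opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: half real crops (label 0), half deepfake crops (label 1).
frames = torch.rand(16, 3, 64, 64)
labels = torch.cat([torch.zeros(8, 1), torch.ones(8, 1)])

for step in range(3):  # real training would run far longer
    logits = detector(frames)
    loss = loss_fn(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Score a new frame: sigmoid(logit) gives the estimated probability
# that the frame is synthetic.
with torch.no_grad():
    prob_fake = torch.sigmoid(detector(torch.rand(1, 3, 64, 64)))
print(f"P(synthetic) = {prob_fake.item():.2f}")
```

In practice such classifiers are one component of a pipeline that also inspects the hand-crafted cues listed above, since learned detectors can miss artifacts from generators they were not trained against.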