
DeNet: A Deepfake Visual Media Detection Network


Ranjith Kumar M. V, Ankit Prabhu, Shubham Asthana, Sharan Jaiswal, Madhavan P
Abstract

Over the past few years, a deep-learning-based technique for generating and manipulating fake visual media, known as Deepfake, has been used to create fake videos and images of celebrities and politicians, with the potential to deceive the human eye, which cannot reliably differentiate between fake and genuine visual media. Such adulterated or morphed deepfake media may be used against the targeted individual for unethical gain. Deep neural networks based on Generative Adversarial Networks (GANs) replace the facial expressions of the target individual with those of a donor individual to create the forged media. The authenticity of visual media can be tested effectively with deep learning models. This paper proposes a new method, DeNet, in which we train our own Convolutional Neural Network (CNN) on faces extracted from a deepfake dataset to learn facial features and predict whether a given face is a deepfake or an original. During testing, the trained network produced results that compare favourably with existing architectures. The intent of our work is to limit the adverse consequences instigated by deepfake media.
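
The full DeNet architecture is given in the paper itself; purely as an illustration of the approach described above, namely training a CNN on extracted face crops to classify each face as deepfake or original, a minimal sketch in Keras could look as follows. The layer configuration, the 128x128 input resolution, and the directory layout are assumptions for illustration, not the authors' reported architecture.

# Minimal sketch of a CNN face-crop classifier (real vs. deepfake).
# Hyperparameters (input size, filter counts, optimizer) are illustrative
# assumptions, not the DeNet configuration reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_detector(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # 1 = deepfake, 0 = original
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Hypothetical directory of pre-extracted face crops, organised as
    # faces/original/*.jpg and faces/deepfake/*.jpg.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "faces", image_size=(128, 128), batch_size=32, label_mode="binary")
    model = build_detector()
    model.fit(train_ds, epochs=10)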

Volume 12 | Issue 2

Pages: 792-799

DOI: 10.5373/JARDCS/V12I2/S20201098