Deepfake Detection Using a GAN-Based Deep Learning Technique
Keywords:
Voice Conversion, Speech Synthesis, Puppetmaster, Lip-Synching, Face Swap, Deep Learning, Deepfakes, Artificial Intelligence
References
Sound Forge, Available at: https://www.magix.com/gb/music/sound-forge/. Accessed: January 11, 2021.
J. F. Boylan, “Will deep-fake technology destroy democracy?,” The New York Times, Oct. 17, 2018.
C. Chan, S. Ginosar, T. Zhou, and A. A. Efros, “Everybody Dance Now,” in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 5933-5942.
K. M. Malik, H. Malik, and R. Baumann, “Towards vulnerability analysis of voice-driven interfaces and countermeasures for replay attacks,” in 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), 2019, pp. 523-528.
K. M. Malik, A. Javed, H. Malik, and A. Irtaza, “A light-weight replay detection framework for voice controlled IoT devices,” IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 5, pp. 982-996, 2020.
A. Javed, K. M. Malik, A. Irtaza, and H. Malik, “Towards protecting cyber-physical and IoT systems from single- and multi-order voice spoofing attacks,” Applied Acoustics, vol. 183, p. 108283, 2021.
M. Aljasem et al., “Secure Automatic Speaker Verification (SASV) System through sm-ALTP Features and Asymmetric Bagging,” IEEE Transactions on Information Forensics and Security, 2021.
L. Verdoliva, “Media forensics and deepfakes: an overview,” arXiv preprint arXiv:2001.06564, 2020.
R. Tolosana, R. Vera-Rodriguez, J. Fierrez, A. Morales, and J. Ortega-Garcia, “Deepfakes and beyond: A survey of face manipulation and fake detection,” arXiv preprint arXiv:2001.00179, 2020.
T. T. Nguyen, C. M. Nguyen, D. T. Nguyen, D. T. Nguyen, and S. Nahavandi, “Deep Learning for Deepfakes Creation and Detection,” arXiv preprint arXiv:1909.11573, 2019.
Y. Mirsky and W. Lee, “The Creation and Detection of Deepfakes: A Survey,” arXiv preprint arXiv:2004.11138, 2020.
L. Oliveira, “The current state of fake news,” Procedia Computer Science, vol. 121, no. C, pp. 817-825, 2017.
R. Chesney and D. Citron, “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics,” Foreign Aff., vol. 98, p. 147, 2019.
T. P. Gerber and J. Zavisca, “Does Russian propaganda work?,” The Washington Quarterly, vol. 39, no. 2, pp. 79-98, 2016.
P. N. Howard and B. Kollanyi, “Bots, StrongerIn, and Brexit: computational propaganda during the UK-EU referendum,” Available at SSRN 2798311, 2016.
O. Varol, E. Ferrara, C. A. Davis, F. Menczer, and A. Flammini, “Online human-bot interactions: Detection, estimation, and characterization,” in Eleventh International AAAI Conference on Web and Social Media, 2017.
A. Marwick and R. Lewis, “Media manipulation and disinformation online,” New York: Data & Society Research Institute, 2017.
R. Faris, H. Roberts, B. Etling, N. Bourassa, E. Zuckerman, and Y. Benkler, “Partisanship, propaganda, and disinformation: Online media and the 2016 US presidential election,” Berkman Klein Center Research Publication, vol. 6, 2017.
A. Hussain and S. Menon, “The dead professor and the vast pro-India disinformation campaign,” BBC News, Available at: https://www.bbc.com/news/world-asia-india-55232432. Accessed: August 11, 2021.
L. Benedictus, “Invasion of the troll armies: from Russian Trump supporters to Turkish state stooges,” The Guardian, 2016.
N. A. Mhiripiri and T. Chari, Media law, ethics, and policy in the digital age. IGI Global, 2017.
H. Huang, P. S. Yu, and C. Wang, “An introduction to image synthesis with generative adversarial nets,” arXiv preprint arXiv:1803.04469, 2018.
Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, “StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8789-8797.
S. Suwajanakorn, S. M. Seitz, and I. Kemelmacher-Shlizerman, “Synthesizing Obama: learning lip sync from audio,” ACM Trans. Graph., vol. 36, no. 4, pp. 95:1-95:13, 2017.
J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, and M. Nießner, “Face2Face: Real-time face capture and reenactment of RGB videos,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2387-2395.
O. Wiles, A. Sophia Koepke, and A. Zisserman, “X2face: A network for controlling face generation using images, audio, and pose codes,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 670-686.
B. Paris and J. Donovan, “Deepfakes and Cheap Fakes,” United States of America: Data & Society, 2019.
Faceswap: Deepfakes software for all, Available at: https://github.com/deepfakes/faceswap. Accessed: September 08, 2020.
DeepFaceLab, Available at: https://github.com/iperov/DeepFaceLab. Accessed: August 18, 2020.
A. Siarohin, S. Lathuilière, S. Tulyakov, E. Ricci, and N. Sebe, “First order motion model for image animation,” in Advances in Neural Information Processing Systems, 2019, pp. 7137-7147.
License
Copyright (c) 2024 International Journal of Scientific Research in Science and Technology
This work is licensed under a Creative Commons Attribution 4.0 International License.