Deepfake Detection Using ResNeXt-50 and LSTM Neural Networks
Keywords:
deepfake detection, Convolutional Neural Networks (CNN), ResNeXt-50, Long Short-Term Memory (LSTM), hybrid deep learning, temporal feature extraction, video forensics.
Abstract
Recent advances in deep learning have made it possible to create highly convincing synthetic media, popularly known as deepfake videos, which pose serious risks such as misinformation, political manipulation, financial fraud, and personal blackmail. In this research, we present a deep learning system for the automated detection of deepfake videos. In the proposed methodology, the ResNeXt-50 architecture extracts frame-level features from each video, which are then passed to a Long Short-Term Memory (LSTM) network that models temporal relationships across the entire sequence. To improve generalization, the classification head is built as a sequential block with ReLU activation and dropout regularization. We evaluate the model on benchmark deepfake datasets, and the results show that the ResNeXt-50 + LSTM model is highly effective at distinguishing real from manipulated videos. This work demonstrates the benefit of fusing spatial and temporal information to construct a reliable deepfake detector.
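To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of one way a ResNeXt-50 + LSTM detector can be assembled. The hidden size, dropout rate, number of output classes, and clip length are illustrative assumptions and are not values reported by the authors.

```python
# Minimal sketch of a ResNeXt-50 + LSTM deepfake detector.
# Hyperparameters below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
from torchvision import models


class ResNeXtLSTMDetector(nn.Module):
    def __init__(self, hidden_dim=512, dropout=0.4, num_classes=2):
        super().__init__()
        # Frame-level feature extractor: ResNeXt-50 (32x4d) with its
        # classification head removed, giving a 2048-d feature per frame.
        # (Pretrained ImageNet weights would typically be loaded here.)
        self.backbone = models.resnext50_32x4d(weights=None)
        self.backbone.fc = nn.Identity()
        # Temporal model: LSTM over the sequence of per-frame features.
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden_dim,
                            batch_first=True)
        # Sequential head with ReLU activation and dropout regularization.
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, clips):
        # clips: (batch, frames, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.reshape(b * t, c, h, w))  # (b*t, 2048)
        feats = feats.reshape(b, t, -1)                       # (b, t, 2048)
        _, (h_n, _) = self.lstm(feats)                        # final hidden state
        return self.head(h_n[-1])                             # (b, num_classes)


# Example: score a batch of two 16-frame clips at 224x224 resolution.
if __name__ == "__main__":
    model = ResNeXtLSTMDetector()
    logits = model(torch.randn(2, 16, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 2])
```

In this sketch the last LSTM hidden state summarizes the whole clip before classification; pooling over all time steps is an equally plausible reading of the described architecture.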
Data Availability Statement
No data availability statement is provided with this article; the authors do not indicate whether the datasets used in this study are publicly available.
License
Copyright (c) 2026 PAVAN KUMAR RODDA (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
All articles published in PUXplore: Multidisciplinary Journal of Engineering are licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.
Under this license, anyone may read, download, copy, distribute, and share the work for non-commercial purposes, provided that appropriate credit is given to the author(s) and the journal, and a link to the license is included.
No adaptations, derivatives, or modifications of the work are permitted without prior written permission from the copyright holder.
Authors retain copyright and grant the journal the right of first publication under this license.