International Journal For Multidisciplinary Research
E-ISSN: 2582-2160 | Impact Factor: 9.24
A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal
Adversarial Attacks and Defense in AI Systems: A Review of Cybersecurity Problems and New Protection
| Author(s) | Mr. Ayotunde Oyatomi |
|---|---|
| Country | United States |
| Abstract | Critical infrastructures in healthcare, finance, defense, and the Internet of Things have become heavily dependent on artificial intelligence (AI), which analyzes incoming data and makes decisions within seconds, yielding unprecedented efficiency and creativity. However, deploying AI in high-stakes scenarios also exposes these systems to malicious attacks that exploit their data-driven learning processes. This review systematically surveys research published between 2018 and 2025 and indexed in Scopus, IEEE Xplore, Springer, and the ACM Digital Library to assess the state of the art in adversarial attacks on AI systems and the corresponding defense mechanisms. By categorizing adversarial threats into four key areas, namely evasion, poisoning, model inversion, and backdoor attacks, the review highlights their rising sophistication and practical impact, particularly in autonomous vehicles, facial recognition, and medical diagnostics. A critical assessment of defense strategies, including adversarial training, robust optimization, input sanitization, anomaly detection, and ensembles, shows that no single defense strategy can ensure total security. The discussion situates these findings within the broader context of cybersecurity, highlighting the challenges of scalability, the limitations of accepted assessment processes, and the ethical concerns surrounding accountability and compliance. Emerging technologies, such as blockchain, federated learning, and quantum-resistant defenses, hold promise for enhancing resilience. The study concludes that adaptive, transdisciplinary strategies are necessary to secure AI systems and ensure their safe and trustworthy deployment in critical infrastructures. |
| Keywords | Adversarial machine learning, AI security, cybersecurity, adversarial defense, deep learning, robustness |
| Field | Computer > Artificial Intelligence / Simulation / Virtual Reality |
| Published In | Volume 7, Issue 6, November-December 2025 |
| Published On | 2025-12-02 |
| DOI | https://doi.org/10.36948/ijfmr.2025.v07i06.57627 |
| Short DOI | https://doi.org/hbdrfg |
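As a concrete illustration of the evasion attacks the abstract categorizes, the widely studied Fast Gradient Sign Method (FGSM) perturbs each input feature by a small budget `eps` in the direction that increases the model's loss. The sketch below is not from the reviewed paper; it is a minimal, self-contained example against a hypothetical logistic-regression model with made-up weights, chosen only to show how a tiny, bounded perturbation can flip a correct prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(x, y, w, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    x: feature list, y: true label in {-1, +1}, w: weight list,
    eps: L-infinity perturbation budget.
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    # Gradient of the logistic loss -log(sigmoid(margin)) w.r.t. x
    grad = [-y * sigmoid(-margin) * wi for wi in w]
    # Step each feature by eps in the direction that increases the loss
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Toy model (illustrative weights): predicts +1 when w.x > 0
w = [2.0, 0.0]
x = [0.3, 0.5]   # correctly classified as +1
x_adv = fgsm_attack(x, 1, w, eps=0.4)

print(dot(w, x) > 0)      # True: the clean input is classified correctly
print(dot(w, x_adv) > 0)  # False: the bounded perturbation flips the prediction
```

Defenses such as the adversarial training discussed in the review work by generating perturbed inputs like `x_adv` during training and optimizing the model to classify them correctly as well.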
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.