International Journal For Multidisciplinary Research

E-ISSN: 2582-2160     Impact Factor: 9.24

A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal


Adversarial Attacks and Defense in AI Systems: A Review of Cybersecurity Problems and New Protection

Author(s) Mr. Ayotunde Oyatomi
Country United States
Abstract Healthcare, finance, defense, and the Internet of Things are critical infrastructures that have become heavily dependent on artificial intelligence (AI), which enables them to analyze incoming data and make decisions within seconds, yielding unprecedented efficiency and innovation. However, deploying AI in high-stakes scenarios also exposes these systems to malicious attacks that exploit their data-driven learning processes. This review considers research published between 2018 and 2025, indexed in Scopus, IEEE Xplore, Springer, and the ACM Digital Library, to systematically survey the state of the art in adversarial attacks on AI systems and the corresponding defense mechanisms. By categorizing threats into four key classes: evasion, poisoning, model inversion, and backdoor attacks, the review highlights their rising sophistication and practical reach, particularly in autonomous vehicles, facial recognition, and medical diagnostics. A critical assessment of defense strategies, including adversarial training, robust optimization, input sanitization, anomaly detection, and ensemble methods, demonstrates that no single defense can ensure total security. The discussion situates these findings within the broader context of cybersecurity, highlighting the challenges of scalability, the limitations of accepted evaluation practices, and the ethical concerns surrounding accountability and compliance. Emerging technologies, such as blockchain, federated learning, and quantum-resistant defenses, hold promise for enhancing resilience. The study concludes that clear, adaptive, and transdisciplinary strategies are necessary to secure AI systems and ensure their safe and trustworthy integration into critical infrastructures.
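To make the evasion-attack category concrete, the following is a minimal, illustrative sketch (not taken from the paper under review) of the Fast Gradient Sign Method, a canonical evasion attack that the adversarial-training defense is built around. It assumes a toy logistic-regression classifier with hand-picked weights `w`, `b` and input `x`; the gradient of the logistic loss with respect to the input is computed analytically.

```python
import numpy as np

def sigmoid(z):
    """Logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (p - y) * w, where p is the predicted
    probability of class 1. FGSM adds eps * sign(gradient) to x.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)  # bounded perturbation, ||.||_inf <= eps

# Toy model (hypothetical weights) and a correctly classified point.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0  # true label; model score w @ x + b = 1.5 > 0, so class 1

x_adv = fgsm_perturb(x, y, w, b, eps=1.0)
# The adversarial score w @ x_adv + b = -1.5 < 0: the small, bounded
# perturbation flips the prediction, which is the essence of evasion.
```

Adversarial training, the first defense the review assesses, simply folds such perturbed examples back into the training set so the model learns to classify them correctly.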
Keywords Adversarial machine learning, AI security, cybersecurity, adversarial defense, deep learning, robustness
Field Computer > Artificial Intelligence / Simulation / Virtual Reality
Published In Volume 7, Issue 6, November-December 2025
Published On 2025-12-02
DOI https://doi.org/10.36948/ijfmr.2025.v07i06.57627
Short DOI https://doi.org/hbdrfg
