International Journal For Multidisciplinary Research

E-ISSN: 2582-2160     Impact Factor: 9.24

A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal


Explainable AI (XAI) for Spam Detection: The “Lens” Approach

Author(s) Mr. Raghul Sachin R, Mr. Arun Karthik V, Prof. Dr. Niveditha S
Country India
Abstract This paper presents a methodology for transparent spam detection that combines a Random Forest classifier with Explainable Artificial Intelligence (XAI). We first confirm the classifier's high predictive accuracy and then address black-box opacity by applying Local Interpretable Model-agnostic Explanations (LIME) for semantic and contextual transparency. Our method targets false positives, a persistent challenge for machine learning models. The framework begins by training a Random Forest to learn the textual distribution of a standard SMS and email spam dataset, balanced via SMOTE. At the heart of our self-adaptive method is a Human-in-the-Loop (HITL) override feature operating within this transparent decision space: the model's reasoning, which identifies the precise linguistic cues behind a spam prediction, guides the construction of context-specific whitelist rules. In this way, misclassified emails are not only recovered but the filter itself is refined, demonstrating a practical application of explainable models to reliable cybersecurity.
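To illustrate the kind of pipeline the abstract describes, the sketch below trains a Random Forest on a tiny hypothetical corpus (standing in for the paper's SMS/email dataset; SMOTE balancing is omitted for brevity) and produces a LIME-style explanation. The local surrogate is hand-rolled here rather than taken from the `lime` package: the input text is perturbed by masking words, the classifier's spam probabilities on the perturbations are fit with a weighted linear model, and the resulting coefficients indicate which words pushed the prediction toward spam. This is a minimal sketch of the general technique, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Toy corpus (hypothetical; the paper uses a standard SMS/email spam dataset).
texts = [
    "win a free prize now", "claim your free cash reward",
    "urgent winner claim prize", "free entry win cash now",
    "meeting at noon tomorrow", "see you at lunch today",
    "project update attached here", "call me when you arrive",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = spam, 0 = ham

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

def explain(text, n_samples=500, seed=0):
    """LIME-style local surrogate: mask words at random, query the
    classifier's spam probability on each perturbation, then fit a
    similarity-weighted linear model over the word-presence masks."""
    rng = np.random.default_rng(seed)
    words = text.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # keep the unperturbed text in the sample
    perturbed = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    probs = clf.predict_proba(vec.transform(perturbed))[:, 1]
    # Perturbations closer to the original text get higher weight.
    weights = np.exp(-(1 - masks.mean(axis=1)))
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    # Words sorted by the magnitude of their local contribution.
    return sorted(zip(words, surrogate.coef_), key=lambda p: -abs(p[1]))

for word, weight in explain("win a free prize now")[:3]:
    print(f"{word:>6}  {weight:+.3f}")
```

In a full HITL workflow, a positive coefficient on a benign domain-specific word (e.g. "prize" in a legitimate contest newsletter) is exactly the cue a human reviewer would turn into a context-specific whitelist rule, which is the refinement loop the abstract outlines.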
Keywords Explainable AI, LIME, Spam Detection, Human-in-the-Loop, Random Forest, Cybersecurity
Field Computer > Artificial Intelligence / Simulation / Virtual Reality
Published In Volume 8, Issue 3, May-June 2026
Published On 2026-05-08
