International Journal For Multidisciplinary Research
E-ISSN: 2582-2160
Explainable AI (XAI) for Spam Detection: The “Lens” Approach
| Author(s) | Mr. Raghul Sachin R, Mr. Arun Karthik V, Prof. Dr. Niveditha S |
|---|---|
| Country | India |
| Abstract | This paper presents a methodology for transparent spam detection based on a Random Forest classifier combined with Explainable Artificial Intelligence (XAI). We first confirm the classifier’s high predictive accuracy and then address the problem of black-box opacity by applying Local Interpretable Model-agnostic Explanations (LIME) for semantic and contextual transparency. Our method targets false positives, a persistent challenge for machine-learning spam filters. The framework begins by training a Random Forest to learn the textual patterns of a standard SMS and email spam dataset, balanced via SMOTE. The heart of our self-adaptive method is a Human-in-the-Loop (HITL) override feature operating within this transparent decision space. Exposing the model’s reasoning, which identifies the precise linguistic cues behind a spam prediction, allows users to craft context-specific whitelist rules. In this way, misclassified emails are not only recovered but the filter itself is refined, demonstrating the practical value of explainable models for real, reliable cybersecurity. |
| Keywords | Explainable AI, LIME, Spam Detection, Human-in-the-Loop, Random Forest, Cybersecurity |
| Field | Computer > Artificial Intelligence / Simulation / Virtual Reality |
| Published In | Volume 8, Issue 3, May-June 2026 |
| Published On | 2026-05-08 |
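
The pipeline the abstract describes (a Random Forest over TF-IDF text features, explained by a LIME-style local surrogate) can be illustrated with a minimal sketch. The toy dataset, helper names, and parameters below are assumptions for illustration only; the perturbation-based explainer is a hand-rolled simplification of LIME's word-masking approach, not the `lime` package itself, and the SMOTE balancing and HITL whitelist steps from the paper are omitted for brevity.

```python
# Minimal sketch (assumed toy data and helper names): train a Random Forest
# on TF-IDF features of short messages, then produce a LIME-style local
# explanation by masking random word subsets and fitting a weighted linear
# surrogate to the classifier's spam probability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Hypothetical mini-corpus standing in for the SMS/email spam dataset
texts = [
    "win a free prize now claim cash",
    "urgent winner claim your free reward",
    "free cash offer click now",
    "are we still meeting for lunch today",
    "see you at the office tomorrow morning",
    "can you send the report by friday",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = ham

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

def explain(text, n_samples=500, rng=np.random.default_rng(0)):
    """LIME-style explanation: randomly remove words, query the classifier's
    spam probability on each perturbed text, and fit a locally weighted
    linear surrogate whose coefficients score each word's contribution."""
    words = text.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0, :] = 1  # keep the unperturbed instance in the sample
    perturbed = [" ".join(w for w, keep in zip(words, m) if keep) for m in masks]
    probs = clf.predict_proba(vec.transform(perturbed))[:, 1]
    # weight each sample by its similarity to the original (fraction kept)
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    # words sorted by the magnitude of their contribution to the spam score
    return sorted(zip(words, surrogate.coef_), key=lambda p: -abs(p[1]))

for word, weight in explain("claim your free cash prize now")[:3]:
    print(f"{word:>6s}  {weight:+.3f}")
```

In the paper's HITL step, the cues surfaced by such an explanation (e.g. a high-weight word that is benign in the user's context) would drive a context-specific whitelist rule that overrides future misclassifications.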