International Journal For Multidisciplinary Research
E-ISSN: 2582-2160
Explainable Brain Tumor Detection Using Deep Learning Models with Quantitative Explainability Metrics
| Author(s) | Alvita Mary D'Silva, Prof. Shwetha S, Leesha H U, Monith Monnappa U M, Dharshan B R |
|---|---|
| Country | India |
| Abstract | Brain tumor detection using Magnetic Resonance Imaging (MRI) plays a crucial role in early diagnosis and treatment planning. Deep learning models have demonstrated high accuracy in automating this task; however, their black-box nature limits clinical trust. This research presents a comprehensive and explainable brain tumor detection framework using four state-of-the-art deep learning architectures: Xception, ResNet50, DenseNet121, and EfficientNetB4. In addition to conventional performance metrics, explainability is integrated using Gradient-weighted Class Activation Mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP) to provide both region-level and pixel-level interpretability. Furthermore, quantitative explainability metrics including the Insertion Test, Deletion Test, Sensitivity-N, Average Confidence Drop, and Average Confidence Gain are employed to objectively evaluate explanation faithfulness. Experimental results on MRI datasets demonstrate that Xception and EfficientNetB4 achieve superior classification performance, while the explainability analysis offers deeper insights into model reliability and clinical trustworthiness. The proposed framework enhances the transparency, robustness, and clinical applicability of AI-based brain tumor detection systems. |
| Keywords | Brain Tumor Detection, Deep Learning, MRI Images, Explainable AI, Grad-CAM, Quantitative Explainability Metrics |
| Field | Computer > Artificial Intelligence / Simulation / Virtual Reality |
| Published In | Volume 8, Issue 1, January-February 2026 |
| Published On | 2026-02-18 |
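No code accompanies this abstract page. Purely as an illustration of two of the techniques the abstract names, the sketch below shows how a Grad-CAM heat map and a Deletion Test score can be computed for a Keras image classifier. The function names, the `steps` parameter, and the zero-valued deletion baseline are assumptions of this sketch, not details taken from the paper; the SHAP, Insertion Test, Sensitivity-N, and confidence drop/gain components are analogous but omitted here.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    """Grad-CAM heat map for one image (H, W, C), scaled to [0, 1]."""
    # Model mapping the input to (last conv feature maps, class predictions).
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)      # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # global-average-pooled grads
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)                             # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

def deletion_auc(model, image, cam, steps=20, baseline=0.0):
    """Deletion Test: zero out pixels from most to least important and
    integrate the predicted class probability over the deletion fraction."""
    h, w, c = image.shape
    cam_up = tf.image.resize(cam[..., None], (h, w)).numpy()[..., 0]
    order = np.argsort(cam_up.ravel())[::-1]          # most important pixels first
    class_index = int(np.argmax(model.predict(image[None, ...], verbose=0)[0]))
    flat = image.reshape(-1, c)
    probs = []
    for i in range(steps + 1):
        k = int(len(order) * i / steps)
        masked = flat.copy()
        masked[order[:k]] = baseline                  # delete the top-k pixels
        p = model.predict(masked.reshape(1, h, w, c), verbose=0)[0, class_index]
        probs.append(float(p))
    probs = np.asarray(probs)
    return float(((probs[:-1] + probs[1:]) / 2).mean())  # trapezoidal AUC on [0, 1]

# Example wiring with a stock Keras Xception classifier; "block14_sepconv2_act"
# is the name of its last convolutional activation layer.
# model = tf.keras.applications.Xception(weights="imagenet")
# cam = grad_cam(model, preprocessed_image, "block14_sepconv2_act")
# print(deletion_auc(model, preprocessed_image, cam))
```

Under the Deletion Test, a faithful explanation makes the predicted class probability collapse quickly as the highest-ranked pixels are removed, so a lower AUC indicates a more faithful saliency map; the Insertion Test runs the same loop in reverse, where a higher AUC is better.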
A CrossRef DOI is assigned to each research paper published in this journal; the IJFMR DOI prefix is 10.36948/ijfmr.
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.