International Journal For Multidisciplinary Research
E-ISSN: 2582-2160 • Impact Factor: 9.24
A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal
Systematic Literature Review On Explainable AI In Finance: Methods, Applications, And Research Gaps
| Author(s) | Mr. Alok Bhardwaj |
|---|---|
| Country | India |
| Abstract | This systematic literature review (SLR) of Explainable Artificial Intelligence (XAI) in finance applies bibliometric mapping and thematic synthesis to 162 Scopus-indexed publications from 2010–2025. AI and machine learning are transforming credit scoring, fraud detection, algorithmic trading, and risk management, yet "black-box" models continue to hinder adoption by financial organizations. Techniques such as SHAP, LIME, Integrated Gradients, and Layer-wise Relevance Propagation improve model transparency, accountability, and confidence, but their use in finance remains fragmented and conceptually immature. Following the PRISMA-2020 and Kitchenham (2007) methodologies, peer-reviewed publications were identified, screened, and synthesized to map the conceptual, methodological, and application landscape of XAI in finance. Publication growth accelerated sharply after 2018, coinciding with regulatory frameworks such as the EU AI Act and rising institutional demand for explainability. Thematic clustering reveals three primary research areas: methodological advances in post-hoc versus intrinsic interpretability; domain applications in credit risk, fraud detection, and portfolio optimization; and governance concerns including fairness, auditability, and regulatory compliance. SHAP and LIME dominate post-hoc approaches, while decision trees, generalized additive models (GAMs), and emerging hybrid frameworks balance accuracy and transparency among intrinsically interpretable techniques. The review identifies barriers to institutional acceptance: a lack of explainability standards, overreliance on post-hoc attribution methods, insufficient integration of behavioral trust theory, and inadequate regulatory audit procedures. To address these gaps, the review proposes a framework linking XAI, perceived trust, adoption, and financial outcomes, in which trust mediates the effect of algorithmic transparency on decision quality, compliance, and sustainability. This framing positions explainability as both a technological and an organizational capability for trustworthy AI governance. The study's contributions are threefold: (1) it adds reliable bibliometric evidence to a fragmented subject; (2) it introduces a trust-based model of XAI adoption; and (3) it connects explainability to ethical and sustainable finance. The study concludes that, for responsible digital transformation, XAI enables efficient, auditable, and socially accountable algorithmic decisions in finance. |
| Keywords | Explainable Artificial Intelligence (XAI); Financial Technology; Trust; Interpretability; Model Transparency; PRISMA Systematic Review; Ethical AI; Responsible Finance. |
| Field | Sociology > Banking / Finance |
| Published In | Volume 7, Issue 5, September-October 2025 |
| Published On | 2025-10-28 |
| DOI | https://doi.org/10.36948/ijfmr.2025.v07i05.58963 |
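The post-hoc attribution methods the abstract highlights (SHAP in particular) are grounded in Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution across all coalitions of the other features. As a minimal illustration, not taken from the reviewed paper, the sketch below computes exact Shapley attributions by brute force for a toy linear "credit scoring" model; the weights, feature values, and baseline are all illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for prediction f(x).

    Brute-forces every coalition of features: features in a coalition take
    their actual value from x, the rest fall back to the baseline. Feasible
    only for a handful of features (2^n coalitions), which is why practical
    tools like SHAP use model-specific or sampling approximations.
    """
    n = len(x)

    def value(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy linear model: weights for income, debt ratio, credit-history length
weights = [0.5, -1.2, 0.3]
model = lambda z: sum(w * v for w, v in zip(weights, z))

x = [2.0, 1.0, 4.0]          # hypothetical applicant's features
baseline = [1.0, 1.0, 1.0]   # hypothetical "average applicant" reference

phi = shapley_values(model, x, baseline)
# For a linear model, phi[i] reduces to weights[i] * (x[i] - baseline[i]),
# and the attributions sum to model(x) - model(baseline) (local accuracy).
```

For this linear case the attributions are [0.5, 0.0, 0.9], summing to the difference between the applicant's score and the baseline score, which is the "local accuracy" property that makes Shapley-based explanations attractive for audit and compliance settings.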
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.