International Journal For Multidisciplinary Research

E-ISSN: 2582-2160     Impact Factor: 9.24

A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal


Systematic Literature Review On Explainable AI In Finance: Methods, Applications, And Research Gaps

Author(s) Mr. Alok Bhardwaj
Country India
Abstract This systematic literature review (SLR) of Explainable Artificial Intelligence (XAI) in finance applies bibliometric mapping and thematic synthesis to 162 Scopus-indexed publications from 2010–2025. AI and machine learning are transforming credit scoring, fraud detection, algorithmic trading, and risk management, yet the opacity of "black-box" models still hinders their adoption by financial organizations. Techniques such as SHAP, LIME, Integrated Gradients, and Layer-wise Relevance Propagation improve model transparency, accountability, and confidence, but their use in finance remains fragmented and theoretically immature.
Following the PRISMA-2020 and Kitchenham (2007) methodologies, peer-reviewed publications were identified, screened, and synthesized to map the conceptual, methodological, and application landscape of XAI in finance. Publication output grew sharply after 2018, coinciding with legislative frameworks such as the EU AI Act and rising institutional demand for explainability. Thematic clustering reveals three primary research areas: methodological advances in post-hoc versus intrinsic interpretability; domain applications in credit risk, fraud detection, and portfolio optimization; and governance concerns including fairness, auditability, and regulatory compliance. Among intrinsic interpretability techniques, decision trees, GAMs, and emerging hybrid frameworks balance accuracy and transparency, whereas SHAP and LIME dominate post-hoc procedures.
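To make the post-hoc attribution idea concrete: SHAP is grounded in the game-theoretic Shapley value, which distributes a model's prediction across its input features. The following minimal sketch (an illustration, not from the reviewed literature) computes exact Shapley values for a hypothetical toy credit-scoring function with three features; the model, feature values, and baseline are all assumptions for demonstration.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical toy "credit score": linear terms plus one interaction.
    return 0.5 * x[0] + 0.3 * x[1] + 0.2 * x[0] * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley attribution: features outside the coalition S are
    held at their baseline values (the standard SHAP value function)."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i  = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += w * (f(with_i) - f(without))
        phis.append(phi)
    return phis

x, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)
# Efficiency property: attributions sum to f(x) - f(baseline).
```

Note how the interaction term 0.2·x0·x2 is split equally between features 0 and 2, which is exactly the symmetry that makes Shapley-based attributions attractive for audit and compliance reporting. Exact enumeration is exponential in the number of features; libraries such as SHAP use sampling and model-specific approximations instead.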
The review highlights barriers to institutional acceptance: a lack of explainability standards, overreliance on post-hoc attribution methods, insufficient integration of behavioral trust theory, and poor alignment with regulatory audit procedures. To address these gaps, the review proposes a framework linking XAI, perceived trust, adoption, and financial outcomes, in which trust mediates the effect of algorithmic transparency on decision quality, compliance, and sustainability. This framing positions explainability as both a technological and an organizational capability for trustworthy AI governance.
The study makes three contributions: (1) it adds reliable bibliometric evidence to a fragmented field, (2) it introduces a trust-based model of XAI adoption, and (3) it connects explainability to ethical and sustainable finance. The study concludes that, for responsible digital transformation, XAI enables efficient, auditable, and socially accountable algorithmic decision-making in finance.
Keywords Explainable Artificial Intelligence (XAI); Financial Technology; Trust; Interpretability; Model Transparency; PRISMA Systematic Review; Ethical AI; Responsible Finance.
Field Sociology > Banking / Finance
Published In Volume 7, Issue 5, September-October 2025
Published On 2025-10-28
DOI https://doi.org/10.36948/ijfmr.2025.v07i05.58963
