International Journal For Multidisciplinary Research
E-ISSN: 2582-2160
Structural Hallucination in LLMs: A Formal Characterization and Mitigation Method
| Author(s) | Mohan Siva Krishna Konakanchi |
|---|---|
| Country | United States |
| Abstract | Large Language Models (LLMs) have advanced natural language processing, yet they remain prone to hallucinations: outputs that appear plausible but lack factual grounding. This paper identifies a novel category, "structural hallucinations," in which generated content mimics the syntactic and semantic structure of truthful information without any verifiable basis. We formally characterize these hallucinations using graph-theoretic semantic representations and probabilistic verifiability measures. To mitigate them, we propose a trust-metric-based federated learning (FL) framework that ensures integrity and accountability across distributed data silos. The framework introduces a trust score that balances explainability and performance, optimized via multi-objective reinforcement learning, and we quantify the resulting trade-off, enabling precise control over hallucination risk. Extensive experiments on benchmark datasets demonstrate a 45% reduction in structural hallucinations while maintaining high task performance. This work enhances the reliability of LLMs, with implications for high-stakes applications. (Illustrative sketches of the verifiability measure and the trust-weighted aggregation follow the table below.) |
| Keywords | Large Language Models, Structural Hallucination, Federated Learning, Trust Metrics, Explainability, Performance Trade-off |
| Field | Engineering |
| Published In | Volume 2, Issue 5, September-October 2020 |
| Published On | 2020-09-10 |
| DOI | https://doi.org/10.36948/ijfmr.2020.v02i05.61072 |
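The abstract's graph-theoretic characterization can be made concrete with a minimal sketch: generated text is mapped to a semantic graph of (subject, relation, object) triples, and a verifiability score is the fraction of triples groundable in a trusted reference graph. This is a hypothetical Python illustration under stated assumptions; the triple representation, the `verifiability_score` function, and the 0.5 threshold are illustrative choices, not the paper's actual formalism.

```python
# Hypothetical sketch, not the paper's implementation: a semantic graph is
# modeled as a set of (subject, relation, object) triples, and verifiability
# is the fraction of generated triples present in a trusted knowledge graph.

from typing import Set, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def verifiability_score(generated: Set[Triple], knowledge: Set[Triple]) -> float:
    """Fraction of generated triples grounded in the reference graph."""
    if not generated:
        return 1.0  # an empty output asserts nothing unverifiable
    grounded = sum(1 for t in generated if t in knowledge)
    return grounded / len(generated)

def is_structural_hallucination(generated: Set[Triple],
                                knowledge: Set[Triple],
                                threshold: float = 0.5) -> bool:
    """Flag output that is well-formed in structure but falls below the
    verifiability threshold in content (the illustrative criterion here)."""
    return verifiability_score(generated, knowledge) < threshold

# One grounded triple plus one fabricated triple yields a score of 0.5.
kg = {("Paris", "capital_of", "France")}
gen = {("Paris", "capital_of", "France"), ("Paris", "capital_of", "Spain")}
print(verifiability_score(gen, kg))          # 0.5
print(is_structural_hallucination(gen, kg))  # False (0.5 is not below 0.5)
```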
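Similarly, the trust-metric-based FL framework can be sketched as trust-weighted federated averaging, where each client's trust score is a convex combination of an explainability score and a performance score, with `alpha` setting the trade-off the abstract quantifies. The scoring formula, the `alpha` parameter, and the aggregation rule below are assumptions for illustration; the paper's actual optimization uses multi-objective reinforcement learning, which is not reproduced here.

```python
# Hypothetical sketch, not the paper's implementation: trust-weighted
# federated averaging. Each client's trust score is a convex combination of
# explainability and performance (alpha sets the trade-off), and model
# updates are aggregated in proportion to trust.

import numpy as np

def trust_score(explainability: float, performance: float,
                alpha: float = 0.5) -> float:
    """Convex combination; larger alpha tilts the score toward explainability."""
    return alpha * explainability + (1.0 - alpha) * performance

def trust_weighted_aggregate(updates: list, trusts: list) -> np.ndarray:
    """FedAvg-style aggregation with normalized trust scores as weights."""
    weights = np.asarray(trusts, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Three clients: the high-trust client dominates the aggregate.
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
trusts = [trust_score(0.9, 0.8), trust_score(0.2, 0.3), trust_score(0.5, 0.5)]
print(trust_weighted_aggregate(updates, trusts))  # approx [0.6875, 0.3125]
```

Raising `alpha` favors clients whose updates score higher on explainability, one plausible mechanism for trading raw performance against hallucination control in the spirit of the abstract's trade-off analysis.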