International Journal For Multidisciplinary Research

E-ISSN: 2582-2160     Impact Factor: 9.24



Structural Hallucination in LLMs: A Formal Characterization and Mitigation Method

Author(s) Mohan Siva Krishna Konakanchi
Country United States
Abstract Large Language Models (LLMs) have advanced natural language processing, yet they suffer from hallucinations: outputs that appear plausible but lack factual grounding. This paper identifies a novel category, "structural hallucinations," in which generated content mimics the syntactic and semantic structure of truthful information without a verifiable basis. We formally characterize these hallucinations using graph-theoretic semantic representations and probabilistic verifiability measures. To mitigate them, we propose a trust-metric-based federated learning (FL) framework that ensures integrity and accountability across distributed data silos. The framework introduces a trust score that balances explainability and performance, optimized via multi-objective reinforcement learning. We quantify the trade-off between explainability and performance, enabling precise control over hallucination risk. Extensive experiments on benchmark datasets demonstrate a 45% reduction in structural hallucinations while maintaining high task performance. This work enhances the reliability of LLMs, with implications for high-stakes applications.
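The trust score described in the abstract, a per-client balance between explainability and performance used to weight contributions in federated aggregation, can be illustrated with a minimal sketch. The Python snippet below is a hypothetical illustration, not the paper's implementation: the names (ClientReport, trust_score, aggregation_weights) and the simple convex-combination form are assumptions, and the multi-objective reinforcement learning the paper uses to optimize the trade-off is not shown here.

```python
# Minimal sketch of a trust score balancing explainability and performance
# for federated clients. All names and the convex-combination form are
# hypothetical; the paper optimizes this trade-off with multi-objective
# reinforcement learning, which is beyond this illustration.

from dataclasses import dataclass


@dataclass
class ClientReport:
    """Per-silo metrics a federated client might report."""
    explainability: float  # in [0, 1], e.g. fraction of outputs with verifiable support
    performance: float     # in [0, 1], e.g. task accuracy on a held-out set


def trust_score(report: ClientReport, alpha: float = 0.5) -> float:
    """Convex combination trading off explainability against performance.

    alpha = 1 weights explainability only; alpha = 0 weights performance only.
    """
    return alpha * report.explainability + (1 - alpha) * report.performance


def aggregation_weights(reports: list[ClientReport], alpha: float = 0.5) -> list[float]:
    """Normalize trust scores into weights for trust-weighted federated averaging."""
    scores = [trust_score(r, alpha) for r in reports]
    total = sum(scores) or 1.0  # guard against an all-zero degenerate case
    return [s / total for s in scores]


if __name__ == "__main__":
    # A highly explainable but weaker client vs. a strong but opaque client:
    reports = [ClientReport(explainability=0.9, performance=0.7),
               ClientReport(explainability=0.4, performance=0.95)]
    print(aggregation_weights(reports, alpha=0.6))
```

Raising alpha shifts aggregation weight toward clients whose outputs are more verifiable, which is the lever the abstract refers to when it describes "precise control over hallucination risk."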
Keywords Large Language Models, Structural Hallucination, Federated Learning, Trust Metrics, Explainability, Performance Trade-off
Field Engineering
Published In Volume 2, Issue 5, September-October 2020
Published On 2020-09-10
DOI https://doi.org/10.36948/ijfmr.2020.v02i05.61072
