International Journal For Multidisciplinary Research

E-ISSN: 2582-2160


Abstractive Text Summarization: A Systematic Review of Techniques, Evaluation, and Future Directions

Author(s) Ms. Anu Saini, Jyoti Vashishtha, Sunita Beniwal
Country India
Abstract Digital text is growing exponentially, making effective Automatic Text Summarization (ATS) more important than ever. This paper presents a comprehensive overview of abstractive text summarization, covering its evolution, evaluation practices, open challenges, and directions for future research. We examine the major architectural paradigms, beginning with early Sequence-to-Sequence (Seq2Seq) models augmented by attention and pointer-generator mechanisms, moving through the "pre-train and fine-tune" era defined by Transformer-based models such as BART, T5, and PEGASUS, and concluding with the current state of the art, marked by the rise of Large Language Models (LLMs) applied through zero-shot and few-shot prompting. Our review identifies a fundamental trade-off: modern models, especially LLMs, produce the most fluent and coherent summaries, yet they frequently lack factual consistency, a failure mode known as "hallucination." The field also struggles with robust evaluation that goes beyond lexical-overlap metrics such as ROUGE, with high computational costs, and with limited adaptability to new domains. Promising directions for addressing these problems include retrieval-augmented generation (RAG), evaluation metrics that are more sensitive to factual accuracy, and models that are more efficient, controllable, and fair. Ultimately, the field is moving toward hybrid systems that combine the generative power of LLMs with the factual grounding of external knowledge, aiming for summaries that are not only human-like but also reliable in real-world use.
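
To make the lexical-overlap limitation noted in the abstract concrete, the minimal sketch below computes ROUGE-1 recall (clipped unigram overlap between a candidate summary and a reference) from its standard definition. The function name and example sentences are illustrative choices, not drawn from the paper; production evaluation would also apply stemming and report precision/F1.

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """ROUGE-1 recall: the fraction of reference unigrams that also
    appear in the candidate, with counts clipped so each reference
    token is matched at most as often as it occurs in the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[word]) for word, count in ref.items())
    return overlap / sum(ref.values())

# A faithful abstractive paraphrase can still score zero, which is
# the core weakness of lexical-overlap evaluation the paper critiques:
reference = "the cat sat on the mat"
paraphrase = "a feline rested upon a rug"
print(rouge1_recall(paraphrase, reference))  # 0.0 despite equivalent meaning
```

Because the metric rewards verbatim token reuse, extractive copying scores perfectly while a semantically equivalent abstractive rewrite can score near zero, which motivates the factuality-aware metrics discussed above.
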
Keywords Abstractive Text Summarization, Large Language Models (LLMs), Transformers, Sequence-to-Sequence (Seq2Seq) Models, Factual Consistency, Hallucination, ROUGE, Natural Language Processing (NLP), Text Summarization.
Field Computer > Artificial Intelligence / Simulation / Virtual Reality
Published In Volume 7, Issue 5, September-October 2025
Published On 2025-10-10
DOI https://doi.org/10.36948/ijfmr.2025.v07i05.54789
