International Journal For Multidisciplinary Research
E-ISSN: 2582-2160
Abstractive Text Summarization: A Systematic Review of Techniques, Evaluation, and Future Directions
| Author(s) | Anu Saini, Jyoti Vashishtha, Sunita Beniwal |
|---|---|
| Country | India |
| Abstract | Digital text is growing at an exponential rate, making effective Automatic Text Summarization (ATS) more important than ever. This paper provides a comprehensive overview of abstractive text summarization, covering its evolution, evaluation methods, open challenges, and directions for future research. We examine the major architectural paradigms, beginning with early Sequence-to-Sequence (Seq2Seq) models enhanced by attention and pointer-generator mechanisms, moving through the "pre-train and fine-tune" era defined by Transformer-based models such as BART, T5, and PEGASUS, and ending with the current state of the art, marked by the rise of Large Language Models (LLMs) used with zero-shot and few-shot prompting. Our review reveals a fundamental trade-off: modern models, especially LLMs, produce the most fluent and coherent summaries, yet they frequently lack factual consistency, giving rise to the problem of "hallucination." The field also struggles with robust evaluation beyond lexical-overlap metrics such as ROUGE, with high computational costs, and with limited adaptability to new domains. Future research directions include retrieval-augmented generation (RAG), factuality-aware evaluation metrics, and models that are more efficient, controllable, and fair. Ultimately, the field is moving toward hybrid systems that combine the generative power of LLMs with the factual grounding of external knowledge, aiming for summaries that are not only human-like but also reliable in real-world use. |
| Keywords | Abstractive Text Summarization, Large Language Models (LLMs), Transformers, Sequence-to-Sequence (Seq2Seq) Models, Factual Consistency, Hallucination, ROUGE, Natural Language Processing (NLP), Text Summarization. |
| Field | Computer > Artificial Intelligence / Simulation / Virtual Reality |
| Published In | Volume 7, Issue 5, September-October 2025 |
| Published On | 2025-10-10 |
| DOI | https://doi.org/10.36948/ijfmr.2025.v07i05.54789 |
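The abstract notes that evaluation still leans on lexical-overlap metrics such as ROUGE. As a minimal sketch of what such a metric computes, and of why it cannot detect hallucination, the pure-Python function below calculates ROUGE-1 precision, recall, and F1 from clipped unigram counts. It is an illustrative simplification, not code from the paper; production toolkits such as the `rouge-score` package add proper tokenization, stemming, and ROUGE-2/ROUGE-L variants.

```python
from collections import Counter

def rouge_1(reference: str, candidate: str) -> dict:
    """ROUGE-1: clipped unigram overlap between a reference and a candidate summary."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each candidate unigram is credited at most as often
    # as it appears in the reference.
    overlap = sum(min(count, ref_counts[word]) for word, count in cand_counts.items())
    precision = overlap / max(sum(cand_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

if __name__ == "__main__":
    reference = "the model generates a factually consistent summary"
    candidate = "the model generates a factually wrong summary"  # fluent but unfaithful
    print(rouge_1(reference, candidate))  # scores high despite the factual error
```

Note the failure mode this exposes: the candidate contradicts the reference ("wrong" vs. "consistent") yet still scores roughly 0.86 F1 on unigram overlap, while a faithful paraphrase with different wording would score far lower. This is exactly the gap between lexical overlap and factual consistency that the review's call for factuality-aware metrics addresses.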