
International Journal For Multidisciplinary Research
E-ISSN: 2582-2160
Evaluating Generative AI: Challenges, Methods, and Future Directions
| Author(s) | Latha Ramamoorthy |
| --- | --- |
| Country | United States |
| Abstract | Generative Artificial Intelligence (AI) is transforming industries by producing high-quality text, images, music, and code. Its applications extend to natural language processing, computer vision, and creative arts. However, assessing these systems' performance and impact remains challenging due to their complexity, subjectivity, and open-ended outputs. This paper comprehensively reviews evaluation methods for generative AI, beginning with its evolution and major applications, including advanced models like GPT, DALL·E, and AlphaCode. It categorizes evaluation approaches into quantitative metrics (such as BLEU and FID) and qualitative methods (human assessment and user-centered testing). Key challenges, such as subjectivity, bias, and scalability, are explored alongside emerging trends like automated evaluation tools, ethical impact assessments, and multimodal techniques. Through real-world case studies, this paper highlights practical evaluation strategies and their limitations. By integrating current best practices and identifying future research opportunities, this study aims to guide the development of reliable, fair, and comprehensive evaluation frameworks for generative AI systems. |
| Keywords | AI Evaluation, Generative AI, AI Performance Metrics, Human-AI Assessment, Bias in AI |
| Field | Computer > Artificial Intelligence / Simulation / Virtual Reality |
| Published In | Volume 7, Issue 1, January-February 2025 |
| Published On | 2025-02-20 |
| DOI | https://doi.org/10.36948/ijfmr.2025.v07i01.37182 |
| Short DOI | https://doi.org/g85svh |
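
As an illustration of the quantitative metrics named in the abstract, the minimal sketch below scores a generated sentence against reference sentences with BLEU using NLTK. The reference and candidate sentences are hypothetical examples chosen for illustration, not data or code from the paper.

```python
# Minimal sketch: BLEU scoring of a generated sentence against references.
# The sentences below are hypothetical examples, not data from the paper.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "the cat sat on the mat".split(),
    "a cat was sitting on the mat".split(),
]
candidate = "the cat is sitting on the mat".split()

# Smoothing avoids zero scores when higher-order n-grams have no overlap.
score = sentence_bleu(
    references,
    candidate,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU: {score:.3f}")
```

BLEU captures n-gram overlap with references and is therefore most meaningful when reasonable reference outputs exist; for open-ended generation, the paper pairs such metrics with human and user-centered assessment.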