
International Journal For Multidisciplinary Research (IJFMR)
E-ISSN: 2582-2160 • Impact Factor: 9.24
A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal
Transforming Human-AI Interactions through Reinforcement Learning from Human Feedback and AI Feedback: A Human-AI Classification Report
| Author(s) | RATNESH KUMAR SHARMA, Prof. Dr. SATYA SINGH |
|---|---|
| Country | India |
| Abstract | AI systems are progressively being implemented across diverse disciplines and application areas. This growth has intensified scientific attention and public concern regarding the active involvement of humans in the development, operation, and adoption of these systems. Notwithstanding this concern, the majority of current scholarship on AI and Human–Computer Interaction (HCI) focuses predominantly on elucidating the functionality of AI systems and, occasionally, on enabling users to challenge AI determinations. This research assesses the efficacy and dependability of a hybrid feedback-driven learning methodology utilizing a classification model trained on multi-class, human-labelled data. The methodology entails encoding diagnostic labels into numerical classes via LabelEncoder and implementing a reinforcement learning framework that incorporates both human-curated and AI-generated feedback. The classification report demonstrates exceptional performance across all categories, with an overall accuracy of 0.99. Precision, recall, and F1-score metrics typically approach 1.00, indicating negligible classification errors and robust generalizability. Class 2 shows a somewhat lower precision of 0.94 but a recall of 1.00, meaning it produces some false positives but misses no true instances. The macro and weighted averages for all metrics are 0.99 or higher, showing that the method remains effective despite uneven class distribution. The results show that reinforcement learning from human feedback (RLHF) and reinforcement learning from AI feedback (RLAIF) improve AI decision-making in multi-class settings. These findings have implications for AI systems that work alongside people in healthcare, autonomous driving, and personalised decision-making, where accuracy and ethics are paramount. |
| Keywords | Reinforcement Learning; Artificial Intelligence; Human–AI Interactions; Human–Computer Interaction; Human Feedback; Decision Making; AI Classification |
| Field | Engineering |
| Published In | Volume 7, Issue 4, July-August 2025 |
| Published On | 2025-08-13 |
| DOI | https://doi.org/10.36948/ijfmr.2025.v07i04.52186 |
| Short DOI | https://doi.org/g9w7j2 |
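
The abstract describes encoding diagnostic labels into numerical classes with LabelEncoder and evaluating the model through a per-class classification report. The following is a minimal sketch of that evaluation pipeline, assuming scikit-learn; the placeholder dataset, label names, and classifier choice are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of the label-encoding and classification-report steps the
# abstract describes. Dataset, features, and classifier are illustrative
# placeholders, not the authors' actual pipeline.
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Hypothetical multi-class, human-labelled diagnostic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                                     # placeholder feature matrix
labels = rng.choice(["benign", "moderate", "severe"], size=500)   # string diagnostic labels

# Encode string diagnostic labels into numerical classes (0, 1, 2, ...).
encoder = LabelEncoder()
y = encoder.fit_transform(labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Per-class precision, recall, F1, plus macro/weighted averages,
# i.e. the kind of classification report summarized in the abstract.
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred, target_names=encoder.classes_))
```

On real, imbalanced data the macro and weighted averages in this report are the figures the abstract compares (both 0.99 or higher in the paper's experiments).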
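The abstract also refers to a reinforcement learning framework that combines human-curated and AI-generated feedback (RLHF and RLAIF). The fragment below is a hedged sketch of one common way such signals can be blended into a single reward; the fixed mixing coefficient, function names, and stub scorers are assumptions for illustration and do not come from the paper.

```python
# Illustrative blending of human and AI feedback into one reward signal.
# The mixing weight and the reward-model stubs are assumptions, not the
# authors' design.
from typing import Callable

RewardFn = Callable[[str, str], float]

def blended_reward(human_score: RewardFn, ai_score: RewardFn,
                   alpha: float = 0.5) -> RewardFn:
    """Return r(prompt, response) = alpha*human + (1 - alpha)*ai."""
    def reward(prompt: str, response: str) -> float:
        return alpha * human_score(prompt, response) + (1.0 - alpha) * ai_score(prompt, response)
    return reward

# Example with stub scorers; a real setup would use learned reward models.
r = blended_reward(lambda p, s: 0.9, lambda p, s: 0.7, alpha=0.6)
print(r("diagnose this case", "class 2"))  # 0.9*0.6 + 0.7*0.4 = 0.82
```

A blended reward of this kind would then drive a standard policy-optimization loop; the choice of alpha controls how much the human-curated signal dominates the AI-generated one.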