International Journal For Multidisciplinary Research
E-ISSN: 2582-2160 • Impact Factor: 9.24
A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal
Ethical Decision-Making in Autonomous Robots Using Reinforcement Learning: A Comprehensive Exploration
| | |
|---|---|
| Author(s) | Sridev Anoop |
| Country | India |
| Abstract | The rapid advancement of autonomous robots in domains such as healthcare, transportation, and disaster response necessitates robust ethical decision-making frameworks to mitigate risks associated with uncertain environments. This research delves deeply into the application of reinforcement learning (RL) for enabling "safe" decisions in robots facing multiple risky outcomes. By synthesizing over 80 peer-reviewed sources, including recent 2024-2025 surveys on ethical RL and safe RL benchmarks, it systematically addresses three pivotal research questions: (1) How should a robot choose between two risky paths? (2) Can RL augmented with reward shaping effectively reduce harm in unpredictable environments? (3) How do prominent RL algorithms compare in terms of safety efficacy in robotic applications? Key findings indicate that multi-objective RL (MORL) and constrained policy optimization variants, such as Proximal Policy Optimization (PPO) with Lagrangian constraints, outperform traditional RL in balancing ethical imperatives with task performance, achieving up to 40% reductions in violation rates in simulated benchmarks. Reward shaping, particularly potential-based and ethics-informed variants, accelerates ethical convergence by 3-6x while curbing reward hacking—a common pitfall where agents exploit reward loopholes. A novel student-led simulation in a custom gridworld environment demonstrates that shaped rewards enhance average returns by approximately 12% with near-zero harm incidents over 1,000 episodes. This expanded study proposes an integrated Ethical Safe RL (ESRL) framework, incorporating human-in-the-loop feedback and verifiable constraints, tailored for accessible implementation in educational settings. It underscores persistent challenges like distributional shifts in real-world deployment and calls for interdisciplinary ethics integration. By bridging theoretical surveys with empirical simulations, this work empowers high-school researchers to contribute to responsible AI robotics. |
| Keywords | Reinforcement Learning, Ethical AI, Safe Robotics, Reward Shaping, Multi-Objective Optimization |
| Field | Engineering |
| Published In | Volume 8, Issue 1, January-February 2026 |
| Published On | 2026-01-07 |
| DOI | https://doi.org/10.36948/ijfmr.2026.v08i01.65964 |
| Short DOI | https://doi.org/hbjmhz |
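The abstract highlights potential-based reward shaping as a way to speed ethical convergence without changing which policy is optimal. As an illustration only (this is not the paper's code — the gridworld layout, the `GOAL`/`HAZARD` cells, and the potential function `phi` are our own assumptions), the core idea can be sketched in a few lines of Python:

```python
# Illustrative sketch of potential-based reward shaping in a tiny gridworld.
# r'(s, s') = r(s, s') + gamma * phi(s') - phi(s), which provably preserves
# the optimal policy (Ng, Harada & Russell, 1999). All constants below are
# hypothetical choices, not values from the paper.

GAMMA = 0.9
GOAL = (3, 3)    # assumed goal cell
HAZARD = (1, 1)  # assumed harmful cell the agent should avoid

def phi(state):
    """Potential function: negative Manhattan distance to the goal."""
    return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

def base_reward(state, next_state):
    """Sparse task reward with an ethics-informed harm penalty."""
    if next_state == GOAL:
        return 10.0
    if next_state == HAZARD:
        return -10.0   # large penalty for entering the harmful cell
    return -0.1        # small per-step cost

def shaped_reward(state, next_state):
    """Shaped reward: adds gamma * phi(s') - phi(s) to the base reward."""
    return base_reward(state, next_state) + GAMMA * phi(next_state) - phi(state)
```

Under this sketch, a step from `(0, 0)` toward the goal receives a dense positive signal (`shaped_reward((0, 0), (0, 1))` is positive while the base reward for the same step is only the `-0.1` step cost), while stepping into the hazard remains strongly negative — the shaping accelerates learning without masking the harm penalty.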
A CrossRef DOI is assigned to each research paper published in the journal; the IJFMR DOI prefix is 10.36948/ijfmr.
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.