International Journal For Multidisciplinary Research (IJFMR)
E-ISSN: 2582-2160 • Impact Factor: 9.24
A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal
Human Vulnerability, Machine Law: Ethical Risks and Global Governance Challenges in Generative AI
| Author(s) | Mr. Kaukab Saqlain Ansari |
|---|---|
| Country | India |
| Abstract | Generative Artificial Intelligence (GenAI) is rapidly transforming digital engagement, but its expanding capabilities introduce novel ethical and legal dilemmas. This paper investigates the under-examined risks of using GenAI models as informal life coaches and therapists, as well as notable regional differences in AI governance, especially in Asia and Africa. Using a mixed-methods design, the study combines AI simulation testing of four leading models (ChatGPT, Gemini, Claude, and Grok) against the Columbia Suicide Severity Rating Scale (C-SSRS) with survey data from 592 users on their experience of AI as a life coach or therapist. Results reveal differences in how the models prioritise safety: when assessing high-risk situations in which a user might harm themselves, models either disengaged or provided repetitive, generic advice. Survey responses likewise call the usefulness of AI chatbots into question: most users found them “not at all helpful” (55.4%) for emotional support, reported feeling “never better” (56.7%) after using AI as a therapist, and preferred human therapists (79.2%) for privacy reasons. Further, the analysis suggests a clear global policy split, with the EU providing a proactive regulatory framework in contrast to legal and enforcement gaps in the Global South. Overall, this research suggests that using GenAI for mental health support is premature and poses significant risks to vulnerable users. Finally, the paper argues for urgent, risk-based regulation, hybrid models of AI and human support, and user-centred research to create ethical guardrails for responsible innovation in AI. |
| Keywords | Generative AI, AI Ethics, Mixed-Methods Research, Regulatory Gaps, Global AI Policy |
| Field | Computer > Artificial Intelligence / Simulation / Virtual Reality |
| Published In | Volume 7, Issue 5, September-October 2025 |
| Published On | 2025-09-29 |
| DOI | https://doi.org/10.36948/ijfmr.2025.v07i05.56786 |