The Application and Ethical Implication of Generative AI in Mental Health: Systematic Review

JMIR Ment Health. 2025 Jun 27;12:e70610. doi: 10.2196/70610.

Abstract

Background: Mental health disorders affect an estimated 1 in 8 individuals globally, yet traditional interventions often face barriers, such as limited accessibility, high costs, and persistent stigma. Recent advancements in generative artificial intelligence (GenAI) have introduced AI systems capable of understanding and producing humanlike language in real time. These developments present new opportunities to enhance mental health care.

Objective: We aimed to systematically examine the current applications of GenAI in mental health, focusing on 3 core domains: diagnosis and assessment, therapeutic tools, and clinician support. In addition, we identified and synthesized key ethical issues reported in the literature.

Methods: We conducted a comprehensive literature search, following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines, in PubMed, ACM Digital Library, Scopus, Embase, PsycInfo, and Google Scholar databases to identify peer-reviewed studies published from October 1, 2019, to September 30, 2024. After screening 783 records, 79 (10.1%) studies met the inclusion criteria.

Results: The number of studies on GenAI applications in mental health has grown substantially since 2023. Studies on diagnosis and assessment (37/79, 47%) primarily used GenAI models to detect depression and suicidality from text data. Studies on therapeutic applications (20/79, 25%) investigated GenAI-based chatbots and adaptive systems for emotional and behavioral support, reporting promising outcomes but revealing limited real-world deployment and safety assurance. Clinician support studies (24/79, 30%) explored GenAI's role in clinical decision-making, documentation and summarization, therapy support, training and simulation, and psychoeducation. Ethical concerns were consistently reported across all 3 domains. On the basis of these findings, we proposed an integrative ethical framework, GenAI4MH, comprising 4 core dimensions (data privacy and security, information integrity and fairness, user safety, and ethical governance and oversight) to guide the responsible use of GenAI in mental health contexts.

Conclusions: GenAI shows promise in addressing the escalating global demand for mental health services. It may augment traditional approaches by enhancing diagnostic accuracy, offering more accessible support, and reducing clinicians' administrative burden. However, ensuring ethical and effective implementation requires comprehensive safeguards, particularly around privacy, algorithmic bias, and responsible user engagement.

Keywords: generative AI; large language models; mental health; mental health detection and diagnosis; therapeutic chatbots.

Publication types

  • Systematic Review
  • Review

MeSH terms

  • Artificial Intelligence* / ethics
  • Humans
  • Mental Disorders* / diagnosis
  • Mental Disorders* / therapy
  • Mental Health* / ethics