1
Jesudason D, Bacchi S, Bastiampillai T. Artificial intelligence (AI) in psychotherapy: A challenging frontier. Australas Psychiatry 2025:10398562251346075. [PMID: 40421579 DOI: 10.1177/10398562251346075] [Indexed: 05/28/2025]
Abstract
Objective: Artificial intelligence (AI) chatbots have emerged as a potential tool to revolutionise mental health care, offering innovative solutions for the diagnosis and management of psychiatric conditions. AI psychotherapy is being trialled as a possible replacement for, or adjunct to, traditional human-led therapy, showing promise in enhancing the accessibility and personalisation of mental health care. This paper explores the potential risks of AI for psychotherapy.
Conclusions: AI psychotherapy represents relatively uncharted territory. There are concerns surrounding the trainability of AI chatbots, as well as the ultimate ability of an AI to deliver human-like care effectively. Other consequences, such as the potential for technological misuse, must also be considered. Thus, as AI continues to evolve, its integration must be approached with caution, with the necessary regulatory mechanisms in place for effective and equitable implementation.
Affiliation(s)
- Daniel Jesudason
- Faculty of Health & Medical Sciences, The University of Adelaide, Adelaide, SA, Australia
- Stephen Bacchi
- Faculty of Health & Medical Sciences, The University of Adelaide, Adelaide, SA, Australia
- Department of Neurology, Harvard Medical School, Boston, MA, USA
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Tarun Bastiampillai
- College of Medicine & Public Health, Flinders University, Bedford Park, SA, Australia
- Department of Psychiatry, Monash University, Clayton, VIC, Australia
- Division of Mental Health, Flinders Medical Centre, Bedford Park, SA, Australia
2
Scholich T, Barr M, Wiltsey Stirman S, Raj S. A Comparison of Responses from Human Therapists and Large Language Model-Based Chatbots to Assess Therapeutic Communication: Mixed Methods Study. JMIR Ment Health 2025; 12:e69709. [PMID: 40397927 DOI: 10.2196/69709] [Received: 12/05/2024] [Revised: 03/25/2025] [Accepted: 03/30/2025] [Indexed: 05/23/2025]
Abstract
BACKGROUND Consumers are increasingly using large language model-based chatbots to seek mental health advice or intervention due to ease of access and limited availability of mental health professionals. However, their suitability and safety for mental health applications remain underexplored, particularly in comparison to professional therapeutic practices. OBJECTIVE This study aimed to evaluate how general-purpose chatbots respond to mental health scenarios and compare their responses to those provided by licensed therapists. Specifically, we sought to identify chatbots' strengths and limitations, as well as the ethical and practical considerations necessary for their use in mental health care. METHODS We conducted a mixed methods study to compare responses from chatbots and licensed therapists to scripted mental health scenarios. We created 2 fictional scenarios and prompted 3 chatbots to create 6 interaction logs. We recruited 17 therapists and conducted study sessions that consisted of 3 activities. First, therapists responded to the 2 scenarios using a Qualtrics form. Second, therapists went through the 6 interaction logs using a think-aloud procedure to highlight their thoughts about the chatbots' responses. Finally, we conducted a semistructured interview to explore subjective opinions on the use of chatbots for supporting mental health. The study sessions were analyzed using thematic analysis. The interaction logs from chatbot and therapist responses were coded using the Multitheoretical List of Therapeutic Interventions codes and then compared to each other. RESULTS We identified 7 themes describing the strengths and limitations of the chatbots as compared to therapists. These include elements of good therapy in chatbot responses, conversational style of chatbots, insufficient inquiry and feedback seeking by chatbots, chatbot interventions, client engagement, chatbots' responses to crisis situations, and considerations for chatbot-based therapy. 
In the use of Multitheoretical List of Therapeutic Interventions codes, we found that therapists evoked more elaboration (Mann-Whitney U=9; P=.001) and used more self-disclosure (U=45.5; P=.37) as compared to the chatbots. The chatbots used affirming (U=28; P=.045) and reassuring (U=23; P=.02) language more often than the therapists. The chatbots also used psychoeducation (U=22.5; P=.02) and suggestions (U=12.5; P=.003) more often than the therapists. CONCLUSIONS Our study demonstrates the unsuitability of general-purpose chatbots to safely engage in mental health conversations, particularly in crisis situations. While chatbots display elements of good therapy, such as validation and reassurance, overuse of directive advice without sufficient inquiry and use of generic interventions make them unsuitable as therapeutic agents. Careful research and evaluation will be necessary to determine the impact of chatbot interactions and to identify the most appropriate use cases related to mental health.
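The group comparisons reported in this abstract rest on the Mann-Whitney U statistic applied to per-transcript counts of each intervention code. As a minimal sketch of how such a U statistic is obtained, the following standalone Python function computes U for the first sample from rank sums, assigning tied values their average rank; the example counts are hypothetical, not taken from the study.

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic for sample_a versus sample_b.

    U counts, over all cross-sample pairs, how often a value from
    sample_a exceeds one from sample_b (ties count one half). It is
    computed here via rank sums, with tied values averaged.
    """
    combined = sorted([(v, 0) for v in sample_a] + [(v, 1) for v in sample_b])
    values = [v for v, _ in combined]
    n = len(values)
    rank_sum_a = 0.0
    pos = 0
    while pos < n:
        # Find the run of tied values starting at pos.
        end = pos
        while end + 1 < n and values[end + 1] == values[pos]:
            end += 1
        avg_rank = (pos + end) / 2 + 1  # average of the 1-based ranks in the run
        for k in range(pos, end + 1):
            if combined[k][1] == 0:     # value came from sample_a
                rank_sum_a += avg_rank
        pos = end + 1
    n_a = len(sample_a)
    return rank_sum_a - n_a * (n_a + 1) / 2


# Hypothetical per-transcript counts of one intervention code:
chatbot_counts = [2, 3, 1, 2, 3]
therapist_counts = [5, 7, 6, 8, 4]
print(mann_whitney_u(chatbot_counts, therapist_counts))  # -> 0.0 (complete separation)
```

In practice one would use `scipy.stats.mannwhitneyu`, which also returns the P value; the hand-rolled version above only illustrates where the U figures in the abstract come from.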
Affiliation(s)
- Till Scholich
- Institute for Human-Centered AI, Stanford University, Stanford, CA, United States
- Maya Barr
- PGSP-Stanford PsyD Consortium, Palo Alto University, Palo Alto, CA, United States
- Shannon Wiltsey Stirman
- Dissemination and Training Division, National Center for PTSD, Menlo Park, CA, United States
- Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Shriti Raj
- Institute for Human-Centered AI, Stanford University, Stanford, CA, United States
- Department of Medicine Center for Biomedical Informatics Research, Stanford University, Stanford, CA, United States
3
Bouguettaya A, Team V, Stuart EM, Aboujaoude E. AI-driven report-generation tools in mental healthcare: A review of commercial tools. Gen Hosp Psychiatry 2025; 94:150-158. [PMID: 40088857 DOI: 10.1016/j.genhosppsych.2025.02.018] [Received: 12/10/2024] [Revised: 02/21/2025] [Accepted: 02/21/2025] [Indexed: 03/17/2025]
Abstract
Artificial intelligence (AI) systems are increasingly being integrated into clinical care, including for AI-powered note-writing. We aimed to develop and apply a scale for assessing mental health electronic health records (EHRs) that use large language models (LLMs) for note-writing, focusing on their features, security, and ethics. The assessment involved analyzing product information and directly querying vendors about their systems. On their websites, the majority of vendors provided comprehensive information on data protection, privacy measures, multi-platform availability, patient access features, software update history, and Meaningful Use compliance. Most products clearly indicated the LLM's capabilities in creating customized reports or functioning as a co-pilot. However, critical information was often absent, including details on LLM training methodologies, the specific LLM used, bias correction techniques, and methods for evaluating the evidence base. The lack of transparency regarding LLM specifics and bias mitigation strategies raises concerns about the ethical implementation and reliability of these systems in clinical practice. While LLM-enhanced EHRs show promise in alleviating the documentation burden for mental health professionals, there is a pressing need for greater transparency and standardization in reporting LLM-related information. We propose recommendations for the future development and implementation of these systems to ensure they meet the highest standards of security, ethics, and clinical care.
Affiliation(s)
- Ayoub Bouguettaya
- Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, United States; School of Nursing and Midwifery, Monash University, Melbourne, Victoria, Australia
- Victoria Team
- School of Nursing and Midwifery, Monash University, Melbourne, Victoria, Australia
- Elizabeth M Stuart
- Jonathan Jaques Children's Cancer Institute, Miller Children's & Women's Hospital Long Beach, Long Beach, CA, United States
- Elias Aboujaoude
- Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, United States; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States.
4
Hamzyan Olia JB, Raman A, Hsu CY, Alkhayyat A, Nourazarian A. A comprehensive review of neurotransmitter modulation via artificial intelligence: A new frontier in personalized neurobiochemistry. Comput Biol Med 2025; 189:109984. [PMID: 40088712 DOI: 10.1016/j.compbiomed.2025.109984] [Received: 11/05/2024] [Revised: 02/18/2025] [Accepted: 03/03/2025] [Indexed: 03/17/2025]
Abstract
The deployment of artificial intelligence (AI) is revolutionizing neuropharmacology and drug development, allowing the modulation of neurotransmitter systems at the personal level. This review focuses on the neuropharmacology and regulation of neurotransmitters using predictive modeling, closed-loop neuromodulation, and precision drug design. The fusion of AI with machine learning, deep learning, and computational modeling allows for the real-time tracking and enhancement of biological processes within the body. An exemplary application of AI is the use of DeepMind's AlphaFold to design new GABA reuptake inhibitors for epilepsy and anxiety. Likewise, BenevolentAI and IBM Watson have fast-tracked drug repositioning for neurodegenerative conditions, and new serotonin reuptake inhibitors for depression have been identified through AI screening. In addition, AI-optimised deep brain stimulation (DBS) settings for patients with Parkinson's disease, and reinforcement learning-based transcranial magnetic stimulation (TMS) for patients with major depressive disorder (MDD), lead to better treatment outcomes. This review also highlights challenges, including algorithmic bias, ethical concerns, and limited clinical validation. Proposals to incorporate AI with optogenetics, CRISPR, neuroprosthetics, and other advanced technologies foster further exploration and refinement of precision neurotherapeutic approaches. By bridging computational neuroscience with clinical applications, AI has the potential to revolutionize neuropharmacology and improve patient-specific treatment strategies. We address critical challenges, such as algorithmic bias and ethical concerns, by proposing bias auditing, diverse datasets, explainable AI, and regulatory frameworks as practical solutions to ensure equitable and transparent AI applications in neurotransmitter modulation.
Affiliation(s)
- Arasu Raman
- Faculty of Business and Communications, INTI International University, Putra Nilai, 71800, Malaysia
- Chou-Yi Hsu
- Thunderbird School of Global Management, Arizona State University, Tempe Campus, Phoenix, AZ, 85004, USA.
- Ahmad Alkhayyat
- Department of Computer Techniques Engineering, College of Technical Engineering, The Islamic University, Najaf, Iraq; Department of Computer Techniques Engineering, College of Technical Engineering, The Islamic University of Al Diwaniyah, Al Diwaniyah, Iraq; Department of Computers Techniques Engineering, College of Technical Engineering, The Islamic University of Babylon, Babylon, Iraq
- Alireza Nourazarian
- Department of Basic Medical Sciences, Khoy University of Medical Sciences, Khoy, Iran.
5
Mei Z, Jin S, Li W, Zhang S, Cheng X, Li Y, Wang M, Song Y, Tu W, Yin H, Wang Q, Bai Y, Xu G. Ethical risks in robot health education: A qualitative study. Nurs Ethics 2025; 32:913-930. [PMID: 39138639 DOI: 10.1177/09697330241270829] [Indexed: 08/15/2024]
Abstract
Background: As health education robots may become a significant support force in nursing practice, it is imperative to adhere to the European Union's concept of "Responsible Research and Innovation" (RRI) and reflect deeply on the ethical risks hidden in the process of intelligent robotic health education.
Aim: This study explores the perceptions of nursing professionals regarding the potential ethical risks associated with the clinical practice of intelligent robotic health education.
Research design: This study adopts a descriptive phenomenological approach, employing Colaizzi's seven-step method for data analysis.
Participants and research context: We conducted semi-structured interviews with 17 nursing professionals from tertiary comprehensive hospitals in China.
Ethical considerations: This study was approved by the Ethics Committee of the Second Affiliated Hospital of Nanjing University of Chinese Medicine, Jiangsu Provincial Second Chinese Medicine Hospital.
Findings: Nursing personnel, adhering to the principles of RRI and the concept of "person-centered" care, critically reflected on the potential ethical risks inherent in robotic health education. This reflection identified six themes: (a) threats to human dignity, (b) concerns about patient safety, (c) apprehensions about privacy disclosure, (d) worries about implicit burdens, (e) concerns about responsibility attribution, and (f) expectations for social support.
Conclusions: This study focuses on health education robots, which are perceived to carry minimal ethical risk, and provides rich, detailed insights into the ethical risks associated with robotic health education. Even seemingly safe health education robots elicit significant concerns among professionals regarding their safety and ethics in clinical practice. Going forward, it is essential to remain attentive to the potential negative impacts of robots and to actively address them.
Affiliation(s)
- ZiQi Mei
- Nanjing University of Chinese Medicine
- SuJu Zhang
- The Second Affiliated Hospital of Nanjing University of Chinese Medicine
- XiRong Cheng
- The Second Affiliated Hospital of Nanjing University of Chinese Medicine
- YiTing Li
- Nanjing University of Chinese Medicine
- Meng Wang
- Nanjing University of Chinese Medicine
- Qing Wang
- Nanjing University of Chinese Medicine
- YaMei Bai
- Nanjing University of Chinese Medicine
- GuiHua Xu
- Nanjing University of Chinese Medicine
6
Wu X, Liew K, Dorahy MJ. Trust, Anxious Attachment, and Conversational AI Adoption Intentions in Digital Counseling: A Preliminary Cross-Sectional Questionnaire Study. JMIR AI 2025; 4:e68960. [PMID: 40262137 PMCID: PMC12056427 DOI: 10.2196/68960] [Received: 11/18/2024] [Revised: 01/21/2025] [Accepted: 02/23/2025] [Indexed: 04/24/2025]
Abstract
BACKGROUND Conversational artificial intelligence (CAI) is increasingly used in various counseling settings to deliver psychotherapy, provide psychoeducational content, and offer support such as companionship or emotional aid. Research has shown that CAI has the potential to effectively address mental health issues when its associated risks are handled with great caution. It can provide mental health support to a wider population than conventional face-to-face therapy, with faster responses and at a more affordable cost. Despite CAI's many advantages in mental health support, potential users may differ in their willingness to adopt and engage with CAI to support their own mental health. OBJECTIVE This study focused on dispositional trust in AI and attachment styles, and examined how they are associated with individuals' intentions to adopt CAI for mental health support. METHODS A cross-sectional survey of 239 American adults was conducted. Participants were first assessed on their attachment style, then presented with a vignette about CAI use, after which their dispositional trust and subsequent adoption intentions toward CAI counseling were surveyed. Participants had not previously used CAI for digital counseling for mental health support. RESULTS Dispositional trust in artificial intelligence emerged as a critical predictor of CAI adoption intentions (P<.001), while attachment anxiety (P=.04), rather than avoidance (P=.09), was positively associated with the intention to adopt CAI counseling after controlling for age and gender. CONCLUSIONS These findings indicate that higher dispositional trust may lead to stronger adoption intentions, and that higher attachment anxiety may also be associated with greater CAI counseling adoption. Further research into users' attachment styles and dispositional trust is needed to understand individual differences in CAI counseling adoption, to enhance the safety and effectiveness of CAI-driven counseling services, and to tailor interventions. TRIAL REGISTRATION Open Science Framework; https://osf.io/c2xqd.
Affiliation(s)
- Xiaoli Wu
- School of Psychology, Speech and Hearing, University of Canterbury, Christchurch, New Zealand
- Kongmeng Liew
- School of Psychology, Speech and Hearing, University of Canterbury, Christchurch, New Zealand
- Martin J Dorahy
- School of Psychology, Speech and Hearing, University of Canterbury, Christchurch, New Zealand
7
Cecil J, Kleine AK, Lermer E, Gaube S. Mental health practitioners' perceptions and adoption intentions of AI-enabled technologies: an international mixed-methods study. BMC Health Serv Res 2025; 25:556. [PMID: 40241059 PMCID: PMC12001504 DOI: 10.1186/s12913-025-12715-8] [Received: 07/10/2024] [Accepted: 04/07/2025] [Indexed: 04/18/2025]
Abstract
BACKGROUND As mental health disorders continue to surge, exceeding the capacity of available therapeutic resources, the emergence of technologies enabled by artificial intelligence (AI) offers promising solutions for supporting and delivering patient care. However, there is limited research on mental health practitioners' understanding, familiarity, and adoption intentions regarding these AI technologies. We, therefore, examined to what extent practitioners' characteristics are associated with their learning and use intentions of AI technologies in four application domains (diagnostics, treatment, feedback, and practice management). These characteristics include medical AI readiness with its subdimensions, AI anxiety with its subdimensions, technology self-efficacy, affinity for technology interaction, and professional identification. METHODS Mixed-methods data from N = 392 German and US practitioners, encompassing psychotherapists (in training), psychiatrists, and clinical psychologists, was analyzed. A deductive thematic approach was employed to evaluate mental health practitioners' understanding and familiarity with AI technologies. Additionally, structural equation modeling (SEM) was used to examine the relationship between practitioners' characteristics and their adoption intentions for different technologies. RESULTS Qualitative analysis unveiled a substantial gap in familiarity with AI applications in mental healthcare among practitioners. While some practitioner characteristics were only associated with specific AI application areas (e.g., cognitive readiness with learning intentions for feedback tools), we found that learning intention, ethical knowledge, and affinity for technology interaction were relevant across all four application areas, underscoring their relevance in the adoption of AI technologies in mental healthcare. 
CONCLUSION This pre-registered study underscores the importance of recognizing the interplay between these diverse factors when designing training opportunities and, consequently, for a streamlined implementation of AI-enabled technologies in mental healthcare.
Affiliation(s)
- Julia Cecil
- Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Geschwister-Scholl-Platz 1, Munich, 80539, Germany.
- Anne-Kathrin Kleine
- Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Geschwister-Scholl-Platz 1, Munich, 80539, Germany
- Eva Lermer
- Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Geschwister-Scholl-Platz 1, Munich, 80539, Germany
- Department of Business Psychology, Technical University of Applied Sciences Augsburg, An der Hochschule 1, Augsburg, 86161, Germany
- Susanne Gaube
- UCL Global Business School for Health, University College London, 7 Sidings St, London, E20 2 AE, UK
8
Anaduaka US, Oladosu AO, Katsande S, Frempong CS, Awuku-Amador S. Leveraging artificial intelligence in the prediction, diagnosis and treatment of depression and anxiety among perinatal women in low- and middle-income countries: a systematic review. BMJ MENTAL HEALTH 2025; 28:e301445. [PMID: 40234194 PMCID: PMC12001354 DOI: 10.1136/bmjment-2024-301445] [Received: 11/05/2024] [Accepted: 02/25/2025] [Indexed: 04/17/2025]
Abstract
AIM The adoption of artificial intelligence (AI) tools is gaining traction in maternal mental health (MMH) research. Despite this growing usage, little is known about its prospects and challenges in low- and middle-income countries (LMICs). This study aims to systematically review articles on the role of AI in addressing MMH in LMICs. METHODS This systematic review adopts a patient and public involvement approach to investigate the role of AI in predicting, diagnosing, or treating perinatal depression and anxiety (PDA) among perinatal women in LMICs. Seven databases were searched for studies that reported on AI tools/methods for PDA published between January 2010 and July 2024. Eligible studies were identified and extracted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines using Covidence, and the data were synthesised using thematic analysis. RESULTS Out of 2203 studies, 19 studies across eight countries were deemed eligible for extraction and synthesis. The review revealed that supervised machine learning was the most common AI approach and was used to improve the early detection of depression and anxiety among perinatal women. Postpartum depression was the most frequently investigated MMH condition. Further, the review revealed that only three conversational agents (CAs)/chatbots were used to deliver psychological treatment. CONCLUSIONS The findings underscore the potential of AI-based methods in identifying risk factors and delivering psychological treatment for PDA. Future research should investigate the underlying mechanisms of the effectiveness of AI-based chatbots/CAs and assess their long-term effects for diagnosed mothers, to aid the improvement of MMH in LMICs. PROSPERO REGISTRATION NUMBER CRD42024549455.
Affiliation(s)
- Clinton Sekyere Frempong
- Department of Population and Behavioural Sciences, School of Public Health, University of Health and Allied Sciences, Hohoe, Ghana
9
Zisquit M, Shoa A, Oliva R, Perry S, Spanlang B, Brunstein Klomek A, Slater M, Friedman D. AI-Enhanced Virtual Reality Self-Talk for Psychological Counseling: Formative Qualitative Study. JMIR Form Res 2025; 9:e67782. [PMID: 40173447 PMCID: PMC12004015 DOI: 10.2196/67782] [Received: 10/21/2024] [Revised: 01/18/2025] [Accepted: 02/10/2025] [Indexed: 04/04/2025]
Abstract
BACKGROUND Access to mental health services continues to pose a global challenge, with current services often unable to meet the growing demand. This has sparked interest in conversational artificial intelligence (AI) agents as potential solutions. Despite this, the development of a reliable virtual therapist remains challenging, and the feasibility of AI fulfilling this sensitive role is still uncertain. One promising approach involves using AI agents for psychological self-talk, particularly within virtual reality (VR) environments. Self-talk in VR externalises self-conversation by enabling individuals to embody avatars representing themselves as both patient and counselor, thus enhancing cognitive flexibility and problem-solving abilities. However, participants sometimes experience difficulties progressing in sessions, which is where AI could offer guidance and support. OBJECTIVE This formative study aims to assess the challenges and advantages of integrating an AI agent into self-talk in VR for psychological counseling, focusing on user experience and the potential role of AI in supporting self-reflection, problem-solving, and positive behavioral change. METHODS Over two and a half years, we iteratively designed and developed a system and protocol integrating large language models (LLMs) within VR self-talk, addressing the user interface, speech-to-text functionality, fine-tuning of the LLMs, and prompt engineering. Upon completion of the design process, we conducted a 3-month exploratory qualitative study in which 11 healthy participants completed a session that included identifying a problem they wanted to address, attempting to address this problem using self-talk in VR, and then continuing the self-talk in VR with the assistance of an LLM-based virtual human. The sessions were conducted with a trained clinical psychologist and followed by semistructured interviews. After the interviews, we used applied thematic analysis to code the data and develop key themes addressing our research objective. RESULTS In total, 4 themes were identified regarding the quality of advice, the potential advantages of human-AI collaboration in self-help, the believability of the virtual human, and user preferences for avatars in the scenario. Participants rated their desire to engage in additional such sessions at 8.3 out of 10, and more than half of the respondents indicated that they preferred VR self-talk with AI rather than without it. On average, the usefulness of the session was rated 6.9 (SD 0.54), and the degree to which it helped solve their problem was rated 6.1 (SD 1.58). Participants specifically noted that human-AI collaboration led to improved outcomes and facilitated more positive thought processes, thereby enhancing self-reflection and problem-solving abilities. CONCLUSIONS This exploratory study suggests that the VR self-talk paradigm can be enhanced by LLM-based agents and presents ways to achieve this, potential pitfalls, and additional insights.
Affiliation(s)
- Moreah Zisquit
- Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Alon Shoa
- Sammy Ofer School of Communications, Reichman University, Herzliya, Israel
- Ramon Oliva
- Event Lab, University of Barcelona, Barcelona, Spain
- Stav Perry
- Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Mel Slater
- Event Lab, University of Barcelona, Barcelona, Spain
- Doron Friedman
- Sammy Ofer School of Communications, Reichman University, Herzliya, Israel
10
Hofstede BM, Askari SI, Lukkien D, Gosetto L, Alberts JW, Tesfay E, ter Stal M, van Hoesel T, Cuijpers RH, Vastenburg MH, Bevilacqua R, Amabili G, Margaritini A, Benadduci M, Guebey J, Trabelsi MA, Ciuffreda I, Casaccia S, IJsselsteijn W, Revel GM, Nap HH. A field study to explore user experiences with socially assistive robots for older adults: emphasizing the need for more interactivity and personalisation. Front Robot AI 2025; 12:1537272. [PMID: 40270913 PMCID: PMC12015597 DOI: 10.3389/frobt.2025.1537272] [Received: 11/30/2024] [Accepted: 02/20/2025] [Indexed: 04/25/2025]
Abstract
Older adults often desire to remain in their homes for as long as possible, and Socially Assistive Robots (SARs) can play a role in supporting this goal. However, the acceptance and adoption rates of SARs remain relatively low, suggesting that current designs may not fully address all user needs. Field studies in Human-Robot Interaction, particularly those involving multiple end-users, remain limited. Nevertheless, such studies are crucial for identifying factors that shape the user experience with SARs, potentially improving their acceptance and adoption in healthcare settings. Therefore, this study aims to explore user perspectives, referred to as factors, that could guide design considerations for SAR development. We conducted a field study with 90 participants across Italy, Switzerland, and the Netherlands to identify these factors and their implications for improving the SAR user experience for older adults and their formal and informal caregivers. SARs were placed in the homes of older adults, and interviews were conducted with the three groups of primary end-users at the beginning, midpoint, and end of the two- to six-week trial period. We initially focused on four factors (personalisation, interactivity, embodiment, and ethical considerations) identified in earlier design phases of the related 3-year Guardian project. Our findings confirmed the importance of these factors while uncovering additional ones; personalisation and interactivity emerged as the most important. Based on our insights, we recommend involving all primary end-users in the SAR research and design process and prioritising field studies to refine design elements. In conclusion, our study identified six factors for SAR design that can enhance the user experience: personalisation, interactivity, embodiment, ethical considerations, connectedness, and dignity.
These findings provide valuable guidance for developing SARs that may better address the needs of older adults and their caregivers.
Affiliation(s)
- Bob M. Hofstede
- Vilans Centre of Expertise for Long-Term Care, Utrecht, Netherlands
- Human-Technology Interaction Group, Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, Netherlands
- Dirk Lukkien
- Vilans Centre of Expertise for Long-Term Care, Utrecht, Netherlands
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands
| | - Laëtitia Gosetto
- EvaLab, Division of Medical Information Science (SIMED), University Hospitals of Geneva (HUG), Geneva, Switzerland
| | - Janna W. Alberts
- Human-Technology Interaction Group, Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, Netherlands
- ConnectedCare Services b.v., Arnhem, Netherlands
| | - Ephrem Tesfay
- Vilans Centre of Expertise for Long-Term Care, Utrecht, Netherlands
| | - Minke ter Stal
- Vilans Centre of Expertise for Long-Term Care, Utrecht, Netherlands
| | - Tom van Hoesel
- Vilans Centre of Expertise for Long-Term Care, Utrecht, Netherlands
| | - Raymond H. Cuijpers
- Human-Technology Interaction Group, Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, Netherlands
| | | | | | | | | | | | - Julie Guebey
- EvaLab, Division of Medical Information Science (SIMED), University Hospitals of Geneva (HUG), Geneva, Switzerland
| | - Mohamed Amine Trabelsi
- EvaLab, Division of Medical Information Science (SIMED), University Hospitals of Geneva (HUG), Geneva, Switzerland
| | - Ilaria Ciuffreda
- Dipartimento di Ingegneria Industriale e Scienze Matematiche, Università Politecnica delle Marche, Ancona, Italy
| | - Sara Casaccia
- Dipartimento di Ingegneria Industriale e Scienze Matematiche, Università Politecnica delle Marche, Ancona, Italy
| | - Wijnand IJsselsteijn
- Human-Technology Interaction Group, Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Gian Marco Revel
- Dipartimento di Ingegneria Industriale e Scienze Matematiche, Università Politecnica delle Marche, Ancona, Italy
| | - Henk Herman Nap
- Vilans Centre of Expertise for Long-Term Care, Utrecht, Netherlands
- Human-Technology Interaction Group, Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, Netherlands
11
Immel D, Hilpert B, Schwarz P, Hein A, Gebhard P, Barton S, Hurlemann R. Patients' and Health Care Professionals' Expectations of Virtual Therapeutic Agents in Outpatient Aftercare: Qualitative Survey Study. JMIR Form Res 2025; 9:e59527. [PMID: 40138692 PMCID: PMC11982758 DOI: 10.2196/59527] [Received: 04/15/2024] [Revised: 12/25/2024] [Accepted: 02/03/2025] [Indexed: 03/29/2025] Open
Abstract
BACKGROUND Depression is a serious mental health condition that can have a profound impact on the individual experiencing the disorder and those providing care. While psychotherapy and medication can be effective, there are gaps in current approaches, particularly in outpatient care. This phase is often associated with a high risk of relapse and readmission, and patients often report a lack of support. Socially interactive agents represent an innovative approach to the provision of assistance. Often powered by artificial intelligence, these virtual agents can interact socially and elicit humanlike emotions. In health care, they are used as virtual therapeutic assistants to fill gaps in outpatient aftercare. OBJECTIVE We aimed to explore the expectations of patients with depression and health care professionals by conducting a qualitative survey. Our analysis focused on research questions related to the appearance and role of the assistant, the assistant-patient interaction (time of interaction, skills and abilities of the assistant, and modes of interaction) and the therapist-assistant interaction. METHODS A 2-part qualitative study was conducted to explore the perspectives of the 2 groups (patients and care providers). In the first step, care providers (n=30) were recruited during a regional offline meeting. After a short presentation, they were given a link and were asked to complete a semistructured web-based questionnaire. Next, patients (n=20) were recruited from a clinic and were interviewed in a semistructured face-to-face interview. RESULTS The survey findings suggested that the assistant should be a multimodal communicator (voice, facial expressions, and gestures) and counteract negative self-evaluation. Most participants preferred a female assistant or wanted the option to choose the gender. In total, 24 (80%) health care professionals wanted a selectable option, while patients exhibited a marked preference for a female or diverse assistant. 
Regarding the patient-assistant interaction, the assistant was seen as a proactive provider of information and the patient as a passive recipient. Gaps in aftercare could be filled by the unlimited availability of the assistant. However, patients should retain their autonomy to avoid dependency. The monitoring of health status was viewed positively by both groups. A biofeedback function was desired to detect early warning signs of disease. When appropriate to the situation, a sense of humor in the assistant was desirable. The desired skills of the assistant can be summarized as providing structure and emotional support, especially warmth and competence to build trust. Consistency was important for the caregiver to appear authentic. Regarding the assistant-care provider interaction, 3 key areas were identified: objective patient status measurement, emergency suicide prevention, and an information tool and decision support system for health care professionals. CONCLUSIONS Overall, the survey provides innovative guidelines for the development of virtual therapeutic assistants to fill the gaps in patient aftercare.
Affiliation(s)
- Diana Immel
- Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Bernhard Hilpert
- Affective Computing Group, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, German Research Center for Artificial Intelligence, Kaiserslautern, Germany
- Leiden Institute of Advanced Computer Science, Leiden University, Snellius Gebouw, Leiden, The Netherlands
- Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Patricia Schwarz
- Assistance Systems and Medical Device Technology, Department for Health Services Research, School of Medicine & Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Andreas Hein
- Assistance Systems and Medical Device Technology, Department for Health Services Research, School of Medicine & Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Patrick Gebhard
- Affective Computing Group, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, German Research Center for Artificial Intelligence, Kaiserslautern, Germany
- Simon Barton
- Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- René Hurlemann
- Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
12
Zubala A, Pease A, Lyszkiewicz K, Hackett S. Art psychotherapy meets creative AI: an integrative review positioning the role of creative AI in art therapy process. Front Psychol 2025; 16:1548396. [PMID: 40181904 PMCID: PMC11965670 DOI: 10.3389/fpsyg.2025.1548396] [Received: 12/19/2024] [Accepted: 03/10/2025] [Indexed: 04/05/2025] Open
Abstract
Background: The rise of artificial intelligence (AI) is promising novel contributions to the treatment and prevention of mental ill health. While research on the use of conversational and embodied AI in psychotherapy practice is developing rapidly, it leaves gaps in understanding of the impact that creative AI might have on art psychotherapy practice specifically. A constructive dialogue between the disciplines of creative AI and art psychotherapy is needed to establish the potential relevance of AI-based technologies to therapeutic practice involving artmaking and creative self-expression. Methods: This integrative review set out to explore whether and how creative AI could enhance the practice of art psychotherapy and other psychological interventions utilizing visual communication and/or artmaking. A transdisciplinary search strategy was developed to capture the latest research across diverse methodologies and stages of development, including reviews, opinion papers, prototype development, and empirical research studies. Findings: Of over 550 records screened, 10 papers were included in this review. Their key characteristics are mapped out on a matrix of the stakeholder groups involved, the elements of interventions belonging to the art therapy (AT) domain, and the types of AI-based technologies involved. Themes of key significance for AT practice are discussed, including cultural adaptability, inclusivity and accessibility, impact on creativity and self-expression, and unpredictability and imperfection. A positioning diagram is proposed to describe the role of AI in AT. AI's role in the therapy process oscillates on a spectrum from being a partner in the co-creative process to taking the role of a curator of personalized visuals with therapeutic intent. Another dimension indicates the level of autonomy, from a supportive tool to an autonomous agent. Examples for each of these situations are identified in the reviewed literature.
Conclusion: While creative AI brings opportunities for new modes of self-expression and an extended reach of art therapy, over-reliance on it presents risks to the therapy process, including loss of agency for clients and therapists. The implications of AI-based technology for the therapeutic relationship in psychotherapy demand further investigation, as do its cultural and psychological impacts, before the relevance of creative AI to art therapy practice can be confirmed.
Affiliation(s)
- Ania Zubala
- Centre for Clinical Brain Sciences, College of Medicine and Veterinary Medicine, University of Edinburgh, Edinburgh, United Kingdom
- Alison Pease
- Department of Computing, School of Science and Engineering, University of Dundee, Dundee, United Kingdom
- Simon Hackett
- Population Health Sciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom
- Cumbria, Northumberland, Tyne and Wear NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
13
Morone G, De Angelis L, Martino Cinnera A, Carbonetti R, Bisirri A, Ciancarelli I, Iosa M, Negrini S, Kiekens C, Negrini F. Artificial intelligence in clinical medicine: a state-of-the-art overview of systematic reviews with methodological recommendations for improved reporting. Front Digit Health 2025; 7:1550731. [PMID: 40110115 PMCID: PMC11920125 DOI: 10.3389/fdgth.2025.1550731] [Received: 12/23/2024] [Accepted: 02/12/2025] [Indexed: 03/22/2025] Open
Abstract
Medicine has become increasingly receptive to the use of artificial intelligence (AI). This overview of systematic reviews (SRs) aims to categorise the current evidence on AI in clinical medicine and to identify the methodological state of the art in the field, proposing a classification of AI models (CLASMOD-AI) to improve future reporting. PubMed/MEDLINE, Scopus, Cochrane Library, EMBASE, and Epistemonikos databases were screened by four blinded reviewers, and all SRs that investigated AI tools in clinical medicine were included. In total, 1923 articles were found; of these, 360 were examined in full text, and 161 SRs met the inclusion criteria. Information on the search strategy, methodology, medical field, and risk of bias was extracted. CLASMOD-AI was based on the input, model, data training, and performance metrics of AI tools. A considerable increase in the number of SRs was observed in the last five years. The most covered field was oncology, accounting for 13.9% of the SRs, with diagnosis as the predominant objective (44.4% of cases). The risk of bias was assessed in 49.1% of the included SRs, yet only 39.2% of these used tools with specific items to assess AI metrics. This overview highlights the need for improved reporting on AI metrics, particularly regarding the training of AI models and dataset quality, as both are essential for a comprehensive quality assessment and for mitigating the risk of bias using specialized evaluation tools.
Affiliation(s)
- Giovanni Morone
- Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, Italy
- San Raffaele Institute of Sulmona, Sulmona, Italy
- Luigi De Angelis
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Italian Society of Artificial Intelligence in Medicine (SIIAM, Società Italiana Intelligenza Artificiale in Medicina), Rome, Italy
- Alex Martino Cinnera
- Scientific Institute for Research, Hospitalisation and Health Care IRCCS Santa Lucia Foundation, Rome, Italy
- Riccardo Carbonetti
- Clinical Area of Neuroscience and Neurorehabilitation, Neurofunctional Rehabilitation Unit, IRCCS "Bambino Gesù" Children's Hospital, Rome, Italy
- Irene Ciancarelli
- Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, Italy
- Marco Iosa
- Scientific Institute for Research, Hospitalisation and Health Care IRCCS Santa Lucia Foundation, Rome, Italy
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Stefano Negrini
- Department of Biomedical, Surgical and Dental Sciences, University 'La Statale', Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Francesco Negrini
- Department of Biotechnology and Life Sciences, University of Insubria, Varese, Italy
- Istituti Clinici Scientifici Maugeri IRCCS, Tradate, Italy
14
Salmi S, Mérelle S, van Eijk N, Gilissen R, van der Mei R, Bhulai S. Real-time assistance in suicide prevention helplines using a deep learning-based recommender system: A randomized controlled trial. Int J Med Inform 2025; 195:105760. [PMID: 39705915 DOI: 10.1016/j.ijmedinf.2024.105760] [Received: 06/27/2024] [Revised: 11/28/2024] [Accepted: 12/09/2024] [Indexed: 12/23/2024]
Abstract
OBJECTIVE To evaluate the effectiveness and usability of an AI-assisted tool in providing real-time assistance to counselors during suicide prevention helpline conversations. METHODS In this RCT, the intervention group used an AI-assisted tool, which generated suggestions based on sentence embeddings (i.e., BERT) of previous successful counseling sessions. Cosine similarity was used to present the top 5 most similar chat situations to the counselors. The control group did not have access to the tool (care as usual). Both groups completed a questionnaire assessing their self-efficacy at the end of each shift. Counselors' usage of the tool was evaluated by measuring the frequency, duration, and content of interactions. RESULTS In total, 48 counselors participated in the experiment: 27 in the experimental condition and 21 in the control condition. Together they rated 188 shifts. No significant difference in self-efficacy was observed between the two groups (p=0.36). However, counselors who used the AI-assisted tool had marginally lower response times and used the tool more often during conversations of longer duration. A deeper analysis of usage showed that the tool was frequently used in inappropriate situations, e.g., after the counselor had already provided a response to the help-seeker, defeating the purpose of the information. When the tool was employed appropriately (64 conversations), it provided usable information in 53 conversations (83%). However, counselors used the tool less frequently at optimal moments, indicating a potential lack of proficiency with AI-assisted tools during helpline conversations or initial trust issues with the system. CONCLUSION The study demonstrates benefits and pitfalls of integrating AI-assisted tools in suicide prevention for improving counselor support. Despite the lack of a significant impact on self-efficacy, the support tool provided usable suggestions, and the frequent use during long conversations suggests counselors may wish to use the tool in complex or challenging interactions.
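The retrieval mechanism this trial describes (BERT sentence embeddings of past sessions, ranked by cosine similarity, top 5 shown to the counselor) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the vectors below are random stand-ins for real BERT sentence embeddings, and `top_k_similar` is a hypothetical helper name.

```python
import numpy as np

def top_k_similar(query_vec, corpus_vecs, k=5):
    """Rank stored conversation situations by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q                    # cosine similarity with each stored situation
    order = np.argsort(-sims)[:k]   # indices of the k most similar situations
    return order, sims[order]

# Toy stand-ins for BERT embeddings of 100 archived chat situations.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(100, 8))
# A query nearly identical to archived situation 42.
query = corpus[42] + 0.01 * rng.normal(size=8)

idx, scores = top_k_similar(query, corpus, k=5)
print(idx[0])  # situation 42 should rank first
```

In a real system the embedding step would replace the random vectors, and the returned indices would be mapped back to the suggestion texts shown in the counselor interface.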
Affiliation(s)
- Salim Salmi
- Research, 113 Suicide Prevention, Paasheuvelweg 25, 1105 BP Amsterdam-Zuidoost, Netherlands; Stochastics, Centrum Wiskunde & Informatica, P.O. Box 94079, 1090 GB Amsterdam, Netherlands; Mathematics, Faculty of Science, Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, Netherlands.
- Saskia Mérelle
- Research, 113 Suicide Prevention, Paasheuvelweg 25, 1105 BP Amsterdam-Zuidoost, Netherlands
- Nikki van Eijk
- Research, 113 Suicide Prevention, Paasheuvelweg 25, 1105 BP Amsterdam-Zuidoost, Netherlands
- Renske Gilissen
- Research, 113 Suicide Prevention, Paasheuvelweg 25, 1105 BP Amsterdam-Zuidoost, Netherlands
- Rob van der Mei
- Stochastics, Centrum Wiskunde & Informatica, P.O. Box 94079, 1090 GB Amsterdam, Netherlands
- Sandjai Bhulai
- Mathematics, Faculty of Science, Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, Netherlands
15
Chan CKY. AI as the Therapist: Student Insights on the Challenges of Using Generative AI for School Mental Health Frameworks. Behav Sci (Basel) 2025; 15:287. [PMID: 40150182 PMCID: PMC11939552 DOI: 10.3390/bs15030287] [Received: 01/22/2025] [Revised: 02/24/2025] [Accepted: 02/25/2025] [Indexed: 03/29/2025] Open
Abstract
The integration of generative AI (GenAI) in school-based mental health services presents new opportunities and challenges. This study focuses on the challenges of using GenAI chatbots as therapeutic tools by exploring secondary school students' perceptions of such applications. The data were collected from students who had both theoretical and practical experience with GenAI. Based on Grodniewicz and Hohol's framework highlighting the "Problem of a Confused Therapist", "Problem of a Non-human Therapist", and "Problem of a Narrowly Intelligent Therapist", qualitative data from student reflections were examined using thematic analysis. The findings revealed that while students acknowledged AI's benefits, such as accessibility and non-judgemental feedback, they expressed significant concerns about a lack of empathy, trust, and adaptability. The implications underscore the need for AI chatbot use to be complemented by in-person counselling, emphasising the importance of human oversight in AI-augmented mental health care. This study contributes to a deeper understanding of how advanced AI can be ethically and effectively incorporated into school mental health frameworks, balancing technological potential with essential human interaction.
16
Hasei J, Hanzawa M, Nagano A, Maeda N, Yoshida S, Endo M, Yokoyama N, Ochi M, Ishida H, Katayama H, Fujiwara T, Nakata E, Nakahara R, Kunisada T, Tsukahara H, Ozaki T. Empowering pediatric, adolescent, and young adult patients with cancer utilizing generative AI chatbots to reduce psychological burden and enhance treatment engagement: a pilot study. Front Digit Health 2025; 7:1543543. [PMID: 40070545 PMCID: PMC11893593 DOI: 10.3389/fdgth.2025.1543543] [Received: 12/11/2024] [Accepted: 02/13/2025] [Indexed: 03/14/2025] Open
Abstract
Background: Pediatric and adolescent/young adult (AYA) cancer patients face profound psychological challenges, exacerbated by limited access to continuous mental health support. While conventional therapeutic interventions often follow structured protocols, the potential of generative artificial intelligence (AI) chatbots to provide continuous conversational support remains unexplored. This study evaluates the feasibility and impact of AI chatbots in alleviating psychological distress and enhancing treatment engagement in this vulnerable population. Methods: Two age-appropriate AI chatbots, leveraging GPT-4, were developed to provide natural, empathetic conversations without structured therapeutic protocols. Five pediatric and AYA cancer patients participated in a two-week intervention, engaging with the chatbots via a messaging platform. Pre- and post-intervention anxiety and stress levels were self-reported, and usage patterns were analyzed to assess the chatbots' effectiveness. Results: Four out of five participants reported significant reductions in anxiety and stress levels post-intervention. Participants engaged with the chatbot every 2-3 days, with sessions lasting approximately 10 min. All participants noted improved treatment motivation, with 80% disclosing personal concerns to the chatbot they had not shared with healthcare providers. The 24/7 availability particularly benefited patients experiencing nighttime anxiety. Conclusions: This pilot study demonstrates the potential of generative AI chatbots to complement traditional mental health services by addressing unmet psychological needs in pediatric and AYA cancer patients. The findings suggest these tools can serve as accessible, continuous support systems. Further large-scale studies are warranted to validate these promising results.
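As a rough sketch of how an unscripted, age-appropriate chatbot of this kind might be configured, the following builds an OpenAI-style chat message list with an empathetic system prompt. Everything here is an assumption for illustration: the prompts, the `build_messages` helper, and the age-group labels are hypothetical, not the study's actual implementation.

```python
# Hypothetical sketch: assemble one conversational turn for an empathetic,
# age-appropriate chatbot. A system prompt stands in for a structured
# therapeutic protocol, giving open conversational guidance instead.

SYSTEM_PROMPTS = {
    "pediatric": (
        "You are a friendly companion for a child undergoing cancer treatment. "
        "Use simple, warm language. Never give medical advice."
    ),
    "aya": (
        "You are a supportive companion for an adolescent/young adult cancer "
        "patient. Be empathetic and conversational. Never give medical advice."
    ),
}

def build_messages(age_group, history, user_text):
    """Build an OpenAI-style message list for a single chatbot turn."""
    messages = [{"role": "system", "content": SYSTEM_PROMPTS[age_group]}]
    messages.extend(history)  # earlier turns keep the conversation continuous
    messages.append({"role": "user", "content": user_text})
    return messages

msgs = build_messages("aya", [], "I couldn't sleep last night, I keep worrying.")
# In a live system, msgs would be sent to a GPT-4 chat-completion endpoint,
# e.g. client.chat.completions.create(model="gpt-4", messages=msgs)
print(len(msgs))  # 2
```

Keeping the message construction separate from the API call makes the prompt logic testable without network access.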
Affiliation(s)
- Joe Hasei
- Department of Medical Information and Assistive Technology Development, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Mana Hanzawa
- Department of Pediatrics, Okayama University Hospital, Okayama, Japan
- Akihito Nagano
- Department of Orthopedic Surgery, Gifu University Graduate School of Medicine, Gifu, Japan
- Naoko Maeda
- Department of Pediatrics, NHO National Hospital Organization Nagoya Medical Center, Nagoya, Japan
- Shinichirou Yoshida
- Department of Orthopedic Surgery, Tohoku University Graduate School of Medicine, Sendai, Japan
- Makoto Endo
- Department of Orthopedic Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Nobuhiko Yokoyama
- Department of Orthopedic Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Motoharu Ochi
- Department of Pediatrics, Okayama University Hospital, Okayama, Japan
- Hisashi Ishida
- Department of Pediatrics, Okayama University Hospital, Okayama, Japan
- Hideki Katayama
- Department of Palliative and Supportive Care, Okayama University Hospital, Okayama, Japan
- Tomohiro Fujiwara
- Science of Functional Recovery and Reconstruction, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Eiji Nakata
- Science of Functional Recovery and Reconstruction, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Ryuichi Nakahara
- Science of Functional Recovery and Reconstruction, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Toshiyuki Kunisada
- Science of Functional Recovery and Reconstruction, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Hirokazu Tsukahara
- Department of Pediatrics, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Toshifumi Ozaki
- Science of Functional Recovery and Reconstruction, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
17
Rahsepar Meadi M, Sillekens T, Metselaar S, van Balkom A, Bernstein J, Batelaan N. Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review. JMIR Ment Health 2025; 12:e60432. [PMID: 39983102 PMCID: PMC11890142 DOI: 10.2196/60432] [Received: 05/10/2024] [Revised: 12/21/2024] [Accepted: 12/23/2024] [Indexed: 02/23/2025] Open
Abstract
BACKGROUND Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns. OBJECTIVE We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues. METHODS We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme. RESULTS We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%) followed by commentaries (n=17, 16.8%). 
The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including topics such as the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the identified articles. CONCLUSIONS Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.
Affiliation(s)
- Mehrdad Rahsepar Meadi
- Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Tomas Sillekens
- GGZ Centraal Mental Health Care, Amersfoort, The Netherlands
- Suzanne Metselaar
- Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Anton van Balkom
- Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Justin Bernstein
- Department of Philosophy, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Neeltje Batelaan
- Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
18
Kling M, Haeussl A, Dalkner N, Fellendorf FT, Lenger M, Finner A, Ilic J, Smolak IS, Stojec L, Zwigl I, Reininghaus EZ. Social robots in adult psychiatry: a summary of utilisation and impact. Front Psychiatry 2025; 16:1506776. [PMID: 40007891 PMCID: PMC11850358 DOI: 10.3389/fpsyt.2025.1506776] [Received: 10/06/2024] [Accepted: 01/20/2025] [Indexed: 02/27/2025] Open
Abstract
Social robots are becoming increasingly prevalent in healthcare, including nursing, geriatric care, and treatment for children on the autism spectrum. Their assistance is believed to hold promise in mitigating the effects of staffing shortages and enhancing current mental health treatment. Nevertheless, the application of social robotics in psychiatry remains restricted and controversial. This scoping review aims to provide an overview of the literature on social robots in adult psychiatry concerning their use, effects, and acceptability. We conducted a literature search in databases including PubMed and PsycINFO to identify literature on robot interventions for adult psychiatric patients. Methodological quality was assessed using the 'Mixed Methods Appraisal Tool'. Usage and target variables were unique to every included original study (N = 7) and suggested a wide range of possible implications for patient treatment and care. Social robots were used to reduce symptoms, improve functioning, and gain insights into characteristic features of specific mental health conditions. The included studies were concerned with the following diagnoses: schizophrenia (N = 3), autism spectrum disorder (N = 2), and intellectual disability (N = 2). The sample sizes were too small to generalise the outcomes, but overall trends showed some positive effects on the selected symptoms. Observations and participant feedback suggested high acceptance and enjoyment among users. Although the evidence regarding the benefits of robotic interventions in adult psychiatry is still limited, it suffices to assume that investing in larger, randomised, and controlled trials is worthwhile and promising. Systematic review registration aspredicted.org, identifier 128766.
Affiliation(s)
- Alfred Haeussl
- Division of Psychiatry and Psychotherapeutic Medicine, Medical University of Graz, Graz, Austria
19
Azmi S, Kunnathodi F, Alotaibi HF, Alhazzani W, Mustafa M, Ahmad I, Anvarbatcha R, Lytras MD, Arafat AA. Harnessing Artificial Intelligence in Obesity Research and Management: A Comprehensive Review. Diagnostics (Basel) 2025; 15:396. [PMID: 39941325 PMCID: PMC11816645 DOI: 10.3390/diagnostics15030396] [Received: 12/12/2024] [Revised: 01/05/2025] [Accepted: 01/31/2025] [Indexed: 02/16/2025] Open
Abstract
Purpose: This review aims to explore the clinical and research applications of artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), in understanding, predicting, and managing obesity. It assesses the use of AI tools to identify obesity-related risk factors, predict outcomes, personalize treatments, and improve healthcare interventions for obesity. Methods: A comprehensive literature search was conducted using PubMed and Google Scholar, with keywords including "artificial intelligence", "machine learning", "deep learning", "obesity", "obesity management", and related terms. Studies focusing on AI's role in obesity research, management, and therapeutic interventions were reviewed, including observational studies, systematic reviews, and clinical applications. Results: This review identifies numerous AI-driven models, such as ML and DL, used in obesity prediction, patient stratification, and personalized management strategies. Applications of AI in obesity research include risk prediction, early detection, and individualization of treatment plans. AI has facilitated the development of predictive models utilizing various data sources, such as genetic, epigenetic, and clinical data. However, AI models vary in effectiveness, influenced by dataset type, research goals, and model interpretability. Performance metrics such as accuracy, precision, recall, and F1-score were evaluated to optimize model selection. Conclusions: AI offers promising advancements in obesity management, enabling more personalized and efficient care. While technology presents considerable potential, challenges such as data quality, ethical considerations, and technical requirements remain. Addressing these will be essential to fully harness AI's potential in obesity research and treatment, supporting a shift toward precision healthcare.
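As a concrete illustration of the model-selection metrics the review mentions (accuracy, precision, recall, F1-score), here is a minimal sketch of how they are computed from binary predictions; the labels below are invented for illustration, not data from any reviewed study.

```python
# Illustrative only: binary classification metrics of the kind the review
# says were used to compare obesity-prediction models.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical labels: 1 = obesity-related outcome present, 0 = absent.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
scores = metrics(y_true, y_pred)  # accuracy 0.8; precision, recall, F1 all 0.75
```

Precision and recall trade off differently depending on the clinical cost of false positives versus missed cases, which is why such reviews report F1 alongside accuracy.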
Collapse
Affiliation(s)
- Sarfuddin Azmi
- Scientific Research Center, Al Hussain bin Ali Street, Ministry of Defense Health Services, Riyadh 12485, Saudi Arabia; (S.A.); (F.K.); (H.F.A.); (W.A.); (M.M.); (I.A.); (R.A.)
| | - Faisal Kunnathodi
- Scientific Research Center, Al Hussain bin Ali Street, Ministry of Defense Health Services, Riyadh 12485, Saudi Arabia; (S.A.); (F.K.); (H.F.A.); (W.A.); (M.M.); (I.A.); (R.A.)
| | - Haifa F. Alotaibi
- Scientific Research Center, Al Hussain bin Ali Street, Ministry of Defense Health Services, Riyadh 12485, Saudi Arabia; (S.A.); (F.K.); (H.F.A.); (W.A.); (M.M.); (I.A.); (R.A.)
- Department of Family Medicine, Prince Sultan Military Medical City, Riyadh 11159, Saudi Arabia
| | - Waleed Alhazzani
- Scientific Research Center, Al Hussain bin Ali Street, Ministry of Defense Health Services, Riyadh 12485, Saudi Arabia; (S.A.); (F.K.); (H.F.A.); (W.A.); (M.M.); (I.A.); (R.A.)
- Critical Care and Internal Medicine Department, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Mohammad Mustafa
- Scientific Research Center, Al Hussain bin Ali Street, Ministry of Defense Health Services, Riyadh 12485, Saudi Arabia; (S.A.); (F.K.); (H.F.A.); (W.A.); (M.M.); (I.A.); (R.A.)
| | - Ishtiaque Ahmad
- Scientific Research Center, Al Hussain bin Ali Street, Ministry of Defense Health Services, Riyadh 12485, Saudi Arabia; (S.A.); (F.K.); (H.F.A.); (W.A.); (M.M.); (I.A.); (R.A.)
| | - Riyasdeen Anvarbatcha
- Scientific Research Center, Al Hussain bin Ali Street, Ministry of Defense Health Services, Riyadh 12485, Saudi Arabia; (S.A.); (F.K.); (H.F.A.); (W.A.); (M.M.); (I.A.); (R.A.)
| | - Miltiades D. Lytras
- Computer Science Department, College of Engineering, Effat University, Jeddah 21478, Saudi Arabia;
- Department of Management, School of Business and Economics, The American College of Greece, 15342 Athens, Greece
| | - Amr A. Arafat
- Scientific Research Center, Al Hussain bin Ali Street, Ministry of Defense Health Services, Riyadh 12485, Saudi Arabia; (S.A.); (F.K.); (H.F.A.); (W.A.); (M.M.); (I.A.); (R.A.)
- Departments of Adult Cardiac Surgery, Prince Sultan Cardiac Center, Riyadh 31982, Saudi Arabia
| |
Collapse
|
20
|
Haber Y, Hadar Shoval D, Levkovich I, Yinon D, Gigi K, Pen O, Angert T, Elyoseph Z. The externalization of internal experiences in psychotherapy through generative artificial intelligence: a theoretical, clinical, and ethical analysis. Front Digit Health 2025; 7:1512273. [PMID: 39968063 PMCID: PMC11832678 DOI: 10.3389/fdgth.2025.1512273] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2024] [Accepted: 01/14/2025] [Indexed: 02/20/2025] Open
Abstract
Introduction Externalization techniques are well established in psychotherapy approaches, including narrative therapy and cognitive behavioral therapy. These methods elicit internal experiences such as emotions and make them tangible through external representations. Recent advances in generative artificial intelligence (GenAI), specifically large language models (LLMs), present new possibilities for therapeutic interventions; however, their integration into core psychotherapy practices remains largely unexplored. This study aimed to examine the clinical, ethical, and theoretical implications of integrating GenAI into the therapeutic space through a proof-of-concept (POC) of AI-driven externalization techniques, while emphasizing the essential role of the human therapist. Methods To this end, we developed two customized GPT agents: VIVI (visual externalization), which uses DALL-E 3 to create images reflecting patients' internal experiences (e.g., depression or hope), and DIVI (dialogic role-play-based externalization), which simulates conversations with aspects of patients' internal content. These tools were implemented and evaluated through a clinical case study under professional psychological guidance. Results The integration of VIVI and DIVI demonstrated that GenAI can serve as an "artificial third", creating a Winnicottian playful space that enhances, rather than supplants, the dyadic therapist-patient relationship. The tools successfully externalized complex internal dynamics, offering new therapeutic avenues, while also revealing challenges such as empathic failures and cultural biases. Discussion These findings highlight both the promise and the ethical complexities of AI-enhanced therapy, including concerns about data security, representation accuracy, and the balance of clinical authority. To address these challenges, we propose the SAFE-AI protocol, offering clinicians structured guidelines for responsible AI integration in therapy. Future research should systematically evaluate the generalizability, efficacy, and ethical implications of these tools across diverse populations and therapeutic contexts.
Collapse
Affiliation(s)
- Yuval Haber
- The Program of Hermeneutics and Cultural Studies, Interdisciplinary Studies Unit, Bar-Ilan University, Jerusalem, Israel
| | - Dorit Hadar Shoval
- Department of Psychology, Max Stern Academic College of Emek Yezreel, Yezreel Valley, Israel
| | - Inbar Levkovich
- Faculty of Education, Tel-Hai Academic College, Kiryat Shmona, Israel
| | - Dror Yinon
- The Program of Hermeneutics and Cultural Studies, Interdisciplinary Studies Unit, Bar-Ilan University, Jerusalem, Israel
| | - Karny Gigi
- Department of Counseling and Human Development, Faculty of Education, University of Haifa, Haifa, Israel
| | - Oori Pen
- Department of Counseling and Human Development, Faculty of Education, University of Haifa, Haifa, Israel
| | - Tal Angert
- Department of Counseling and Human Development, Faculty of Education, University of Haifa, Haifa, Israel
| | - Zohar Elyoseph
- Department of Counseling and Human Development, Faculty of Education, University of Haifa, Haifa, Israel
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
| |
Collapse
|
21
|
Abstract
This article explores the ethical issues arising from ordinary AI applications currently used in mental health care, rather than speculative future scenarios. AI tools are already in use for a variety of purposes, including data collection for screening and intake, documentation, decision support, non-clinical support, and, in limited cases, adjunctive treatment. After reviewing the range of and distinctions between those applications, including when those distinctions become blurred, the article discusses selected ethical considerations. The use of AI in psychiatry raises issues related to reflective practice, the seductive allure of AI, varieties of bias, data security, and liability. These examples highlight how seemingly simple AI applications can still present significant ethical implications, suggesting practical considerations for clinicians, professional organizations, treatment organizations, training programs, and policymakers.
Collapse
Affiliation(s)
- Carl E Fisher
- Department of Psychiatry, Columbia University Irving Medical Center, Columbia University, New York, NY, USA
| |
Collapse
|
22
|
Ogunwale A, Smith A, Fakorede O, Ogunlesi AO. Artificial intelligence and forensic mental health in Africa: a narrative review. Int Rev Psychiatry 2025; 37:3-13. [PMID: 40035373 DOI: 10.1080/09540261.2024.2405174] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/23/2024] [Accepted: 09/12/2024] [Indexed: 03/05/2025]
Abstract
This narrative review examines the integration of Artificial Intelligence (AI) tools into forensic psychiatry in Africa, highlighting possible opportunities and challenges. Specifically, AI may have the potential to augment screening in prisons, risk assessment/management, and forensic-psychiatric treatment, alongside offering benefits for training and research purposes. These use-cases may be particularly advantageous in contexts of forensic practice in Africa, where there remains a need for capacity building and service improvements in jurisdictions affected by distinctive sociolegal and socioeconomic challenges. However, AI can also entail ethical risks associated with misinformation, privacy concerns, and an overreliance on automated systems that need to be considered within implementation and policy planning. Equally, the political and regulatory backdrop surrounding AI in countries in Africa needs to be carefully scrutinised (and, where necessary, strengthened). Accordingly, this review calls for rigorous feasibility studies and the development of training programmes to ensure the effective application of AI in enhancing forensic-psychiatric services in Africa.
Collapse
Affiliation(s)
- A Ogunwale
- Forensic Unit, Department of Clinical Services, Neuropsychiatric Hospital, Aro, Abeokuta, Nigeria
- Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology and Neuroscience, King's College, London, UK
| | - A Smith
- Department of Forensic Psychiatry, University of Bern, Bern, Switzerland
| | - O Fakorede
- Department of Mental Health & Behavioural Medicine, Federal Medical Centre, Abeokuta, Nigeria
| | - A O Ogunlesi
- Retired forensic psychiatrist/former Provost/Medical Director, Neuropsychiatric Hospital, Abeokuta, Nigeria
| |
Collapse
|
23
|
Lee HS, Wright C, Ferranto J, Buttimer J, Palmer CE, Welchman A, Mazor KM, Fisher KA, Smelson D, O’Connor L, Fahey N, Soni A. Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop. Front Psychiatry 2025; 15:1505024. [PMID: 39957757 PMCID: PMC11826059 DOI: 10.3389/fpsyt.2024.1505024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/01/2024] [Accepted: 12/26/2024] [Indexed: 02/18/2025] Open
Abstract
Background Digital mental health interventions, such as artificial intelligence (AI) conversational agents, hold promise for improving access to care by innovating therapy and supporting delivery. However, little research exists on patient perspectives regarding AI conversational agents, which is crucial for their successful implementation. This study aimed to fill the gap by exploring patients' perceptions and acceptability of AI conversational agents in mental healthcare. Methods Adults with self-reported mild to moderate anxiety were recruited from the UMass Memorial Health system. Participants engaged in semi-structured interviews to discuss their experiences, perceptions, and acceptability of AI conversational agents in mental healthcare. Anxiety levels were assessed using the Generalized Anxiety Disorder scale. Data were collected from December 2022 to February 2023, and three researchers conducted rapid qualitative analysis to identify and synthesize themes. Results The sample included 29 adults (ages 19-66), predominantly under age 35, non-Hispanic, White, and female. Participants reported a range of positive and negative experiences with AI conversational agents. Most held positive attitudes towards AI conversational agents, appreciating their utility and potential to increase access to care, yet some also expressed cautious optimism. About half endorsed negative opinions, citing AI's lack of empathy, technical limitations in addressing complex mental health situations, and data privacy concerns. Most participants desired some human involvement in AI-driven therapy and expressed concern about the risk of AI conversational agents being seen as replacements for therapy. A subgroup preferred AI conversational agents for administrative tasks rather than care provision. Conclusions AI conversational agents were perceived as useful and beneficial for increasing access to care, but concerns about AI's empathy, capabilities, safety, and human involvement in mental healthcare were prevalent. Future implementation and integration of AI conversational agents should consider patient perspectives to enhance their acceptability and effectiveness.
Collapse
Affiliation(s)
- Hyein S. Lee
- Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States
| | - Colton Wright
- Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
| | - Julia Ferranto
- Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
| | - Kathleen M. Mazor
- Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
| | - Kimberly A. Fisher
- Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
| | - David Smelson
- Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
| | - Laurel O’Connor
- Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Department of Emergency Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
| | - Nisha Fahey
- Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Department of Pediatrics, University of Massachusetts Chan Medical School, Worcester, MA, United States
| | - Apurv Soni
- Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
| |
Collapse
|
24
|
Zheng A, Long L, Govathson C, Chetty-Makkan C, Morris S, Rech D, Fox MP, Pascoe S. Designing AI-powered healthcare assistants to effectively reach vulnerable populations with health care services: A discrete choice experiment among South African university students. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2025:2025.01.30.25321409. [PMID: 39974107 PMCID: PMC11838649 DOI: 10.1101/2025.01.30.25321409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/21/2025]
Abstract
Introduction South African young adults are at increased risk for HIV acquisition and other non-communicable diseases and face significant barriers to accessing healthcare services. The rapid development of artificial intelligence (AI), in particular AI-powered healthcare assistants (AIPHA), presents a unique opportunity to increase access to health information and linkage to healthcare services and providers. While successful implementation and uptake of such tools require understanding user preferences, limited understanding of these preferences exists. We sought to understand what preferences are important to university students in South Africa when engaging with a hypothetical AIPHA to access health information, using a discrete choice experiment. Methods We conducted an unlabeled, forced-choice discrete choice experiment among adult South African university students through Prolific Academic, an online research platform, in 2024. Each choice option described a hypothetical AIPHA using eight attributes (cost, confidentiality, security, healthcare topics, language, persona, access, services). Participants were presented with ten choice sets, each comprising two choice options, and asked to choose between the two. A conditional logit model was used to evaluate preferences. Results A total of 300 participants were recruited and enrolled. Most were Black, born in South Africa, heterosexual, and working for a wage, with a mean age of 26.5 years (SD: 6.0). The discrete choice experiment identified language, security, and receiving personally tailored advice as the most important attributes of an AIPHA. Participants strongly preferred the ability to communicate with the AIPHA in any South African language of their choosing rather than only English, and to receive information about health topics specific to their context, including information on clinics geographically near them. Results were consistent when stratified by sex and socioeconomic status. Conclusions Participants had strong preferences for security and language, in line with previous studies in which successful uptake and implementation of such health interventions depended on clearly addressing these concerns. These results build the evidence base for how we might effectively engage young adults in healthcare through technology.
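For readers unfamiliar with the analysis, a minimal sketch of the conditional logit used in such two-alternative forced-choice designs follows. With exactly two options per choice set, the model reduces to a binary logit on attribute differences; the attributes ("any language", "secure") and the choices below are synthetic stand-ins, not the study's data or its eight attributes.

```python
import math

# Sketch of a conditional logit for a two-alternative forced-choice DCE.
# With two options per set: P(choose A) = 1 / (1 + exp(-beta . (xA - xB))).
# Each row gives the attribute difference (x_A - x_B) for
# [any_language, secure], and whether option A was chosen (1) or not (0).
diffs   = [(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (1, 0), (0, 1), (-1, -1)]
chose_a = [1, 1, 1, 0, 0, 1, 1, 0]

def fit_conditional_logit(diffs, chose_a, lr=0.1, steps=500):
    """Plain gradient ascent on the binary-logit log-likelihood."""
    beta = [0.0, 0.0]
    for _ in range(steps):
        grad = [0.0, 0.0]
        for x, y in zip(diffs, chose_a):
            u = sum(b * xi for b, xi in zip(beta, x))
            p = 1.0 / (1.0 + math.exp(-u))  # predicted P(choose A)
            for j in range(2):
                grad[j] += (y - p) * x[j]
        beta = [b + lr * g for b, g in zip(beta, grad)]
    return beta

beta = fit_conditional_logit(diffs, chose_a)
# Predicted probability of choosing a profile that adds both features
# over an otherwise identical profile with neither:
p_both = 1.0 / (1.0 + math.exp(-(beta[0] + beta[1])))
```

Positive coefficients indicate attributes that make a profile more likely to be chosen, which is how the study's "most important attributes" are read off the fitted model.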
Collapse
|
25
|
Isop WA. A conceptual ethical framework to preserve natural human presence in the use of AI systems in education. Front Artif Intell 2025; 7:1377938. [PMID: 39906499 PMCID: PMC11790613 DOI: 10.3389/frai.2024.1377938] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2024] [Accepted: 12/18/2024] [Indexed: 02/06/2025] Open
Abstract
In recent years, there has been a remarkable increase in interest in the ethical use of AI systems in education. On one hand, the potential for such systems is undeniable. Used responsibly, they can meaningfully support and enhance the interactive process of teaching and learning. On the other hand, there is a risk that natural human presence may be gradually replaced by arbitrarily created AI systems, particularly due to their rapidly increasing yet partially unguided capabilities. State-of-the-art ethical frameworks suggest high-level principles, requirements, and guidelines, but lack detailed low-level models of concrete processes and corresponding properties of the involved actors in education. In response, this article introduces a detailed Unified Modeling Language (UML)-based ancillary framework that includes a novel set of low-level properties. In particular, the ethical behavior and visual representation of the actors, which are not incorporated in related work, are intended to improve transparency and reduce the potential for misinterpretation and misuse of AI systems. The framework primarily focuses on school education, resulting in a more restrictive model; however, it also reflects on the potential for, and challenges of, flexibly extending it to other educational levels. The article concludes with a discussion of key findings and implications of the presented framework, its limitations, and potential future research directions to sustainably preserve natural human presence in the use of AI systems in education.
Collapse
|
26
|
Kim M, Lee S, Kim S, Heo JI, Lee S, Shin YB, Cho CH, Jung D. Therapeutic Potential of Social Chatbots in Alleviating Loneliness and Social Anxiety: Quasi-Experimental Mixed Methods Study. J Med Internet Res 2025; 27:e65589. [PMID: 39808786 PMCID: PMC11775481 DOI: 10.2196/65589] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2024] [Revised: 11/04/2024] [Accepted: 11/27/2024] [Indexed: 01/16/2025] Open
Abstract
BACKGROUND Artificial intelligence (AI) social chatbots represent a major advancement in merging technology with mental health, offering benefits through natural and emotional communication. Unlike task-oriented chatbots, social chatbots build relationships and provide social support, which can positively impact mental health outcomes like loneliness and social anxiety. However, the specific effects and mechanisms through which these chatbots influence mental health remain underexplored. OBJECTIVE This study explores the mental health potential of AI social chatbots, focusing on their impact on loneliness and social anxiety among university students. The study seeks to (i) assess the impact of engaging with an AI social chatbot in South Korea, "Luda Lee," on these mental health outcomes over a 4-week period and (ii) analyze user experiences to identify perceived strengths and weaknesses, as well as the applicability of social chatbots in therapeutic contexts. METHODS A single-group pre-post study was conducted with university students who interacted with the chatbot for 4 weeks. Measures included loneliness, social anxiety, and mood-related symptoms such as depression, assessed at baseline, week 2, and week 4. Quantitative measures were analyzed using analysis of variance and stepwise linear regression to identify the factors affecting change. Thematic analysis was used to analyze user experiences and assess the perceived benefits and challenges of chatbots. RESULTS A total of 176 participants (88 male; mean age 22.6 years, SD 2.92) took part in the study. Baseline measures indicated slightly elevated levels of loneliness (UCLA Loneliness Scale: mean 27.97, SD 11.07) and social anxiety (Liebowitz Social Anxiety Scale: mean 25.3, SD 14.19) compared to typical university students. Significant reductions were observed, with loneliness decreasing by week 2 (t175=2.55, P=.02) and social anxiety decreasing by week 4 (t175=2.67, P=.01). Stepwise linear regression identified baseline loneliness (β=0.78, 95% CI 0.67 to 0.89), self-disclosure (β=-0.65, 95% CI -1.07 to -0.23), and resilience (β=0.07, 95% CI 0.01 to 0.13) as significant predictors of week 4 loneliness (R2=0.64). Baseline social anxiety (β=0.92, 95% CI 0.81 to 1.03) significantly predicted week 4 anxiety (R2=0.65). These findings indicate that higher baseline loneliness, lower self-disclosure to the chatbot, and higher resilience significantly predicted higher loneliness at week 4. Additionally, higher baseline social anxiety significantly predicted higher social anxiety at week 4. Qualitative analysis highlighted the chatbot's empathy and support as features that fostered a sense of reliability, though issues such as inconsistent responses and excessive enthusiasm occasionally disrupted user immersion. CONCLUSIONS Social chatbots may have the potential to mitigate feelings of loneliness and social anxiety, indicating their possible utility as complementary resources in mental health interventions. User insights emphasize the importance of empathy, accessibility, and structured conversations in achieving therapeutic goals. TRIAL REGISTRATION Clinical Research Information Service (CRIS) KCT0009288; https://tinyurl.com/hxrznt3t.
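As a rough illustration of the stepwise approach reported above (not the authors' code), the following sketch runs forward stepwise linear regression on synthetic data: predictors are added one at a time while R² keeps improving. All variable names, coefficients, and thresholds are invented; in this toy setup only the baseline variable actually drives the outcome.

```python
import random

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def r_squared(cols, y):
    """R^2 of an OLS fit (with intercept) of y on the given predictor columns."""
    m = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(m)]
    k = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(m)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(m)) for a in range(k)]
    beta = solve(XtX, Xty)
    y_hat = [sum(bj * xj for bj, xj in zip(beta, X[i])) for i in range(m)]
    y_bar = sum(y) / m
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def forward_select(predictors, y, min_gain=0.02):
    """Add predictors greedily while R^2 improves by at least min_gain."""
    chosen, remaining, best_r2 = [], dict(predictors), 0.0
    while remaining:
        name, r2 = max(
            ((nm, r_squared([c for _, c in chosen] + [col], y))
             for nm, col in remaining.items()),
            key=lambda t: t[1])
        if r2 - best_r2 < min_gain:
            break
        chosen.append((name, remaining.pop(name)))
        best_r2 = r2
    return [nm for nm, _ in chosen], best_r2

random.seed(7)
n = 200
baseline = [random.gauss(28, 11) for _ in range(n)]      # baseline score
disclosure = [random.gauss(0, 1) for _ in range(n)]      # noise in this toy
resilience = [random.gauss(0, 1) for _ in range(n)]      # noise in this toy
week4 = [0.8 * b + random.gauss(0, 5) for b in baseline]  # outcome

selected, r2 = forward_select(
    {"baseline_loneliness": baseline,
     "self_disclosure": disclosure,
     "resilience": resilience},
    week4)
```

The greedy search picks the baseline variable first because it alone explains most of the outcome variance, mirroring how baseline scores dominated the study's regression models.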
Collapse
Affiliation(s)
- Myungsung Kim
- Graduate School of Health Science and Technology, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
| | - Seonmi Lee
- Graduate School of Health Science and Technology, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
| | - Sieun Kim
- Graduate School of Health Science and Technology, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
| | - Jeong-In Heo
- Graduate School of Health Science and Technology, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
| | - Sangil Lee
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
| | - Yu-Bin Shin
- Department of Psychiatry, Korea University College of Medicine, Seoul, Republic of Korea
| | - Chul-Hyun Cho
- Department of Psychiatry, Korea University College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Informatics, Korea University College of Medicine, Seoul, Republic of Korea
| | - Dooyoung Jung
- Graduate School of Health Science and Technology, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
| |
Collapse
|
27
|
Lu E, Zhang D, Han M, Wang S, He L. The application of artificial intelligence in insomnia, anxiety, and depression: A bibliometric analysis. Digit Health 2025; 11:20552076251324456. [PMID: 40035038 PMCID: PMC11873874 DOI: 10.1177/20552076251324456] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2024] [Accepted: 02/11/2025] [Indexed: 03/05/2025] Open
Abstract
Background Mental health issues like insomnia, anxiety, and depression have increased significantly. Artificial intelligence (AI) has shown promise in diagnosing and providing personalized treatment. Objective This study aims to systematically review the application of AI in addressing insomnia, anxiety, and depression, identifying key research hotspots, and forecasting future trends through bibliometric analysis. Methods We analyzed a total of 875 articles from the Web of Science Core Collection (2000-2024) using bibliometric tools such as VOSviewer and CiteSpace. These tools were used to map research trends, highlight international collaboration, and examine the contributions of leading countries, institutions, and authors in the field. Results The United States and China lead the field in terms of research output and collaborations. Key research areas include "neural networks," "machine learning," "deep learning," and "human-robot interaction," particularly in relation to personalized treatment approaches. However, challenges around data privacy, ethical concerns, and the interpretability of AI models need to be addressed. Conclusions This study highlights the growing role of AI in mental health research and identifies future priorities, such as improving data quality, addressing ethical challenges, and integrating AI more seamlessly into clinical practice. These advancements will be crucial in addressing the global mental health crisis.
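Keyword co-occurrence counting of the kind that underlies VOSviewer and CiteSpace maps can be sketched in a few lines; the keyword lists below are invented for illustration, not drawn from the 875 analyzed articles.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each entry is one paper's keyword list (invented examples).
papers = [
    ["machine learning", "depression", "deep learning"],
    ["machine learning", "anxiety"],
    ["deep learning", "insomnia", "machine learning"],
    ["human-robot interaction", "depression"],
]

# Count how often each unordered keyword pair appears in the same paper;
# sorting makes the pair key order-independent.
cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1
```

Tools like VOSviewer build their cluster maps from exactly this kind of pair-count matrix, then layout and cluster the resulting keyword graph.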
Collapse
Affiliation(s)
- Enshi Lu
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Di Zhang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Mingguang Han
- School of Mathematical Sciences, Peking University, Beijing, China
| | - Shihua Wang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Liyun He
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| |
Collapse
|
28
|
Khan A, Galea S, Mendez I. Five steps for the deployment of artificial intelligence-driven healthcare delivery for remote and indigenous populations in Canada. Digit Health 2025; 11:20552076251334422. [PMID: 40290270 PMCID: PMC12033449 DOI: 10.1177/20552076251334422] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2025] [Accepted: 03/27/2025] [Indexed: 04/30/2025] Open
Abstract
The integration of artificial intelligence (AI) into healthcare delivery offers transformative potential, especially for remote and underserved populations. In rural and remote regions like northern Saskatchewan, Canada, where Indigenous communities face elevated rates of chronic conditions such as diabetes and limited access to healthcare, AI-driven virtual care can bridge critical gaps. However, a universal approach falls short of addressing the unique needs of diverse populations. This communication outlines a five-step framework to guide AI-facilitated healthcare delivery tailored to community-specific demographics and clinical priorities. Steps include building comprehensive community profiles, assessing digital readiness, prioritizing healthcare needs, deploying culturally sensitive virtual care programs, and evaluating outcomes with AI-powered analytics. By leveraging AI in a systematic and inclusive manner, this approach addresses social determinants of health, improves equity, and enhances healthcare quality, offering a scalable model to improve health outcomes in geographically and demographically diverse settings.
Collapse
Affiliation(s)
- Amal Khan
- Department of Community Health and Epidemiology, University of Saskatchewan, Saskatoon, SK, Canada
- Remote Medicine and Robotics, Virtual Health Hub, Saskatoon, SK, Canada
| | - Sandro Galea
- Margaret C. Ryan Dean of the School of Public Health, Washington University in St Louis, St Louis, MO, USA
| | - Ivar Mendez
- Remote Medicine and Robotics, Virtual Health Hub, Saskatoon, SK, Canada
- Department of Surgery, University of Saskatchewan, Saskatoon, SK, Canada
| |
Collapse
|
29
|
Khosravi M, Alzahrani AA, Muhammed TM, Hjazi A, Abbas HH, AbdRabou MA, Mohmmed KH, Ghildiyal P, Yumashev A, Elawady A, Sarabandi S. Management of Refractory Functional Gastrointestinal Disorders: What Role Should Psychiatrists Have? PHARMACOPSYCHIATRY 2025; 58:14-24. [PMID: 38897220 DOI: 10.1055/a-2331-7684] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/21/2024]
Abstract
It is increasingly recognized that psychiatric and psychological problems are equally paramount aspects of the clinical modulation and manifestation of both the central nervous and digestive systems, and that addressing them could help restore balance. The present narrative review aims to provide an elaborate description of the bio-psycho-social facets of refractory functional gastrointestinal disorders, psychiatrists' role, the specific psychiatric approach, and the latest psychiatric and psychological perspectives on practical therapeutic management. To this end, the keywords "psyche," "psychiatry," "psychology," "psychiatrist," "psychotropic," and "refractory functional gastrointestinal disorders" were searched in relevant English publications from January 1, 1950, to March 1, 2024, in the PubMed, Web of Science, Scopus, EMBASE, Cochrane Library, and Google Scholar databases. A narrative technique was then adopted to synthesize the material into a cohesive account. The current literature recognizes brain-gut axis modulation as a therapeutic target for refractory functional gastrointestinal disorders and the bio-psycho-social model as an integrated framework for explaining disease pathogenesis. The results also reveal some evidence supporting the benefits of psychotropic medications and psychological therapies in refractory functional gastrointestinal disorders, even when psychiatric symptoms are absent. Psychiatrists appear to need to pay closer attention to both the assessment and treatment of patients with refractory functional gastrointestinal disorders, alongside educating and training the practitioners who care for these patients.
Collapse
Affiliation(s)
- Mohsen Khosravi
- Department of Psychiatry, School of Medicine, Zahedan University of Medical Sciences, Zahedan, Iran
- Health Promotion Research Center, Zahedan University of Medical Sciences, Zahedan, Iran
| | | | - Thikra M Muhammed
- Department of Biotechnology, College of Applied Sciences, University of Fallujah, Al-anbar, Iraq
| | - Ahmed Hjazi
- Department of Medical Laboratory, College of Applied Medical Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
| | - Huda H Abbas
- National University of Science and Technology, Dhi Qar, Iraq
| | - Mervat A AbdRabou
- Department of Biology, College of Science, Jouf University, Sakaka, Saudi Arabia
| | | | - Pallavi Ghildiyal
- Uttaranchal Institute of Pharmaceutical Sciences, Uttaranchal University, Dehradun, India
| | - Alexey Yumashev
- Department of Prosthetic Dentistry, Sechenov First Moscow State Medical University, Moscow, Russia
| | - Ahmed Elawady
- College of Technical Engineering, The Islamic University, Najaf, Iraq
- College of Technical Engineering, The Islamic University of Al Diwaniyah, Al Diwaniyah, Iraq
- College of Technical Engineering, The Islamic University of Babylon, Babylon, Iraq
| | - Sahel Sarabandi
- Department of Clinical Biochemistry, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
| |
Collapse
|
30
|
Arnout BA, Alshehri SM. Causal Relationships Between the Use of AI, Therapeutic Alliance, and Job Engagement Among Psychological Service Practitioners. Behav Sci (Basel) 2024; 15:21. [PMID: 39851825 PMCID: PMC11762742 DOI: 10.3390/bs15010021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2024] [Revised: 12/21/2024] [Accepted: 12/23/2024] [Indexed: 01/26/2025] Open
Abstract
Despite the significant increase in studies on AI applications in many aspects of life, its applications in mental health services still require further study. This study aimed to test a proposed structural model of the relationships between AI use, therapeutic alliance, and job engagement using PLS-SEM. The descriptive method was applied. The sample consisted of 382 mental health service providers in Saudi Arabia, including 178 men and 204 women aged between 25 and 50 years (36.32 ± 6.43). The Artificial Intelligence Questionnaire, the Therapeutic Alliance Scale, and the Job Engagement Scale were administered. The results showed that the structural model predicted job engagement from AI use and therapeutic alliance, and explained the causal relationships between them, better than the indicator-average and linear models. The study also found a strong, positive, statistically significant overall effect (p < 0.05) of AI use on therapeutic alliance (0.941) and job engagement (0.930), and a moderate, positive, statistically significant overall effect (p < 0.05) of therapeutic alliance on job engagement (0.694). These findings indicate the importance of integrating AI applications and therapeutic alliance skills into training and professional development plans.
Collapse
Affiliation(s)
- Boshra A. Arnout
- Department of Psychology, College of Education, King Khalid University, P.O. Box 2380, Abha 62521, Saudi Arabia
- Department of Psychology, College of Arts, Zagazig University, Zagazig 44511, Egypt
| | - Sami M. Alshehri
- Department of Learning and Instructor, College of Education, King Khalid University, P.O. Box 8685, Abha 61492, Saudi Arabia
| |
Collapse
|
31
|
Alfaraj A, Nagai T, AlQallaf H, Lin WS. Race to the Moon or the Bottom? Applications, Performance, and Ethical Considerations of Artificial Intelligence in Prosthodontics and Implant Dentistry. Dent J (Basel) 2024; 13:13. [PMID: 39851589 PMCID: PMC11763855 DOI: 10.3390/dj13010013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2024] [Revised: 12/09/2024] [Accepted: 12/24/2024] [Indexed: 01/26/2025] Open
Abstract
Objectives: This review aims to explore the applications of artificial intelligence (AI) in prosthodontics and implant dentistry, focusing on its performance outcomes and associated ethical concerns. Materials and Methods: Following the PRISMA guidelines, a search was conducted across databases such as PubMed, Medline, Web of Science, and Scopus. Studies published between January 2022 and May 2024, in English, were considered. The Population (P) included patients or extracted teeth with AI applications in prosthodontics and implant dentistry; the Intervention (I) was AI-based tools; the Comparison (C) was traditional methods, and the Outcome (O) involved AI performance outcomes and ethical considerations. The Newcastle-Ottawa Scale was used to assess the quality and risk of bias in the studies. Results: Out of 3420 initially identified articles, 18 met the inclusion criteria for AI applications in prosthodontics and implant dentistry. The review highlighted AI's significant role in improving diagnostic accuracy, treatment planning, and prosthesis design. AI models demonstrated high accuracy in classifying dental implants and predicting implant outcomes, although limitations were noted in data diversity and model generalizability. Regarding ethical issues, five studies identified concerns such as data privacy, system bias, and the potential replacement of human roles by AI. While patients generally viewed AI positively, dental professionals expressed hesitancy due to a lack of familiarity and regulatory guidelines, highlighting the need for better education and ethical frameworks. Conclusions: AI has the potential to revolutionize prosthodontics and implant dentistry by enhancing treatment accuracy and efficiency. However, there is a pressing need to address ethical issues through comprehensive training and the development of regulatory frameworks. Future research should focus on broadening AI applications and addressing the identified ethical concerns.
Collapse
Affiliation(s)
- Amal Alfaraj
- Department of Prosthodontics and Dental Implantology, College of Dentistry, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, IN 46202, USA
| | - Toshiki Nagai
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, IN 46202, USA
| | - Hawra AlQallaf
- Department of Periodontology, Indiana University School of Dentistry, Indianapolis, IN 46202, USA
| | - Wei-Shao Lin
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, IN 46202, USA
| |
Collapse
|
32
|
Rasa AR. Artificial Intelligence and Its Revolutionary Role in Physical and Mental Rehabilitation: A Review of Recent Advancements. BIOMED RESEARCH INTERNATIONAL 2024; 2024:9554590. [PMID: 39720127 PMCID: PMC11668540 DOI: 10.1155/bmri/9554590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/26/2024] [Revised: 10/23/2024] [Accepted: 12/05/2024] [Indexed: 12/26/2024]
Abstract
The integration of artificial intelligence (AI) technologies into physical and mental rehabilitation has the potential to significantly transform these fields. AI innovations, including machine learning algorithms, natural language processing, and computer vision, offer occupational therapists advanced tools to improve care quality. These technologies facilitate more precise assessments, the development of tailored intervention plans, more efficient treatment delivery, and enhanced outcome evaluation. This review explores the integration of AI across various aspects of rehabilitation, providing a thorough examination of recent advancements and current applications. It highlights how AI applications, such as natural language processing, computer vision, virtual reality, machine learning, and robotics, are shaping the future of physical and mental recovery in occupational therapy.
Collapse
Affiliation(s)
- Amir Rahmani Rasa
- Department of Occupational Therapy, School of Rehabilitation Sciences, Hamadan University of Medical Sciences, Hamadan, Iran
| |
Collapse
|
33
|
Shojaei F, Shojaei F, Osorio Torres J, Shih PC. Insights From Art Therapists on Using AI-Generated Art in Art Therapy: Mixed Methods Study. JMIR Form Res 2024; 8:e63038. [PMID: 39631077 PMCID: PMC11634044 DOI: 10.2196/63038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2024] [Revised: 10/20/2024] [Accepted: 10/21/2024] [Indexed: 10/25/2024] Open
Abstract
Background With the increasing integration of artificial intelligence (AI) into various aspects of daily life, there is a growing interest among designers and practitioners in incorporating AI into their fields. In health care domains like art therapy, AI is also becoming a subject of exploration. However, the use of AI in art therapy is still under investigation, with its benefits and challenges being actively explored. Objective This study aims to investigate the integration of AI into art therapy practices to comprehend its potential impact on therapeutic processes and outcomes. Specifically, the focus is on understanding the perspectives of art therapists regarding the use of AI-assisted tools in their practice with clients, as demonstrated through the presentation of our prototype consisting of a deck of cards with words covering various categories alongside an AI-generated image. Methods Using a co-design approach, 10 art therapists affiliated with the American Art Therapy Association participated in this study. They engaged in individual interviews where they discussed their professional perspectives on integrating AI into their therapeutic approaches and evaluated the prototype. Qualitative analysis was conducted to derive themes and insights from these sessions. Results The study began in August 2023, with data collection involving 10 participants taking place in October 2023. Our qualitative findings provide a comprehensive evaluation of the impact of AI on facilitating therapeutic processes. The combination of a deck of cards and the use of an AI-generated tool demonstrated an enhancement in the quality and accessibility of therapy sessions. However, challenges such as credibility and privacy concerns were also identified. Conclusions The integration of AI into art therapy presents promising avenues for innovation and progress within the field. By gaining insights into the perspectives and experiences of art therapists, this study contributes knowledge for both practical application and further research.
Collapse
Affiliation(s)
- Fereshtehossadat Shojaei
- Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington, 700 N Woodlawn Ave, Bloomington, IN, 47408, United States
| | - Fatemehalsadat Shojaei
- School of Computer Science, State University of New York, Oswego, NY, United States
- Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN, United States
| | - John Osorio Torres
- Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington, 700 N Woodlawn Ave, Bloomington, IN, 47408, United States
| | - Patrick C Shih
- Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington, 700 N Woodlawn Ave, Bloomington, IN, 47408, United States
| |
Collapse
|
34
|
Varghese MA, Sharma P, Patwardhan M. Public Perception on Artificial Intelligence-Driven Mental Health Interventions: Survey Research. JMIR Form Res 2024; 8:e64380. [PMID: 39607994 PMCID: PMC11638687 DOI: 10.2196/64380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2024] [Revised: 09/09/2024] [Accepted: 09/30/2024] [Indexed: 11/30/2024] Open
Abstract
BACKGROUND Artificial intelligence (AI) has become increasingly important in health care, generating both curiosity and concern. With a doctor-patient ratio of 1:834 in India, AI has the potential to alleviate a significant health care burden. Public perception plays a crucial role in shaping attitudes that can facilitate the adoption of new technologies. Similarly, the acceptance of AI-driven mental health interventions is crucial in determining their effectiveness and widespread adoption. Therefore, it is essential to study public perceptions and usage of existing AI-driven mental health interventions by exploring user experiences and opinions on their future applicability, particularly in comparison to traditional, human-based interventions. OBJECTIVE This study aims to explore the use, perception, and acceptance of AI-driven mental health interventions in comparison to traditional, human-based interventions. METHODS A total of 466 adult participants from India voluntarily completed a 30-item web-based survey on the use and perception of AI-based mental health interventions between November and December 2023. RESULTS Of the 466 respondents, only 163 (35%) had ever consulted a mental health professional. Additionally, 305 (65.5%) reported very low knowledge of AI-driven interventions. In terms of trust, 247 (53%) expressed a moderate level of trust in AI-driven mental health interventions, while only 24 (5.2%) reported a high level of trust. By contrast, 114 (24.5%) reported high trust and 309 (66.3%) reported moderate trust in human-based mental health interventions. A total of 242 (51.9%) participants reported a high level of stigma associated with using human-based interventions, compared with only 50 (10.7%) who expressed concerns about stigma related to AI-driven interventions. Additionally, 162 (34.8%) expressed a positive outlook toward the future use and social acceptance of AI-based interventions. The majority of respondents indicated that AI could be a useful option for providing general mental health tips and conducting initial assessments. The key benefits of AI highlighted by participants were accessibility, cost-effectiveness, 24/7 availability, and reduced stigma. Major concerns included data privacy, security, the lack of human touch, and the potential for misdiagnosis. CONCLUSIONS There is a general lack of awareness of AI-driven mental health interventions. However, AI shows potential as a viable option for prevention, primary assessment, and ongoing mental health maintenance. Currently, people tend to trust traditional mental health practices more, and stigma remains a significant barrier to accessing traditional mental health services. The human touch remains an indispensable aspect of human-based mental health care, one that AI cannot replace; however, integrating AI with human mental health professionals is seen as a compelling model. AI is positively perceived in terms of accessibility, availability, and destigmatization. Knowledge and perceived trustworthiness are key factors influencing the acceptance and effectiveness of AI-driven mental health interventions.
Collapse
Affiliation(s)
- Mahima Anna Varghese
- Department of Social Science and Language, Vellore Institute of Technology, Vellore, India
| | - Poonam Sharma
- Department of Social Science and Language, Vellore Institute of Technology, Vellore, India
| | | |
Collapse
|
35
|
Badawy W, Zinhom H, Shaban M. Navigating ethical considerations in the use of artificial intelligence for patient care: A systematic review. Int Nurs Rev 2024. [PMID: 39545614 DOI: 10.1111/inr.13059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2024] [Accepted: 10/19/2024] [Indexed: 11/17/2024]
Abstract
AIM To explore the ethical considerations and challenges faced by nursing professionals in integrating artificial intelligence (AI) into patient care. BACKGROUND AI's integration into nursing practice enhances clinical decision-making and operational efficiency but raises ethical concerns regarding privacy, accountability, informed consent, and the preservation of human-centered care. METHODS A systematic review was conducted, following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Thirteen studies were selected from databases including PubMed, Embase, IEEE Xplore, PsycINFO, and CINAHL. Thematic analysis identified key ethical themes related to AI use in nursing. RESULTS The review highlighted critical ethical challenges, such as data privacy and security, accountability for AI-driven decisions, transparency in AI decision-making, and maintaining the human touch in care. The findings underscore the importance of stakeholder engagement, continuous education for nurses, and robust governance frameworks to guide ethical AI implementation in nursing. DISCUSSION The results align with existing literature on AI's ethical complexities in healthcare. Addressing these challenges requires strengthening nursing competencies in AI, advocating for patient-centered AI design, and ensuring that AI integration upholds ethical standards. CONCLUSION Although AI offers significant benefits for nursing practice, it also introduces ethical challenges that must be carefully managed. Enhancing nursing education, promoting stakeholder engagement, and developing comprehensive policies are essential for ethically integrating AI into nursing. IMPLICATIONS FOR NURSING AI can improve clinical decision-making and efficiency, but nurses must actively preserve humanistic care aspects through ongoing education and involvement in AI governance. IMPLICATIONS FOR HEALTH POLICY Establish ethical frameworks and data protection policies tailored to AI in nursing. Support continuous professional development and allocate resources for the ethical integration of AI in healthcare.
Collapse
Affiliation(s)
- Walaa Badawy
- Department of Psychology, College of Education, King Khaled University, Abha, Saudi Arabia
| | - Haithm Zinhom
- Mohammed Bin Zayed University for Humanities, Abu Dhabi, UAE
| | - Mostafa Shaban
- Community Health Nursing Department, College of Nursing, Jouf University, Sakaka, Saudi Arabia
| |
Collapse
|
36
|
Basu B, Dutta S, Rahaman M, Bose A, Das S, Prajapati J, Prajapati B. The Future of Cystic Fibrosis Care: Exploring AI's Impact on Detection and Therapy. CURRENT RESPIRATORY MEDICINE REVIEWS 2024; 20:302-321. [DOI: 10.2174/011573398x283365240208195944] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2023] [Revised: 01/08/2024] [Accepted: 01/18/2024] [Indexed: 01/03/2025]
Abstract
Cystic fibrosis (CF) is a fatal hereditary condition marked by abnormally thick mucus production, which can cause problems with the digestive and respiratory systems. The quality of life and survival rates of CF patients can be improved by early identification and individualized therapy. With an emphasis on applications in diagnosis and therapy, this paper investigates how artificial intelligence (AI) is transforming the management of CF. AI-powered algorithms are revolutionizing CF diagnosis by utilizing large genetic, clinical, and imaging databases: machine learning methods evaluate genomic profiles to identify CF mutations quickly and precisely, while AI-driven imaging analysis helps to identify lung and gastrointestinal issues linked to CF early and allows for prompt treatment. AI also aids individualized CF therapy by anticipating how patients will respond to already available medications and enabling customized treatment regimens. Drug repurposing algorithms find prospective candidates among already-approved drugs, broadening treatment choices, and AI supports the optimization of pharmacological combinations, enhancing therapeutic results while minimizing side effects. AI further assists patient stratification by matching people with CF mutations to the therapies best suited to their genetic profiles, a tailored strategy that promises improved treatment effectiveness. This review highlights the transformational potential of AI in cystic fibrosis, from early identification to individualized medication, bringing hope for better patient outcomes and, eventually, longer lives for people with this difficult condition.
Collapse
Affiliation(s)
- Biswajit Basu
- Department of Pharmaceutical Technology, School of Health and Medical Sciences, Adamas University, Barasat, Kolkata, West Bengal, 700126, India
| | - Srabona Dutta
- Department of Pharmaceutical Technology, School of Health and Medical Sciences, Adamas University, Barasat, Kolkata, West Bengal, 700126, India
| | - Monosiz Rahaman
- Department of Pharmaceutical Technology, School of Health and Medical Sciences, Adamas University, Barasat, Kolkata, West Bengal, 700126, India
| | - Anirbandeep Bose
- Department of Pharmaceutical Technology, School of Health and Medical Sciences, Adamas University, Barasat, Kolkata, West Bengal, 700126, India
| | - Sourav Das
- School of Pharmacy, The Neotia University, Sarisha, Diamond Harbour, West Bengal, India
| | - Jigna Prajapati
- Acharya Motibhai Patel Institute of Computer Studies, Ganpat University, Mehsana, Gujarat, 384012, India
| | - Bhupendra Prajapati
- S.K. Patel College of Pharmaceutical Education and Research, Ganpat University, Mehsana, Gujarat, 384012, India
| |
Collapse
|
37
|
Lee GC, Platow MJ, Cruwys T. Listening quality leads to greater working alliance and well-being: Testing a social identity model of working alliance. BRITISH JOURNAL OF CLINICAL PSYCHOLOGY 2024; 63:573-588. [PMID: 38946045 DOI: 10.1111/bjc.12489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2024] [Accepted: 06/19/2024] [Indexed: 07/02/2024]
Abstract
OBJECTIVES Characterization of psychotherapy as the "talking cure" de-emphasizes the importance of an active listener to the curative effect of talking. We test whether the working alliance and its benefits emerge from expression of voice, per se, or whether active listening is needed. We examine the role of listening in a social identity model of working alliance. METHODS University student participants in a laboratory experiment spoke about stress management to another person (a confederate student) who either did or did not engage in active listening. Participants reported their perceptions of alliance, key social-psychological variables, and well-being. RESULTS Active listening led to significantly higher ratings of alliance, procedural justice, social identification, and identity leadership, compared to no active listening. Active listening also led to greater positive affect and satisfaction. Ultimately, an explanatory path model was supported in which active listening predicted working alliance through social identification, identity leadership, and procedural justice. CONCLUSIONS Listening quality enhances alliance and well-being in a manner consistent with a social identity model of working alliance, and is a strategy for facilitating alliance in therapy.
Collapse
Affiliation(s)
- Georgina C Lee
- School of Medicine and Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
| | - Michael J Platow
- School of Medicine and Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
| | - Tegan Cruwys
- School of Medicine and Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
| |
Collapse
|
38
|
van Houtum LAEM, Baaré WFC, Beckmann CF, Castro-Fornieles J, Cecil CAM, Dittrich J, Ebdrup BH, Fegert JM, Havdahl A, Hillegers MHJ, Kalisch R, Kushner SA, Mansuy IM, Mežinska S, Moreno C, Muetzel RL, Neumann A, Nordentoft M, Pingault JB, Preisig M, Raballo A, Saunders J, Sprooten E, Sugranyes G, Tiemeier H, van Woerden GM, Vandeleur CL, van Haren NEM. Running in the FAMILY: understanding and predicting the intergenerational transmission of mental illness. Eur Child Adolesc Psychiatry 2024; 33:3885-3898. [PMID: 38613677 PMCID: PMC11588957 DOI: 10.1007/s00787-024-02423-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/15/2023] [Accepted: 03/15/2024] [Indexed: 04/15/2024]
Abstract
Over 50% of children with a parent with severe mental illness will develop mental illness by early adulthood. However, intergenerational transmission of risk for mental illness in one's children is insufficiently considered in clinical practice, nor is it sufficiently utilised in diagnostics and care for children of ill parents. This leads to delays in diagnosing young offspring and missed opportunities for protective actions and resilience strengthening. Prior twin, family, and adoption studies suggest that the aetiology of mental illness is governed by a complex interplay of genetic and environmental factors, potentially mediated by changes in epigenetic programming and brain development. However, how these factors ultimately materialise into mental disorders remains unclear. Here, we present the FAMILY consortium, an interdisciplinary, multimodal (e.g., (epi)genetics, neuroimaging, environment, behaviour), multilevel (e.g., individual-level, family-level), and multisite study funded by a European Union Horizon-Staying-Healthy-2021 grant. FAMILY focuses on understanding and prediction of intergenerational transmission of mental illness, using genetically informed causal inference, multimodal normative prediction, and animal modelling. Moreover, FAMILY applies methods from social sciences to map social and ethical consequences of risk prediction to prepare clinical practice for future implementation. FAMILY aims to deliver: (i) new discoveries clarifying the aetiology of mental illness and the process of resilience, thereby providing new targets for prevention and intervention studies; (ii) a risk prediction model within a normative modelling framework to predict who is at risk for developing mental illness; and (iii) insight into social and ethical issues related to risk prediction to inform clinical guidelines.
Collapse
Affiliation(s)
- Lisanne A E M van Houtum
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
| | - William F C Baaré
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital-Amager and Hvidovre, Copenhagen, Denmark
| | - Christian F Beckmann
- Centre for Functional MRI of the Brain, Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
| | - Josefina Castro-Fornieles
- Department of Child and Adolescent Psychiatry and Psychology, 2021SGR01319, Institut Clinic de Neurociències, Hospital Clínic de Barcelona, FCRB-IDIBAPS, Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Department of Medicine, Institute of Neuroscience, University of Barcelona, Barcelona, Spain
| | - Charlotte A M Cecil
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
- Department of Epidemiology, Erasmus MC, University Medical Centre Rotterdam, Rotterdam, the Netherlands
| | | | - Bjørn H Ebdrup
- Center for Neuropsychiatric Schizophrenia Research and Centre for Clinical Intervention and Neuropsychiatric Schizophrenia Research, Mental Health Centre Glostrup, University of Copenhagen, Glostrup, Denmark
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
| | - Jörg M Fegert
- President European Society for Child and Adolescent Psychiatry (ESCAP), Brussels, Belgium
- Department of Child and Adolescent Psychiatry/Psychotherapy, University Hospital Ulm, Ulm, Germany
| | - Alexandra Havdahl
- PsychGen Centre for Genetic Epidemiology and Mental Health, Norwegian Institute of Public Health, Oslo, Norway
- PROMENTA Research Centre, Department of Psychology, University of Oslo, Oslo, Norway
- Nic Waals Institute, Lovisenberg Diaconal Hospital, Oslo, Norway
| | - Manon H J Hillegers
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
| | - Raffael Kalisch
- Leibniz Institute for Resilience Research, Mainz, Germany
- Neuroimaging Center (NIC), Focus Program Translational Neuroscience (FTN), Johannes Gutenberg University Medical Center, Mainz, Germany
| | - Steven A Kushner
- Department of Psychiatry, Erasmus MC, University Medical Centre Rotterdam, Rotterdam, The Netherlands
| | - Isabelle M Mansuy
- Laboratory of Neuroepigenetics, Medical Faculty, Brain Research Institute, Department of Health Science and Technology of ETH, University of Zurich and Institute for Neuroscience, Zurich, Switzerland
- Zurich Neuroscience Centre, ETH and University of Zurich, Zurich, Switzerland
| | - Signe Mežinska
- Institute of Clinical and Preventive Medicine, University of Latvia, Riga, Latvia
| | - Carmen Moreno
- Department of Child and Adolescent Psychiatry, Institute of Psychiatry and Mental Health, Hospital General Universitario Gregorio Marañón, IiSGM, CIBERSAM, ISCIII, School of Medicine, Universidad Complutense, Madrid, Spain
| | - Ryan L Muetzel
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
- Department of Radiology and Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
| | - Alexander Neumann
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
| | - Merete Nordentoft
- The Lundbeck Foundation Initiative for Integrative Psychiatric Research, Aarhus, Denmark
- Copenhagen Research Centre for Mental Health, Mental Health Centre Copenhagen, Copenhagen University Hospital, Copenhagen, Denmark
| | - Jean-Baptiste Pingault
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
- Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Department of Clinical, Educational and Health Psychology, University College London, London, UK
| | - Martin Preisig
- Psychiatric Epidemiology and Psychopathology Research Centre, Department of Psychiatry, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Andrea Raballo
- Public Health Division, Department of Health and Social Care, Cantonal Socio-Psychiatric Organization, Repubblica e Cantone Ticino, Mendrisio, Switzerland
- Chair of Psychiatry, Faculty of Biomedical Sciences, Università Della Svizzera Italiana, Lugano, Switzerland
| | - John Saunders
- Executive Director, European Federation of Associations of Families of People with Mental Illness (EUFAMI), Louvain, Belgium
| | - Emma Sprooten
- Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
- Department of Human Genetics, Radboud University Medical Centre, Nijmegen, The Netherlands
| | - Gisela Sugranyes
- Department of Child and Adolescent Psychiatry and Psychology, 2021SGR01319, Institut Clinic de Neurociències, Hospital Clínic de Barcelona, FCRB-IDIBAPS, Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Department of Medicine, Institute of Neuroscience, University of Barcelona, Barcelona, Spain
| | - Henning Tiemeier
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
- Department of Social and Behavioural Sciences, Harvard T.H. Chan School of Public Health, Boston, MA, USA
| | - Geeske M van Woerden
- Department of Neuroscience, Erasmus University Medical Centre, Rotterdam, The Netherlands
- ENCORE Expertise Center for Neurodevelopmental Disorders, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Department of Clinical Genetics, Erasmus University Medical Centre, Rotterdam, The Netherlands
| | - Caroline L Vandeleur
- Psychiatric Epidemiology and Psychopathology Research Centre, Department of Psychiatry, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Neeltje E M van Haren
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands.
| |
39
Elyoseph Z, Gur T, Haber Y, Simon T, Angert T, Navon Y, Tal A, Asman O. An Ethical Perspective on the Democratization of Mental Health With Generative AI. JMIR Ment Health 2024; 11:e58011. [PMID: 39417792 PMCID: PMC11500620 DOI: 10.2196/58011]
Abstract
Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates ethical considerations in using generative artificial intelligence (GenAI) for the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and most recently, GenAI technologies such as large language models. The paper highlights why GenAI technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced ethical deliberation. Including GenAI in mental health may allow, among other things, improved accessibility to mental health care, personalized responses, and conceptual flexibility, and could facilitate a flattening of traditional hierarchies between health care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool evaluates both the benefits and the risks, emphasizing the need for a balanced and ethical approach to GenAI integration in mental health. The paper calls for a cautious yet positive approach to GenAI in mental health, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that GenAI advancements are not only technologically sound but also ethically grounded and patient-centered.
Affiliation(s)
- Zohar Elyoseph
- Department of Brain Sciences, Faculty of Medicine, Imperial College, Fulham Palace Rd, London, W6 8RF, United Kingdom
- Faculty of Education, University of Haifa, Haifa, Israel
| | - Tamar Gur
- The Adelson School of Entrepreneurship, Reichman University, Herzliya, Israel
| | - Yuval Haber
- The PhD Program of Hermeneutics & Cultural Studies, Bar-Ilan University, Ramat Gan, Israel
| | - Tomer Simon
- Microsoft Israel R&D Center, Tel Aviv, Israel
| | - Tal Angert
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
| | - Yuval Navon
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
| | - Amir Tal
- Samueli Initiative for Responsible AI in Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Tel Aviv, Israel
| | - Oren Asman
- Samueli Initiative for Responsible AI in Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Tel Aviv, Israel
- Department of Nursing, Faculty of Medical and Health Sciences, Tel Aviv University, Tel Aviv, Israel
| |
40
Das KP, Gavade P. A review on the efficacy of artificial intelligence for managing anxiety disorders. Front Artif Intell 2024; 7:1435895. [PMID: 39479229 PMCID: PMC11523650 DOI: 10.3389/frai.2024.1435895]
Abstract
Anxiety disorders are psychiatric conditions characterized by prolonged and generalized anxiety experienced by individuals in response to various events or situations. At present, anxiety disorders are regarded as the most widespread psychiatric disorders globally. Medication and different types of psychotherapies are employed as the primary therapeutic modalities in clinical practice for the treatment of anxiety disorders. However, combining these two approaches is known to yield more significant benefits than medication alone. Nevertheless, there is a lack of resources and a limited availability of psychotherapy options in underdeveloped areas. Psychotherapy methods encompass relaxation techniques, controlled breathing exercises, visualization exercises, controlled exposure exercises, and cognitive interventions such as challenging negative thoughts. These methods are vital in the treatment of anxiety disorders, but executing them proficiently can be demanding. Moreover, individuals with distinct anxiety disorders are prescribed medications that may cause withdrawal symptoms in some instances. Additionally, there is inadequate availability of face-to-face psychotherapy and a restricted capacity to predict and monitor the health, behavioral, and environmental aspects of individuals with anxiety disorders during the initial phases. In recent years, there has been notable progress in developing and utilizing artificial intelligence (AI)-based applications and environments to improve the precision and sensitivity of diagnosing and treating various categories of anxiety disorders. As a result, this study aims to establish the efficacy of AI-enabled environments in addressing the existing challenges in managing anxiety disorders and reducing reliance on medication, and to investigate the potential advantages, issues, and opportunities of integrating AI-assisted healthcare for anxiety disorders to enable personalized therapy.
Affiliation(s)
- K. P. Das
- Department of Computer Science, Christ University, Bengaluru, India
| | - P. Gavade
- Independent Practitioner, San Francisco, CA, United States
| |
41
Stavropoulos A, Crone DL, Grossmann I. Shadows of wisdom: Classifying meta-cognitive and morally grounded narrative content via large language models. Behav Res Methods 2024; 56:7632-7646. [PMID: 38811519 DOI: 10.3758/s13428-024-02441-0]
Abstract
We investigated large language models' (LLMs) efficacy in classifying complex psychological constructs like intellectual humility, perspective-taking, open-mindedness, and search for a compromise in narratives of 347 Canadian and American adults reflecting on a workplace conflict. Using state-of-the-art models like GPT-4 across few-shot and zero-shot paradigms and RoB-ELoC (RoBERTa-fine-tuned-on-Emotion-with-Logistic-Regression-Classifier), we compared their performance with expert human coders. Results showed robust classification by LLMs, with over 80% agreement and F1 scores above 0.85, and high human-model reliability (median Cohen's κ across top models = .80). RoB-ELoC and few-shot GPT-4 were standout classifiers, although somewhat less effective in categorizing intellectual humility. We offer example workflows for easy integration into research. Our proof-of-concept findings indicate the viability of both open-source and commercial LLMs in automating the coding of complex constructs, potentially transforming social science research.
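The reliability metrics this abstract reports (percent agreement, F1, Cohen's κ between human coders and model labels) can be computed directly from paired label lists. A minimal pure-Python sketch on illustrative toy labels (not the study's data; the function name and example values are our own):

```python
from collections import Counter

def agreement(human, model):
    """Percent agreement, positive-class F1, and Cohen's kappa for paired binary labels."""
    n = len(human)
    # Observed agreement: fraction of items where coder and model assign the same label
    po = sum(h == m for h, m in zip(human, model)) / n
    # Chance agreement expected from the two raters' marginal label frequencies
    ph, pm = Counter(human), Counter(model)
    pe = sum(ph[c] * pm[c] for c in set(human) | set(model)) / n**2
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    # F1 for the positive class (label 1)
    tp = sum(h == m == 1 for h, m in zip(human, model))
    fp = sum(m == 1 and h == 0 for h, m in zip(human, model))
    fn = sum(h == 1 and m == 0 for h, m in zip(human, model))
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return po, f1, kappa

# Toy example: ten narratives coded for one construct by a human and a model
human = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
model = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]
po, f1, kappa = agreement(human, model)
```

Unlike raw agreement, κ discounts the agreement expected by chance from each rater's label frequencies, which is why studies of this kind typically report both.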
Affiliation(s)
| | | | - Igor Grossmann
- Department of Psychology, University of Waterloo, Waterloo, ON N2L 3G1, Canada.
| |
42
Galderisi S, Appelbaum PS, Gill N, Gooding P, Herrman H, Melillo A, Myrick K, Pathare S, Savage M, Szmukler G, Torous J. Ethical challenges in contemporary psychiatry: an overview and an appraisal of possible strategies and research needs. World Psychiatry 2024; 23:364-386. [PMID: 39279422 PMCID: PMC11403198 DOI: 10.1002/wps.21230]
Abstract
Psychiatry shares most ethical issues with other branches of medicine, but also faces special challenges. The Code of Ethics of the World Psychiatric Association offers guidance, but many mental health care professionals are unaware of it and the principles it supports. Furthermore, following codes of ethics is not always sufficient to address ethical dilemmas arising from possible clashes among their principles, and from continuing changes in knowledge, culture, attitudes, and socio-economic context. In this paper, we identify topics that pose difficult ethical challenges in contemporary psychiatry; that may have a significant impact on clinical practice, education and research activities; and that may require revision of the profession's codes of ethics. These include: the relationships between human rights and mental health care, research and training; human rights and mental health legislation; digital psychiatry; early intervention in psychiatry; end-of-life decisions by people with mental health conditions; conflicts of interests in clinical practice, training and research; and the role of people with lived experience and family/informal supporters in shaping the agenda of mental health care, policy, research and training. For each topic, we highlight the ethical concerns, suggest strategies to address them, call attention to the risks that these strategies entail, and highlight the gaps to be narrowed by further research. We conclude that, in order to effectively address current ethical challenges in psychiatry, we need to rethink policies, services, training, attitudes, research methods and codes of ethics, with the concurrent input of a range of stakeholders, open minded discussions, new models of care, and an adequate organizational capacity to roll-out the implementation across routine clinical care contexts, training and research.
Affiliation(s)
| | - Paul S Appelbaum
- Columbia University and New York State Psychiatric Institute, New York, NY, USA
| | - Neeraj Gill
- School of Medicine and Dentistry, Griffith University, Gold Coast, Brisbane, QLD, Australia
- Mental Health Policy Unit, Health Research Institute, University of Canberra, Canberra, ACT, Australia
- Mental Health and Specialist Services, Gold Coast Health, Southport, QLD, Australia
| | - Piers Gooding
- La Trobe Law School, La Trobe University, Melbourne, VIC, Australia
| | - Helen Herrman
- Orygen, Parkville, VIC, Australia
- University of Melbourne, Parkville, VIC, Australia
| | | | - Keris Myrick
- Division of Digital Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Soumitra Pathare
- Centre for Mental Health Law and Policy, Indian Law Society, Pune, India
| | - Martha Savage
- Victoria University of Wellington, School of Geography, Environment and Earth Sciences, Wellington, New Zealand
| | - George Szmukler
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
| | - John Torous
- Department of Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| |
43
Singh S, Gambill JL, Attalla M, Fatima R, Gill AR, Siddiqui HF. Evaluating the Clinical Validity and Reliability of Artificial Intelligence-Enabled Diagnostic Tools in Neuropsychiatric Disorders. Cureus 2024; 16:e71651. [PMID: 39553014 PMCID: PMC11567685 DOI: 10.7759/cureus.71651]
Abstract
Neuropsychiatric disorders (NPDs) pose a substantial burden on the healthcare system. The major challenge in diagnosing NPDs is the subjective assessment by the physician, which can lead to inaccurate and delayed diagnoses. Recent studies suggest that the integration of artificial intelligence (AI) in neuropsychiatry could revolutionize the field by precisely diagnosing complex neurological and mental health disorders in a timely fashion and providing individualized management strategies. In this narrative review, the authors examine the current status of AI tools in assessing neuropsychiatric disorders and evaluate their validity and reliability in the existing literature. The use of machine learning on various datasets, including MRI scans, EEG, facial expressions, social media posts, texts, and laboratory samples, for the accurate diagnosis of neuropsychiatric conditions is explored in depth. Recent trials and tribulations across various neuropsychiatric disorders, and the future scope they suggest for the utility and application of AI, are also discussed. Overall, machine learning has proven feasible and applicable in the field of neuropsychiatry, and it is time for this research to be translated into clinical settings for favorable patient outcomes. Future trials should focus on presenting higher-quality evidence to support wider adoption and on establishing guidelines for healthcare providers to maintain standards.
Affiliation(s)
- Satneet Singh
- Psychiatry, Hampshire and Isle of Wight Healthcare NHS Foundation Trust, Southampton, GBR
| | | | - Mary Attalla
- Medicine, Saba University School of Medicine, The Bottom, NLD
| | - Rida Fatima
- Mental Health, Cwm Taf Morgannwg University Health Board, Pontyclun, GBR
| | - Amna R Gill
- Psychiatry, HSE (Health Service Executive) Ireland, Dublin, IRL
| | - Humza F Siddiqui
- Internal Medicine, Jinnah Postgraduate Medical Centre, Karachi, PAK
| |
44
Hoek S, Metselaar S, Ploem C, Bak M. Promising for patients or deeply disturbing? The ethical and legal aspects of deepfake therapy. J Med Ethics 2024:jme-2024-109985. [PMID: 38981659 DOI: 10.1136/jme-2024-109985]
Abstract
Deepfakes are hyper-realistic but fabricated videos created with the use of artificial intelligence. In the context of psychotherapy, the first studies on using deepfake technology are emerging, with potential applications including grief counselling and treatment for sexual violence-related trauma. This paper explores these applications from the perspective of medical ethics and health law. First, we question whether deepfake therapy can truly constitute good care. Important risks are dangerous situations or 'triggers' to the patient during data collection for the creation of a deepfake, and when deepfake therapy is started, there are risks of overattachment and blurring of reality, which can complicate the grieving process or alter perceptions of perpetrators. Therapists must mitigate these risks, but more research is needed to evaluate deepfake therapy's efficacy before it can be used at all. Second, we address the implications for the person depicted in the deepfake. We describe how privacy and portrait law apply and argue that the legitimate interests of those receiving therapy should outweigh the interests of the depicted, as long as the therapy is an effective and 'last resort' treatment option, overseen by a therapist and the deepfakes are handled carefully. We suggest specific preventative measures that can be taken to protect the depicted person's privacy. Finally, we call for qualitative research with patients and therapists to explore dependencies and other unintended consequences. In conclusion, while deepfake therapy holds promise, the competing interests and ethicolegal complexities demand careful consideration and further investigation alongside the development and implementation of this technology.
Affiliation(s)
- Saar Hoek
- Law Centre for Health and Life, Faculty of Law, University of Amsterdam, Amsterdam, Netherlands
| | - Suzanne Metselaar
- Department of Ethics, Law & Humanities, Amsterdam UMC, Amsterdam, Netherlands
| | - Corrette Ploem
- Law Centre for Health and Life, Faculty of Law, University of Amsterdam, Amsterdam, Netherlands
- Department of Ethics, Law & Humanities, Amsterdam UMC, Amsterdam, Netherlands
| | - Marieke Bak
- Department of Ethics, Law & Humanities, Amsterdam UMC, Amsterdam, Netherlands
- Institute for History and Ethics of Medicine, Technical University of Munich, Munich, Germany
| |
45
Wang N. The role of psychotherapy apps during teaching solo vocals: The specifics of students' psychological preparation for performing in front of an audience. Acta Psychol (Amst) 2024; 249:104417. [PMID: 39121613 DOI: 10.1016/j.actpsy.2024.104417]
Abstract
This study aimed to determine the effectiveness of a self-help application in reducing performance-related anxiety in students of solo vocals in higher education institutions. The study participants (n = 219) used the mobile application for 6 weeks. A statistically significant effect of the intervention was observed for the Negative Cognitions, Psychological Vulnerability, and Anxiety Perception constructs. The study also examines the influence of sociodemographic and personal characteristics on anxiety. Gender, graduate status, and self-efficacy were statistically significant variables in the use of the psychological self-help application. The investigation revealed no significant impact of performance experience. Psychological self-help applications can be used in vocal/music education as a low-threshold intervention to reduce anxiety symptoms. The findings introduce new data into approaches to the treatment of anxiety and expand the understanding of the characteristic features of singer training.
Affiliation(s)
- Ning Wang
- College of Music and Dance, Henan Normal University, No. 46, Jianshe East Road, Xinxiang 453007, Henan Province, China.
| |
46
Webb J. Machine learning, healthcare resource allocation, and patient consent. New Bioeth 2024; 30:206-227. [PMID: 39545564 DOI: 10.1080/20502877.2024.2416858]
Abstract
The impact of machine learning in healthcare on patient informed consent is now the subject of significant inquiry in bioethics. However, the topic has predominantly been considered in the context of black box diagnostic or treatment recommendation algorithms. The impact of machine learning involved in healthcare resource allocation on patient consent remains undertheorized. This paper will establish where patient consent is relevant in healthcare resource allocation, before exploring the impact on informed consent from the introduction of black box machine learning into resource allocation. It will then consider the arguments for informing patients about the use of machine learning in resource allocation, before exploring the challenge of whether individual patients could principally contest algorithmic prioritization decisions involving black box machine learning. Finally, this paper will examine how different forms of opacity in machine learning involved in resource allocation could be a barrier to patient consent to clinical decision-making in different healthcare contexts.
Affiliation(s)
- Jamie Webb
- Centre for Technomoral Futures, University of Edinburgh, Edinburgh, UK
| |
47
Sriharan A, Sekercioglu N, Mitchell C, Senkaiahliyan S, Hertelendy A, Porter T, Banaszak-Holl J. Leadership for AI Transformation in Health Care Organization: Scoping Review. J Med Internet Res 2024; 26:e54556. [PMID: 39009038 PMCID: PMC11358667 DOI: 10.2196/54556]
Abstract
BACKGROUND The leaders of health care organizations are grappling with rising expenses and surging demands for health services. In response, they are increasingly embracing artificial intelligence (AI) technologies to improve patient care delivery, alleviate operational burdens, and efficiently improve health care safety and quality. OBJECTIVE In this paper, we map the current literature and synthesize insights on the role of leadership in driving AI transformation within health care organizations. METHODS We conducted a comprehensive search across several databases, including MEDLINE (via Ovid), PsycINFO (via Ovid), CINAHL (via EBSCO), Business Source Premier (via EBSCO), and Canadian Business & Current Affairs (via ProQuest), spanning articles published from 2015 to June 2023 discussing AI transformation within the health care sector. Specifically, we focused on empirical studies with a particular emphasis on leadership. We used an inductive, thematic analysis approach to qualitatively map the evidence. The findings were reported in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. RESULTS A comprehensive review of 2813 unique abstracts led to the retrieval of 97 full-text articles, with 22 included for detailed assessment. Our literature mapping reveals that successful AI integration within health care organizations requires leadership engagement across technological, strategic, operational, and organizational domains. Leaders must demonstrate a blend of technical expertise, adaptive strategies, and strong interpersonal skills to navigate the dynamic health care landscape shaped by complex regulatory, technological, and organizational factors. CONCLUSIONS Leading AI transformation in health care requires a multidimensional approach, with leadership across technological, strategic, operational, and organizational domains. Organizations should implement a comprehensive leadership development strategy, including targeted training and cross-functional collaboration, to equip leaders with the skills needed for AI integration. Additionally, when upskilling or recruiting AI talent, priority should be given to individuals with a strong mix of technical expertise, adaptive capacity, and interpersonal acumen, enabling them to navigate the unique complexities of the health care environment.
Affiliation(s)
- Abi Sriharan
- Krembil Centre for Health Management and Leadership, Schulich School of Business, York University, Toronto, ON, Canada
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
| | - Nigar Sekercioglu
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
| | - Cheryl Mitchell
- Gustavson School of Business, University of Victoria, Victoria, BC, Canada
| | - Senthujan Senkaiahliyan
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
| | - Attila Hertelendy
- College of Business, Florida International University, Miami, FL, United States
| | - Tracy Porter
- Department of Management, Cleveland State University, Cleveland, OH, United States
| | - Jane Banaszak-Holl
- Department of Health Services Administration, School of Health Professions, University of Alabama at Birmingham, Birmingham, AL, United States
| |
48
Zhang K, Zhou HY, Baptista-Hon DT, Gao Y, Liu X, Oermann E, Xu S, Jin S, Zhang J, Sun Z, Yin Y, Razmi RM, Loupy A, Beck S, Qu J, Wu J, International Consortium of Digital Twins in Medicine. Concepts and applications of digital twins in healthcare and medicine. Patterns (N Y) 2024; 5:101028. [PMID: 39233690 PMCID: PMC11368703 DOI: 10.1016/j.patter.2024.101028]
Abstract
The digital twin (DT) is a concept widely used in industry to create digital replicas of physical objects or systems. The dynamic, bi-directional link between the physical entity and its digital counterpart enables a real-time update of the digital entity. It can predict perturbations related to the physical object's function. The obvious applications of DTs in healthcare and medicine are extremely attractive prospects that have the potential to revolutionize patient diagnosis and treatment. However, challenges including technical obstacles, biological heterogeneity, and ethical considerations make it difficult to achieve the desired goal. Advances in multi-modal deep learning methods, embodied AI agents, and the metaverse may mitigate some difficulties. Here, we discuss the basic concepts underlying DTs, the requirements for implementing DTs in medicine, and their current and potential healthcare uses. We also provide our perspective on five hallmarks for a healthcare DT system to advance research in this field.
Affiliation(s)
- Kang Zhang
- National Clinical Eye Research Center, Eye Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou 325000, China
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau 999078, China
- Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou 325000, China
| | - Hong-Yu Zhou
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02138, USA
| | - Daniel T. Baptista-Hon
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau 999078, China
- School of Medicine, University of Dundee, DD1 9SY Dundee, UK
| | - Yuanxu Gao
- Department of Big Data and Biomedical AI, College of Future Technology, Peking University, Beijing 100000, China
| | - Xiaohong Liu
- Cancer Institute, University College London, WC1E 6BT London, UK
| | - Eric Oermann
- NYU Langone Medical Center, New York University, New York, NY 10016, USA
| | - Sheng Xu
- Department of Chemical Engineering and Nanoengineering, University of California San Diego, San Diego, CA 92093, USA
| | - Shengwei Jin
- Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou 325000, China
- Department of Anesthesia and Critical Care, The Second Affiliated Hospital and Yuying Children’s Hospital, Wenzhou Medical University, Wenzhou 325000, China
| | - Jian Zhang
- National Clinical Eye Research Center, Eye Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Department of Anesthesia and Critical Care, The Second Affiliated Hospital and Yuying Children’s Hospital, Wenzhou Medical University, Wenzhou 325000, China
| | - Zhuo Sun
- Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou 325000, China
| | - Yun Yin
- Faculty of Business and Health Science Institute, City University of Macau, Macau 999078, China
| | | | - Alexandre Loupy
- Université Paris Cité, INSERM U970 PARCC, Paris Institute for Transplantation and Organ Regeneration, 75015 Paris, France
| | - Stephan Beck
- Cancer Institute, University College London, WC1E 6BT London, UK
| | - Jia Qu
- National Clinical Eye Research Center, Eye Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou 325000, China
| | - Joseph Wu
- Cardiovascular Research Institute, Stanford University, Stanford, CA 94305, USA
| | - International Consortium of Digital Twins in Medicine
- National Clinical Eye Research Center, Eye Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou 325000, China
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau 999078, China
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02138, USA
- Department of Big Data and Biomedical AI, College of Future Technology, Peking University, Beijing 100000, China
- Cancer Institute, University College London, WC1E 6BT London, UK
- NYU Langone Medical Center, New York University, New York, NY 10016, USA
- Department of Chemical Engineering and Nanoengineering, University of California San Diego, San Diego, CA 92093, USA
- Department of Anesthesia and Critical Care, The Second Affiliated Hospital and Yuying Children’s Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou 325000, China
- Faculty of Business and Health Science Institute, City University of Macau, Macau 999078, China
- Zoi Capital, New York, NY 10013, USA
- Université Paris Cité, INSERM U970 PARCC, Paris Institute for Transplantation and Organ Regeneration, 75015 Paris, France
- Cardiovascular Research Institute, Stanford University, Stanford, CA 94305, USA
- School of Medicine, University of Dundee, DD1 9SY Dundee, UK
| |
49
Bhugra D, Liebrenz M, Ventriglio A, Ng R, Javed A, Kar A, Chumakov E, Moura H, Tolentino E, Gupta S, Ruiz R, Okasha T, Chisolm MS, Castaldelli-Maia J, Torales J, Smith A. World Psychiatric Association-Asian Journal of Psychiatry Commission on Public Mental Health. Asian J Psychiatr 2024; 98:104105. [PMID: 38861790 DOI: 10.1016/j.ajp.2024.104105]
Abstract
Although there is considerable evidence showing that the prevention of mental illnesses and adverse outcomes and mental health promotion can help people lead better and more functional lives, public mental health remains overlooked in the broader contexts of psychiatry and public health. Likewise, in undergraduate and postgraduate medical curricula, prevention and mental health promotion have often been ignored. However, there has been a recent increase in interest in public mental health, including an emphasis on the prevention of psychiatric disorders and improving individual and community wellbeing to support life trajectories, from childhood through to adulthood and into older age. These lifespan approaches have significant potential to reduce the onset of mental illnesses and the related burdens for the individual and communities, as well as mitigating social, economic, and political costs. Informed by principles of social justice and respect for human rights, this may be especially important for addressing salient problems in communities with distinct vulnerabilities, where prominent disadvantages and barriers for care delivery exist. Therefore, this Commission aims to address these topics, providing a narrative overview of relevant literature and suggesting ways forward. Additionally, proposals for improving mental health and preventing mental illnesses and adverse outcomes are presented, particularly amongst at-risk populations.
Affiliation(s)
- Dinesh Bhugra
- Institute of Psychiatry, Psychology and Neuroscience, King's College London, London SE5 8AF, United Kingdom
- Michael Liebrenz
- Department of Forensic Psychiatry, University of Bern, Bern, Switzerland
- Roger Ng
- World Psychiatric Association, Geneva, Switzerland
- Anindya Kar
- Advanced Neuropsychiatry Institute, Kolkata, India
- Egor Chumakov
- Department of Psychiatry & Addiction, St Petersburg State University, St Petersburg, Russia
- Susham Gupta
- East London NHS Foundation Trust, London, United Kingdom
- Roxanna Ruiz
- Universidad Francisco Marroquín, Guatemala City, Guatemala
- Alexander Smith
- Department of Forensic Psychiatry, University of Bern, Bern, Switzerland
50
Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review. J Med Internet Res 2024; 26:e56930. [PMID: 39042446] [PMCID: PMC11303905] [DOI: 10.2196/56930] Open
Abstract
BACKGROUND Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of health care chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field. OBJECTIVE This review aims to describe health care chatbots' characteristics, focusing on their diverse roles in the health care pathway, user groups, benefits, and limitations. METHODS A rapid review of published literature from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary research studies reporting on chatbot roles or benefits in health care were included. Two reviewers dual-screened the search results. Extracted data on chatbot roles, users, benefits, and limitations were subjected to content analysis. RESULTS The review categorized chatbot roles into 2 themes: delivery of remote health services, including patient support, care management, education, skills building, and health behavior promotion, and provision of administrative assistance to health care providers. User groups spanned patients with chronic conditions as well as patients with cancer; individuals focused on lifestyle improvements; and various demographic groups such as women, families, and older adults. Professionals and students in health care also emerged as significant users, alongside groups seeking mental health support, behavioral change, and educational enhancement. The benefits of health care chatbots were likewise classified into 2 themes: improvement of health care quality and efficiency, and cost-effectiveness in health care delivery. The identified limitations encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts. CONCLUSIONS Health care chatbots offer a wide spectrum of applications, potentially impacting various aspects of health care. While they are promising tools for improving health care efficiency and quality, their integration into the health care system must be approached with consideration of their limitations to ensure optimal, safe, and equitable use.
Affiliation(s)
- Moustafa Laymouna
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Yuanchao Ma
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Department of Biomedical Engineering, Polytechnique Montréal, Montreal, QC, Canada
- David Lessard
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Tibor Schuster
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Kim Engler
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Bertrand Lebouché
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada