1. Tarazi A, Aburrub A, Hijah M. Use of artificial intelligence in neurological disorders diagnosis: A scientometric study. World J Methodol 2025; 15:99403. [DOI: 10.5662/wjm.v15.i3.99403]
Abstract
BACKGROUND Artificial intelligence (AI) has become significantly integrated into healthcare, particularly in the diagnosis of neurological disorders. This advancement has enabled neurologists and physicians to diagnose conditions more quickly and effectively, ultimately benefiting patients.
AIM To explore the current status and key highlights of AI-related articles on the diagnosis of neurological disorders.
METHODS A systematic literature review was conducted in the Web of Science Core Collection database using the following strategy: TS = ("Artificial Intelligence" OR "Computational Intelligence" OR "Machine Learning" OR "AI") AND TS = ("Neurological disorders" OR "CNS disorder" AND "diagnosis"). The search was limited to articles and reviews. Microsoft Excel 2019 and VOSviewer were utilized to identify major contributors, including authors, institutions, countries, and journals. Additionally, VOSviewer was employed to analyze and visualize current trends and hot topics through network visualization maps.
RESULTS A total of 276 publications from 2000 to 2024 were retrieved. The United States, India, and China emerged as the top contributors in this field. Major institutions included Johns Hopkins University, King's College London, and Harvard Medical School. The most prolific author was U. Rajendra Acharya from the University of Southern Queensland (Australia). Among journals, IEEE Access, Scientific Reports, and Sensors were the most productive, while Frontiers in Neuroscience led in total citations. Central topics in AI-related articles on neurological disorders diagnosis included Alzheimer's disease, Parkinson's disease, dementia, epilepsy, autism, attention deficit hyperactivity disorder, and their intersections with deep learning and AI.
CONCLUSION Research on AI's role in diagnosing neurological disorders is gaining wide recognition for its growing importance. AI shows promise in diagnosing various neurological disorders, yet it requires further improvement and extensive future research.
Affiliation(s)
- Alaa Tarazi
- School of Medicine, University of Jordan, Amman 11942, Jordan
- Ahmad Aburrub
- School of Medicine, University of Jordan, Amman 11942, Jordan
- Mohammad Hijah
- School of Medicine, University of Jordan, Amman 11942, Jordan
2. Martínez-Martínez H, Martínez-Alfonso J, Sánchez-Rojo-Huertas B, Reynolds-Cortez V, Turégano-Chumillas A, Meseguer-Ruiz VA, Cekrezi S, Martínez-Vizcaíno V. Perceptions of, Barriers to, and Facilitators of the Use of AI in Primary Care: Systematic Review of Qualitative Studies. J Med Internet Res 2025; 27:e71186. [PMID: 40560641] [DOI: 10.2196/71186]
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to transform primary care by reducing the considerable bureaucratic burden. However, clinicians and patients share concerns regarding data privacy, security, and potential biases in AI algorithms. OBJECTIVE This study aimed to provide an in-depth understanding of primary care professionals' and patients' perceptions of, barriers to, and facilitators of the use of AI in primary care. METHODS We conducted a systematic review of qualitative studies using the MEDLINE (via PubMed), Web of Science, and Scopus databases from inception to June 9, 2024. We used the Sample, Phenomenon of Interest, Design, Evaluation, and Research Type (SPIDER) tool to design the systematic search strategy for qualitative studies. Eligible studies included qualitative analyses (based on interviews, focus groups, or similar methods) of perceptions of, barriers to, and facilitators of the use of AI in primary care, involving primary care professionals or patients. Exclusion criteria included studies on clinical decision support systems, reviews, commentaries, editorials, conference abstracts, and non-English or non-Spanish publications. Methodological quality was assessed using the Joanna Briggs Institute checklist. A thematic synthesis approach was used to structure the results, and the Grading of Recommendations Assessment, Development, and Evaluation-Confidence in the Evidence From Reviews of Qualitative Research (GRADE-CERQual) tool was used to assess confidence in each finding. RESULTS We analyzed 316 participants, including primary care physicians, patients, and other health care professionals, from 13 studies across 6 countries selected from 942 screened records.
We identified four analytical themes using thematic synthesis: (1) change in the physician-patient relationship, highlighting concerns about loss of empathy, connection, and trust; (2) AI as a partner for efficient time and information management, including its potential to improve workflow and decision-making, alongside skepticism about increased workload; (3) data as the cornerstone of AI development, reflecting concerns about data privacy, quality, bias, and corporate responsibility; and (4) barriers to and facilitators of AI in primary care, emphasizing equity, accessibility, and stakeholder co-design. The GRADE-CERQual assessment provided high confidence in all themes except theme 4, which was rated as moderate confidence. CONCLUSIONS This meta-synthesis includes the perspectives of primary care physicians and patients, but further research is needed on the perspectives of other professionals. Moreover, there was heterogeneity in methods and sampling strategies. This first systematic review synthesizing qualitative evidence on AI perceptions in primary care provides a comprehensive understanding of related barriers and facilitators. The key themes identified suggest that AI may help address workload, decision-making, and data management, improving health care efficiency while ensuring ethical, patient-centered care. TRIAL REGISTRATION PROSPERO CRD42024560048; https://www.crd.york.ac.uk/PROSPERO/view/CRD42024560048.
Affiliation(s)
- Héctor Martínez-Martínez
- Health and Social Research Center, Universidad de Castilla-La Mancha, Cuenca, Spain
- Integrated Healthcare Service, Primary Care Center Zona 7-Feria, Albacete, Spain
- Valeria Reynolds-Cortez
- Health and Social Research Center, Universidad de Castilla-La Mancha, Cuenca, Spain
- Preventive Medicine, Hospital Virgen de la Luz, Cuenca, Spain
- Shkelzen Cekrezi
- Health and Social Research Center, Universidad de Castilla-La Mancha, Cuenca, Spain
- Vicente Martínez-Vizcaíno
- Health and Social Research Center, Universidad de Castilla-La Mancha, Cuenca, Spain
- Faculty of Health Sciences, Universidad Autónoma de Chile, Talca, Chile
3. Ran AR, Lui CH, Tham YC, Cheng CY, Lam CY, Cheung WL, Chan ST, Ma HN, Chow RWLC, Yang D, Tang Z, Liu TYA, Tham CC, Cheung CY. The acceptance of ophthalmic artificial intelligence for eye diseases: a literature review and qualitative analysis. Eye (Lond) 2025. [PMID: 40514441] [DOI: 10.1038/s41433-025-03878-z]
Abstract
Thorough investigations of end-users' awareness, acceptance, and concerns about ophthalmic artificial intelligence (AI) are essential to ensure its successful implementation. We conducted a literature review on the acceptance of ophthalmic AI to provide an overall insight and qualitatively analysed the quality of eligible studies using a psychological model. We identified sixteen studies and evaluated them against the model's four primary factors (i.e., performance expectancy, effort expectancy, social influence, and facilitating conditions) and four regulating factors (i.e., gender, age, experience, and voluntariness of use). We found that most of the eligible studies emphasized only performance expectancy and effort expectancy; in-depth discussion of the effects of social influence, facilitating conditions, and relevant regulating factors was relatively inadequate. The overall acceptance of ophthalmic AI among specific groups, such as patients with different eye diseases, experts in ophthalmology, professionals in other fields, and the general population, is high. Nevertheless, more well-designed qualitative studies are still warranted, with clear definitions of acceptance, proper psychological models, larger sample sizes, and representative, multidisciplinary stakeholders worldwide. In addition, given the multifarious concerns about AI, such as the economic burden, patient privacy, model safety, model trustworthiness, public awareness, and proper regulation of accountability issues, it is imperative to focus on evidence-based medicine, conduct high-quality randomized controlled trials, and promote patient education. Comprehensive clinician training, privacy-preserving technologies, and attention to cost-effectiveness are also indispensable to address these concerns and further propel the overall acceptance of ophthalmic AI.
Affiliation(s)
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong SAR, China
- Chun Ho Lui
- Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Yih-Chung Tham
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Ching-Yu Cheng
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Chiu Yu Lam
- Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Wai Lam Cheung
- Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Siu Ting Chan
- Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Hok Ngai Ma
- Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Ziqi Tang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- T Y Alvin Liu
- Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong SAR, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong SAR, China
4. Fairbairn TA, Mullen L, Nicol E, Lip GYH, Schmitt M, Shaw M, Tidbury L, Kemp I, Crooks J, Burnside G, Sharma S, Chauhan A, Liew C, Waidyanatha S, Iyenger S, Beale A, Sunderji I, Greenwood JP, Motwani M, Reid A, Beattie A, Carter J, Haworth P, Bellenger N, Hudson B, Rodrigues J, Watson O, Venugopal V, Bull R, O'Kane P, Deshpande A, McCann GP, Duckett S, Mansoubi H, Parish V, Sehmi J, Rogers C, Mullen S, Weir-McCall J. Implementation of a national AI technology program on cardiovascular outcomes and the health system. Nat Med 2025; 31:1903-1910. [PMID: 40186078] [PMCID: PMC12176617] [DOI: 10.1038/s41591-025-03620-y]
Abstract
Coronary artery disease (CAD) is a major cause of ill health and death worldwide. Coronary computed tomographic angiography (CCTA) is the first-line investigation to detect CAD in symptomatic patients. This diagnostic approach risks greater use of second-line heart tests and treatments, at a cost to the patient and the health system. The National Health Service funded use of an artificial intelligence (AI) diagnostic tool, computed tomography (CT)-derived fractional flow reserve (FFR-CT), in patients with chest pain to improve physician decision-making and reduce downstream tests. This observational cohort study assessed the impact of FFR-CT on cardiovascular outcomes by including all patients investigated with CCTA during the national AI implementation program at 27 hospitals (CCTA n = 90,553 and FFR-CT n = 7,863). FFR-CT was safe, with no difference in all-cause mortality (n = 1,134 (3.2%) versus 1,612 (2.9%), adjusted hazard ratio (aHR) 1.00 (0.93-1.08), P = 0.97) or cardiovascular mortality (n = 465 (1.3%) versus 617 (1.1%), aHR 0.96 (0.85-1.08), P = 0.48), while reducing invasive coronary angiograms (n = 5,720 (16%) versus 8,183 (14.9%), aHR 0.93 (0.90-0.97), P < 0.001) and noninvasive cardiac tests (189/1,000 patients versus 167/1,000, P < 0.001). Implementation of an AI diagnostic tool as part of a health intervention program was safe and beneficial to the patient pathway and health system, with fewer cardiac tests at 2 years.
Affiliation(s)
- Timothy A Fairbairn
- Liverpool Centre for Cardiovascular Science, Liverpool Heart and Chest Hospital, Liverpool, UK
- Institute of Life Course and Medical Sciences, Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK
- Liam Mullen
- Liverpool Centre for Cardiovascular Science, Liverpool Heart and Chest Hospital, Liverpool, UK
- Edward Nicol
- Royal Brompton and Harefield Hospital, Guys and St Thomas' NHS Trust, London, UK
- Department of Cardiovascular Imaging, Faculty of Life Sciences and Medicine, Kings College London, London, UK
- Gregory Y H Lip
- Liverpool Centre for Cardiovascular Science, Liverpool Heart and Chest Hospital, Liverpool, UK
- Institute of Life Course and Medical Sciences, Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK
- Matthew Shaw
- Liverpool Centre for Cardiovascular Science, Liverpool Heart and Chest Hospital, Liverpool, UK
- Laurence Tidbury
- Liverpool Centre for Cardiovascular Science, Liverpool Heart and Chest Hospital, Liverpool, UK
- Ian Kemp
- Liverpool Centre for Cardiovascular Science, Liverpool Heart and Chest Hospital, Liverpool, UK
- Jennifer Crooks
- Liverpool Centre for Cardiovascular Science, Liverpool Heart and Chest Hospital, Liverpool, UK
- Girvan Burnside
- Institute of Population Health, Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK
- Sumeet Sharma
- Ashford and St Peters Hospital NHS Foundation Trust, London, UK
- Anoop Chauhan
- Blackpool Teaching Hospitals NHS Foundation Trusts, Blackpool, UK
- Chee Liew
- Blackpool Teaching Hospitals NHS Foundation Trusts, Blackpool, UK
- Sri Iyenger
- Frimley Health NHS Foundation Trust, Guildford, UK
- Andrew Beale
- Great Western Hospitals NHS Foundation Trust, Swindon, UK
- John P Greenwood
- Leeds Institute of Cardiovascular and Metabolic Medicine, University of Leeds and Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Manish Motwani
- Manchester University NHS Foundation Trust, Manchester, UK
- Anna Reid
- Manchester University NHS Foundation Trust, Manchester, UK
- Anna Beattie
- Newcastle Upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Justin Carter
- North Tees and Hartlepool NHS Foundation Trust, Middlesbrough, UK
- Oliver Watson
- Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Russell Bull
- University Hospital Dorset NHS Trust, Bournemouth, UK
- Peter O'Kane
- University Hospital Dorset NHS Trust, Bournemouth, UK
- Gerald P McCann
- University Hospitals of Leicester NHS Trust, Leicester, UK
- University of Leicester, Leicester, UK
- Simon Duckett
- University Hospitals of North Midlands NHS Trust, Stoke-On-Trent, UK
- Hatef Mansoubi
- University Hospitals Sussex NHS Foundation Trust, Brighton, UK
- Victoria Parish
- University Hospitals Sussex NHS Foundation Trust, Brighton, UK
- Joban Sehmi
- West Hertfordshire Hospital NHS Trust, Watford, UK
- Jonathan Weir-McCall
- Royal Brompton and Harefield Hospital, Guys and St Thomas' NHS Trust, London, UK
- Department of Cardiovascular Imaging, Faculty of Life Sciences and Medicine, Kings College London, London, UK
- Royal Papworth Hospital NHS Foundation Trust, Cambridge, UK
5. Moschogianis S, Darley S, Coulson T, Peek N, Cheraghi-Sohi S, Brown BC. Seven Opportunities for Artificial Intelligence in Primary Care Electronic Visits: Qualitative Study of Staff and Patient Views. Ann Fam Med 2025; 23:214-222. [PMID: 40425478] [PMCID: PMC12120151] [DOI: 10.1370/afm.240292]
Abstract
PURPOSE Increased workload associated with electronic visits (eVisits) in primary care could potentially be decreased by the use of artificial intelligence (AI); however, it is unknown whether this use of AI would be acceptable to staff and patients. We explored patient and primary care staff views on the use of and opportunities for AI during eVisits. METHODS We conducted semistructured interviews and focus groups with primary care staff (n = 16) and patients (n = 37) from primary care practices (n = 14) in northwest England and London using the Patchs eVisits system (Patchs Health Limited; www.patchs.ai) from May 2020 to September 2021. We analyzed verbatim transcripts using thematic analysis. RESULTS Misconceptions regarding AI were common and led to initial reservations about its use during eVisits. Perceived potential AI benefits included decreased staff workload and faster response times for patients. Safety concerns stemmed from the complexity of primary care and fears of a depersonalized service. The following 7 opportunities for AI during eVisits were identified: workflow, directing, prioritization, asking questions, writing assistance, providing self-help information, and face-to-face appointment booking. Despite staff concerns regarding patient acceptability, most patients welcomed the use of AI if it were used as an adjunct to (not a replacement for) clinical judgment and could support them in getting help more quickly. Retention of clinical oversight and ongoing evaluation were key to staff acceptability. CONCLUSIONS Patients and staff welcomed the use of AI and identified 7 potential uses during eVisits to decrease staff workload and improve patient safety. Successful implementation will depend on clear communication from practices, demonstrating and monitoring safety, clarifying misconceptions, and reassuring patients and staff that AI will not replace humans.
Affiliation(s)
- Susan Moschogianis
- School of Health Sciences, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester, United Kingdom
- Sarah Darley
- School of Health Sciences, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester, United Kingdom
- Tessa Coulson
- National Health Service Salford Clinical Commissioning Group, Salford, United Kingdom
- Niels Peek
- The Healthcare Improvement Institute (THIS), Department of Public Health and Primary Care, University of Cambridge, Cambridge, United Kingdom
- Sudeh Cheraghi-Sohi
- School of Health Sciences, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester, United Kingdom
- National Institute for Health and Care Research (NIHR), Greater Manchester Patient Safety Translational Research Centre, School of Health Sciences, University of Manchester, Manchester, United Kingdom
- Benjamin C Brown
- School of Health Sciences, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester, United Kingdom
- Patchs Health, London, United Kingdom
6. Coukan F, Thamm W, Afolabi F, Murray KK, Rathbone AP, Saunders J, Atchison C, Ward H. Co-designing interventions with multiple stakeholders to address barriers and promote equitable access to HIV Pre-Exposure Prophylaxis (PrEP) in Black women in England. BMC Public Health 2025; 25:1831. [PMID: 40382625] [PMCID: PMC12085007] [DOI: 10.1186/s12889-025-23023-5]
Abstract
BACKGROUND Black women are among the populations most underserved by HIV pre-exposure prophylaxis (PrEP) in England, despite a higher risk of HIV acquisition. Previous research has mostly focused on men who have sex with men (MSM), often neglecting Black women, and has concentrated on patient-level barriers while overlooking provider- and system-level factors. This study addresses these gaps by investigating barriers and facilitators to PrEP access involving multiple stakeholders and exploring co-design strategies to tackle these barriers. METHODS The study used a structured two-phased qualitative approach. In Phase 1, focus groups (FGs) were undertaken across three stakeholder streams: Black women, healthcare professionals (HCPs), and a group combining Black women and HCPs. FGs allowed for consensus-building exercises on key barriers and facilitators to PrEP access, and their transcripts were analysed via thematic framework analysis using the Capability, Opportunity, Motivation and Behaviour model of behaviour change. In Phase 2, co-design workshops were conducted with the same stakeholder groups to develop interventions targeting the barrier identified as most important, using the Behaviour Change Wheel framework. Interventions were evaluated against the APEASE criteria. RESULTS Phase 1 identified six key barriers: HIV/PrEP knowledge gaps, restrictive policies, cultural stigma, healthcare system distrust, gendered relationship dynamics, and suboptimal PrEP use. Six facilitators emerged, including improved knowledge, increased accessibility, and addressing discrimination. All stakeholder groups voted for lack of awareness and knowledge as the priority barrier to address. All co-designed interventions consisted of a multimodal PrEP awareness campaign tailored to Black communities, with an emphasis on Black women's involvement to foster trust and engagement.
However, the workshops produced different approaches, with Black women focusing on community-led initiatives, and HCPs advocating for government-backed, broader strategies despite known distrust of institutions. CONCLUSIONS This study highlights the importance of co-designing interventions with Black women to address multi-level barriers to PrEP access. It underscores the need for community education, healthcare system reforms, and the inclusion of Black women in decision-making processes to reduce PrEP equity gaps. The co-designed interventions provided a tailored, context-specific strategy that could improve PrEP uptake among Black women in England.
Affiliation(s)
- Flavien Coukan
- National Institute for Health Research Applied Research Collaboration North West London, Chelsea and Westminster Hospital, London, UK
- Patient Experience Research Centre, School of Public Health, Imperial College London, White City Campus, 90 Wood Lane, London, W12 7TA, UK
- Wezi Thamm
- School of Pharmacy, Newcastle University, Newcastle Upon Tyne, UK
- The Sophia Forum, London, UK
- Hillingdon AIDS Response Trust (HART), London, UK
- Fola Afolabi
- Youth Involvement and Engagement Lab, London, UK
- Keitumetse-Kabelo Murray
- National Institute for Health Research Applied Research Collaboration North West London, Chelsea and Westminster Hospital, London, UK
- Patient Experience Research Centre, School of Public Health, Imperial College London, White City Campus, 90 Wood Lane, London, W12 7TA, UK
- John Saunders
- Blood Safety, Hepatitis, Sexually Transmitted Infections (STI) and HIV Division, UK Health Security Agency, London, UK
- Christina Atchison
- Patient Experience Research Centre, School of Public Health, Imperial College London, White City Campus, 90 Wood Lane, London, W12 7TA, UK
- National Institute for Health Research Imperial Biomedical Research Centre, London, UK
- Helen Ward
- National Institute for Health Research Applied Research Collaboration North West London, Chelsea and Westminster Hospital, London, UK
- Patient Experience Research Centre, School of Public Health, Imperial College London, White City Campus, 90 Wood Lane, London, W12 7TA, UK
- National Institute for Health Research Imperial Biomedical Research Centre, London, UK
7. Arbelaez Ossa L, Rost M, Bont N, Lorenzini G, Shaw D, Elger BS. Exploring Patient Participation in AI-Supported Health Care: Qualitative Study. JMIR AI 2025; 4:e50781. [PMID: 40324765] [PMCID: PMC12089863] [DOI: 10.2196/50781]
Abstract
BACKGROUND The introduction of artificial intelligence (AI) into health care has sparked discussions about its potential impact. Patients, as key stakeholders, will be at the forefront of interacting with and being impacted by AI. Given the ethical importance of patient-centered health care, patients must navigate how they engage with AI. However, integrating AI into clinical practice brings potential challenges, particularly in shared decision-making and ensuring patients remain active participants in their care. Whether AI-supported interventions empower or undermine patient participation depends largely on how these technologies are envisioned and integrated into practice. OBJECTIVE This study explores how patients and medical AI professionals perceive the patient's role and the factors shaping participation in AI-supported care. METHODS We conducted qualitative semistructured interviews with 21 patients and 21 medical AI professionals from different disciplinary backgrounds. Data were analyzed using reflexive thematic analysis. We identified 3 themes that capture how patients and professionals describe the factors shaping participation in AI-supported care. RESULTS The first theme explored the vision of AI as an unavoidable and potentially harmful force of change in health care. The second theme highlights how patients perceive limitations in their capabilities that may prevent them from meaningfully participating in AI-supported care. The third theme describes patients' adaptive responses, such as relying on experts or making value judgments that lead to acceptance or rejection of AI-supported care. CONCLUSIONS Both external and internal preconceptions influence how patients and medical AI professionals perceive patient participation. Patients often internalize AI's complexity and inevitability as an obstacle to their active participation, leading them to feel they have little influence over its development.
While some patients rely on doctors or see AI as something to accept or reject, these strategies risk placing them in a disempowering role as passive recipients of care. Without adequate education on their rights and possibilities, these responses may not be enough to position patients at the center of their care.
Affiliation(s)
- Michael Rost
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- Nathalie Bont
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- Giorgia Lorenzini
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- David Shaw
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- Care and Public Health Research Institute, Maastricht University, Maastricht, The Netherlands
- Bernice Simone Elger
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- Center for Legal Medicine (CURML), University of Geneva, Geneva, Switzerland
8. Berkstresser AM, Hanchard SEL, Iacaboni D, McMilian K, Duong D, Solomon BD, Waikel RL. Artificial intelligence in clinical genetics: current practice and attitudes among the clinical genetics workforce. medRxiv [Preprint] 2025. [PMID: 40343038] [PMCID: PMC12060961] [DOI: 10.1101/2025.04.30.25326673]
Abstract
Purpose Artificial intelligence (AI) applications for clinical genetics hold the potential to improve patient care by supporting diagnostics and management as well as automating administrative tasks, thus enhancing, and potentially enabling, clinician-patient interactions. While the introduction of AI into clinical genetics is increasing, there remain open questions about risks and benefits, and about the readiness of the workforce. Methods To assess the current clinical genetics workforce's use, knowledge, and attitudes toward available medical AI applications, we conducted a survey involving 215 US-based genetics clinicians and trainees. Results Over half (51.2%) of participants reported little to no knowledge of AI in clinical genetics, and 64.3% reported no formal training in AI applications. Formal training correlated directly with self-reported knowledge of AI in clinical genetics: 69.3% of respondents with formal training reported intermediate to extensive knowledge of AI vs. 37.5% without formal training. Most participants reported that they lacked sufficient knowledge of clinical AI (83.4%), agreed that there should be more education in this area (97.6%), and would take a course if offered (89.3%). The majority (51.6%) of clinician participants said they never used AI applications in the clinic. However, after a tutorial describing clinical AI applications, 75.8% reported some use of AI applications in the clinic. When asked specifically about clinical AI application usage, the majority of clinician participants used facial diagnostic applications (54.9%) and AI-generated genomic testing results (62.1%), whereas other applications such as chatbots, large language models (LLMs), pedigree or medical summary generators, and risk assessment were used by only a fraction of the clinicians, ranging from 11.1% to 12.5%. Nearly all participants (94.6%) reported clinical genetics professionals as being overburdened.
Conclusion Further clinician education is both desired and needed to optimally utilize clinical AI applications with the potential to enhance patient care and alleviate the current strain on genetics clinics.
Affiliation(s)
- Amanda M Berkstresser
- Genetic Counseling Program, School of Health & Natural Sciences, Bay Path University, Longmeadow, Massachusetts, United States of America
- Daniela Iacaboni
- Genetic Counseling Program, School of Health & Natural Sciences, Bay Path University, Longmeadow, Massachusetts, United States of America
- Kevin McMilian
- Cumoratek Consulting, Kansas City, Missouri, United States of America
- Dat Duong
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
- Benjamin D. Solomon
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
- Rebekah L. Waikel
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
9
Xie SJ, Spice C, Wedgeworth P, Langevin R, Lybarger K, Singh AP, Wood BR, Klein JW, Hsieh G, Duber HC, Hartzler AL. Patient and clinician acceptability of automated extraction of social drivers of health from clinical notes in primary care. J Am Med Inform Assoc 2025; 32:855-865. [PMID: 40085013 PMCID: PMC12012364 DOI: 10.1093/jamia/ocaf046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2024] [Revised: 02/26/2025] [Accepted: 03/05/2025] [Indexed: 03/16/2025] Open
Abstract
OBJECTIVE Artificial Intelligence (AI)-based approaches for extracting Social Drivers of Health (SDoH) from clinical notes offer healthcare systems an efficient way to identify patients' social needs, yet we know little about the acceptability of this approach to patients and clinicians. We investigated patient and clinician acceptability through interviews. MATERIALS AND METHODS We interviewed primary care patients experiencing social needs (n = 19) and clinicians (n = 14) about their acceptability of "SDoH autosuggest," an AI-based approach for extracting SDoH from clinical notes. We presented storyboards depicting the approach and asked participants to rate their acceptability and discuss their rationale. RESULTS Participants rated SDoH autosuggest moderately acceptable (mean = 3.9/5 patients; mean = 3.6/5 clinicians). Patients' ratings varied across domains, with substance use rated most and employment rated least acceptable. Both groups raised concern about information integrity, actionability, impact on clinical interactions and relationships, and privacy. In addition, patients raised concern about transparency, autonomy, and potential harm, whereas clinicians raised concern about usability. DISCUSSION Despite reporting moderate acceptability of the envisioned approach, patients and clinicians expressed multiple concerns about AI systems that extract SDoH. Participants emphasized the need for high-quality data, non-intrusive presentation methods, and clear communication strategies regarding sensitive social needs. Findings underscore the importance of engaging patients and clinicians to mitigate unintended consequences when integrating AI approaches into care. CONCLUSION Although AI approaches like SDoH autosuggest hold promise for efficiently identifying SDoH from clinical notes, they must also account for concerns of patients and clinicians to ensure these systems are acceptable and do not undermine trust.
Affiliation(s)
- Serena Jinchen Xie
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
- Carolin Spice
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
- Patrick Wedgeworth
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
- Raina Langevin
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
- Kevin Lybarger
- Information Sciences and Technology, George Mason University, Fairfax, VA 22030, United States
- Angad Preet Singh
- Department of Medicine, University of Washington, Seattle, WA 98195, United States
- Brian R Wood
- Department of Medicine, University of Washington, Seattle, WA 98195, United States
- Jared W Klein
- Department of Medicine, University of Washington, School of Medicine, Seattle, WA 98195, United States
- Gary Hsieh
- Human Centered Design & Engineering, University of Washington, Seattle, WA 98195, United States
- Herbert C Duber
- Washington State Department of Health, Olympia, WA 98501, United States
- Department of Emergency Medicine, University of Washington, Seattle, WA 98195, United States
- Andrea L Hartzler
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
10
Salybekov AA, Yerkos A, Sedlmayr M, Wolfien M. Ethics and Algorithms to Navigate AI's Emerging Role in Organ Transplantation. J Clin Med 2025; 14:2775. [PMID: 40283605 PMCID: PMC12027807 DOI: 10.3390/jcm14082775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2025] [Revised: 04/14/2025] [Accepted: 04/15/2025] [Indexed: 04/29/2025] Open
Abstract
Background/Objectives: Solid organ transplantation remains a critical life-saving treatment for end-stage organ failure, yet it faces persistent challenges, such as organ scarcity, graft rejection, and postoperative complications. Artificial intelligence (AI) has the potential to address these challenges by revolutionizing transplantation practices. Methods: This review article explores the diverse applications of AI in solid organ transplantation, focusing on its impact on diagnostics, treatment, and the evolving market landscape. We discuss how machine learning, deep learning, and generative AI are harnessing vast datasets to predict transplant outcomes, personalize immunosuppressive regimens, and optimize patient selection. Additionally, we examine the ethical implications of AI in transplantation and highlight promising AI-driven innovations nearing FDA evaluation. Results: AI improves organ allocation processes, refines predictions of transplant outcomes, and enables tailored immunosuppressive regimens. These advancements contribute to better patient selection and enhance overall transplant success rates. Conclusions: By bridging the gap in organ availability and improving long-term transplant success, AI holds promise to significantly advance the field of solid organ transplantation.
Affiliation(s)
- Amankeldi A. Salybekov
- Regenerative Medicine Division, Cell and Gene Therapy Department, Qazaq Institute of Innovative Medicine, Astana 020000, Kazakhstan
- Ainur Yerkos
- Department of Computer Science, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Martin Sedlmayr
- Institute for Medical Informatics and Biometry, Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, 01069 Dresden, Germany
- Markus Wolfien
- Institute for Medical Informatics and Biometry, Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, 01069 Dresden, Germany
- Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), 01069 Dresden, Germany
11
Stroud AM, Minteer SA, Zhu X, Ridgeway JL, Miller JE, Barry BA. Patient information needs for transparent and trustworthy cardiovascular artificial intelligence: A qualitative study. PLOS DIGITAL HEALTH 2025; 4:e0000826. [PMID: 40258073 PMCID: PMC12011294 DOI: 10.1371/journal.pdig.0000826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2024] [Accepted: 03/17/2025] [Indexed: 04/23/2025]
Abstract
As health systems incorporate artificial intelligence (AI) into various aspects of patient care, there is growing interest in understanding how to ensure transparent and trustworthy implementation. However, little attention has been given to what information patients need about these technologies to promote transparency of their use. We conducted three asynchronous online focus groups with 42 patients across the United States discussing perspectives on their information needs for trust and uptake of AI, focusing on its use in cardiovascular care. Data were analyzed using a rapid content analysis approach. Our results suggest that patients have a set of core information needs, including specific factors pertaining to the AI tool, its oversight, and the healthcare experience, that are relevant to calibrating trust, as well as perspectives concerning information delivery, disclosure, consent, and physician use of AI. Identifying patient information needs is a critical starting point for calibrating trust in healthcare AI systems and designing strategies for information delivery. These findings highlight the importance of patient-centered engagement when developing AI model documentation and when communicating and providing information about these technologies in clinical encounters.
Affiliation(s)
- Austin M. Stroud
- Biomedical Ethics Program, Mayo Clinic, Rochester, Minnesota, United States of America
- Sarah A. Minteer
- Physical Medicine and Rehabilitation Research, Mayo Clinic, Rochester, Minnesota, United States of America
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, Minnesota, United States of America
- Xuan Zhu
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, Minnesota, United States of America
- Jennifer L. Ridgeway
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, Minnesota, United States of America
- Division of Health Care Delivery Research, Mayo Clinic, Rochester, Minnesota, United States of America
- Jennifer E. Miller
- Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America
- Barbara A. Barry
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, Minnesota, United States of America
- Division of Health Care Delivery Research, Mayo Clinic, Rochester, Minnesota, United States of America
12
Frost EK, Aquino YSJ, Braunack‐Mayer A, Carter SM. Understanding Public Judgements on Artificial Intelligence in Healthcare: Dialogue Group Findings From Australia. Health Expect 2025; 28:e70185. [PMID: 40150867 PMCID: PMC11949843 DOI: 10.1111/hex.70185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2024] [Revised: 01/29/2025] [Accepted: 02/04/2025] [Indexed: 03/29/2025] Open
Abstract
INTRODUCTION There is a rapidly increasing number of applications of healthcare artificial intelligence (HCAI). Alongside this, a new field of research is investigating public support for HCAI. We conducted a study to identify the conditions of Australians' support for HCAI, with an emphasis on identifying the instances where using AI in healthcare systems was seen as acceptable or unacceptable. METHODS We conducted eight dialogue groups with 47 Australians, aiming for diversity in age, gender, working status, and experience with information and communication technologies. The moderators encouraged participants to discuss the reasons and conditions for their support for AI in health care. RESULTS Most participants were conditionally supportive of HCAI. The participants felt strongly that AI should be developed, implemented, and controlled with patient interests in mind. They supported HCAI principally as an informational tool and hoped that it would empower people by enabling greater access to personalised information about their health. They were opposed to HCAI as a decision-making tool or as a replacement for physician-patient interaction. CONCLUSION Our findings indicate that Australians support HCAI as a tool that enhances rather than replaces human decision-making in health care. Australians value HCAI as an epistemic tool that can expand access to personalised health information but remain cautious about its use in clinical decision-making. Developers of HCAI tools should consider Australians' preferences for AI tools that provide epistemic resources, and their aversion to tools that make decisions autonomously or replace interactions with their physicians. PATIENT OR PUBLIC CONTRIBUTION Members of the public were participants in this study. The participants contributed by sharing their views and judgements.
Affiliation(s)
- Emma K. Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Social Science, Faculty of the Arts, Social Science and Humanities, University of Wollongong, Gwynneville, New South Wales, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Social Science, Faculty of the Arts, Social Science and Humanities, University of Wollongong, Gwynneville, New South Wales, Australia
- Annette Braunack‐Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Social Science, Faculty of the Arts, Social Science and Humanities, University of Wollongong, Gwynneville, New South Wales, Australia
- Stacy M. Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Social Science, Faculty of the Arts, Social Science and Humanities, University of Wollongong, Gwynneville, New South Wales, Australia
13
Khanna R, Raison N, Granados Martinez A, Ourselin S, Montorsi F, Briganti A, Dasgupta P. At the cutting edge: the potential of autonomous surgery and challenges faced. BMJ SURGERY, INTERVENTIONS, & HEALTH TECHNOLOGIES 2025; 7:e000338. [PMID: 40166699 PMCID: PMC11956393 DOI: 10.1136/bmjsit-2024-000338] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2024] [Accepted: 02/18/2025] [Indexed: 04/02/2025] Open
Affiliation(s)
- Raghav Khanna
- Faculty of Life Sciences and Medicine, King’s College London, London, England, UK
- Nicholas Raison
- King’s College London Faculty of Life Sciences & Medicine, London, UK
- King’s College Hospital NHS Foundation Trust, London, UK
- Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Prokar Dasgupta
- King’s College London Faculty of Life Sciences & Medicine, London, UK
- Department of Urology, Guy's and St Thomas’ Hospitals NHS Trust, London, UK
14
Davis VH, Qiang JR, Adekoya MacCarthy I, Howse D, Seshie AZ, Kosowan L, Delahunty-Pike A, Abaga E, Cooney J, Robinson M, Senior D, Zsager A, Aubrey-Bassler K, Irwin M, Jackson LA, Katz A, Marshall EG, Muhajarine N, Neudorf C, Garies S, Pinto AD. Perspectives on Using Artificial Intelligence to Derive Social Determinants of Health Data From Medical Records in Canada: Large Multijurisdictional Qualitative Study. J Med Internet Res 2025; 27:e52244. [PMID: 40053728 PMCID: PMC11926464 DOI: 10.2196/52244] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 10/31/2024] [Accepted: 11/29/2024] [Indexed: 03/09/2025] Open
Abstract
BACKGROUND Data on the social determinants of health could be used to improve care, support quality improvement initiatives, and track progress toward health equity. However, this data collection is not widespread. Artificial intelligence (AI), specifically natural language processing and machine learning, could be used to derive social determinants of health data from electronic medical records. This could reduce the time and resources required to obtain social determinants of health data. OBJECTIVE This study aimed to understand perspectives of a diverse sample of Canadians on the use of AI to derive social determinants of health information from electronic medical record data, including benefits and concerns. METHODS Using a qualitative description approach, in-depth interviews were conducted with 195 participants purposefully recruited from Ontario, Newfoundland and Labrador, Manitoba, and Saskatchewan. Transcripts were analyzed using an inductive and deductive content analysis. RESULTS A total of 4 themes were identified. First, AI was described as the inevitable future, facilitating more efficient, accessible social determinants of health information and use in primary care. Second, participants expressed concerns about potential health care harms and a distrust in AI and public systems. Third, some participants indicated that AI could lead to a loss of the human touch in health care, emphasizing a preference for strong relationships with providers and individualized care. Fourth, participants described the critical importance of consent and the need for strong safeguards to protect patient data and trust. CONCLUSIONS These findings provide important considerations for the use of AI in health care, and particularly when health care administrators and decision makers seek to derive social determinants of health data.
Affiliation(s)
- Victoria H Davis
- Department of Health Behavior and Health Equity, School of Public Health, University of Michigan-Ann Arbor, Ann Arbor, MI, United States
- Jinfan Rose Qiang
- Upstream Lab, MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada
- Itunuoluwa Adekoya MacCarthy
- Upstream Lab, MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada
- Dana Howse
- Primary Healthcare Research Unit, Memorial University of Newfoundland and Labrador, St. John's, NL, Canada
- Abigail Zita Seshie
- Upstream Lab, MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada
- Leanne Kosowan
- Department of Family Medicine, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, MB, Canada
- Eunice Abaga
- Upstream Lab, MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada
- Jane Cooney
- Upstream Lab, MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada
- Marjeiry Robinson
- Upstream Lab, MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada
- Dorothy Senior
- Upstream Lab, MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada
- Alexander Zsager
- Upstream Lab, MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada
- Kris Aubrey-Bassler
- Primary Healthcare Research Unit, Memorial University of Newfoundland and Labrador, St. John's, NL, Canada
- Mandi Irwin
- Department of Family Medicine, Dalhousie University, Halifax, NS, Canada
- Lois A Jackson
- School of Health and Human Performance, Dalhousie University, Halifax, NS, Canada
- Alan Katz
- Department of Family Medicine, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, MB, Canada
- Nazeem Muhajarine
- Department of Community Health & Epidemiology, College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
- Cory Neudorf
- Department of Community Health & Epidemiology, College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
- Stephanie Garies
- Department of Family Medicine, University of Calgary, Calgary, AB, Canada
- Andrew D Pinto
- Upstream Lab, MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada
- Department of Family and Community Medicine, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Department of Family and Community Medicine, St. Michael's Hospital, Toronto, ON, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
15
Mohsin Khan M, Shah N, Shaikh N, Thabet A, Alrabayah T, Belkhair S. Towards secure and trusted AI in healthcare: A systematic review of emerging innovations and ethical challenges. Int J Med Inform 2025; 195:105780. [PMID: 39753062 DOI: 10.1016/j.ijmedinf.2024.105780] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2024] [Revised: 12/21/2024] [Accepted: 12/27/2024] [Indexed: 02/12/2025]
Abstract
INTRODUCTION Artificial intelligence (AI) is entering a transformative phase in health care, with innovations in diagnostics, personalized treatment, and operational efficiency. Despite this potential, critical challenges remain in the areas of safety, trust, security, and ethical governance. Addressing these challenges is important for promoting the responsible adoption of AI technologies in healthcare systems. METHODS This systematic review of studies published between 2010 and 2023 addressed the applications of AI in healthcare and their implications for safety, transparency, and ethics. A comprehensive search was performed in PubMed, IEEE Xplore, Scopus, and Google Scholar. Studies that met the inclusion criteria provided empirical evidence, theoretical insights, or systematic evaluations addressing trust, security, and ethical considerations. RESULTS The analysis highlighted both innovative technologies and continuing challenges. Explainable AI (XAI) emerged as one of the most significant developments, enabling healthcare professionals to understand AI-driven recommendations and thereby increasing transparency and trust. However, challenges persist, including adversarial attacks, algorithmic bias, and inconsistent regulatory frameworks. According to several studies, more than 60% of healthcare professionals have expressed hesitation about adopting AI systems because of a lack of transparency and fears about data security. Moreover, the 2024 WotNot data breach exposed weaknesses in AI technologies and highlighted the urgent need for robust cybersecurity. DISCUSSION The full potential of AI can be realized only if ethical and technical safeguards are put into practice in healthcare systems. Effective strategies include integrating bias-mitigation methods, strengthening cybersecurity protocols to prevent breaches, and fostering interdisciplinary collaboration to develop transparent regulatory guidelines. These steps are essential for earning trust and ensuring that AI systems are safe, reliable, and fair. CONCLUSION AI offers transformative opportunities to improve healthcare outcomes, but successful implementation will depend on overcoming the challenges of trust, security, and ethics. Future research should focus on testing these technologies in multiple real-world settings, enhancing their scalability, and fine-tuning regulations to facilitate accountability. Only by combining technological innovation with ethical principles and strong governance can AI reshape healthcare while ensuring safety and trustworthiness.
Affiliation(s)
- Noman Shah
- Neurosurgery Department, Abbottabad Medical Complex, Pakistan
- Nissar Shaikh
- Surgical Intensive Care Unit, Hamad General Hospital, Qatar
- Sirajeddin Belkhair
- Neurosurgery Department, Hamad General Hospital, Qatar
- Department of Clinical Academic Sciences, College of Medicine, Qatar University, Doha, Qatar
- Department of Neurological Sciences, Weill Cornell Medicine, Doha, Qatar
16
Johnson ES, Welch EK, Kikuchi J, Barbier H, Vaccaro CM, Balzano F, Dengler KL. Use of ChatGPT to Generate Informed Consent for Surgery in Urogynecology. UROGYNECOLOGY (PHILADELPHIA, PA.) 2025; 31:285-291. [PMID: 39823203 DOI: 10.1097/spv.0000000000001638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/19/2025]
Abstract
IMPORTANCE Use of the publicly available Large Language Model, Chat Generative Pre-trained Transformer (ChatGPT 3.5; OpenAI, 2022), is growing in health care despite varying accuracies. OBJECTIVE The aim of this study was to assess the accuracy and readability of ChatGPT's responses to questions encompassing surgical informed consent in urogynecology. STUDY DESIGN Five fellowship-trained urogynecology attending physicians and 1 reconstructive female urologist evaluated ChatGPT's responses to questions about 4 surgical procedures: (1) retropubic midurethral sling, (2) total vaginal hysterectomy, (3) uterosacral ligament suspension, and (4) sacrocolpopexy. Questions involved procedure descriptions, risks/benefits/alternatives, and additional resources. Responses were rated using the DISCERN tool, a 4-point accuracy scale, and the Flesch-Kincaid Grade Level score. RESULTS The median DISCERN tool overall rating was 3 (interquartile range [IQR], 3-4), indicating a moderate rating ("potentially important but not serious shortcomings"). Retropubic midurethral sling received the highest overall score (median, 4; IQR, 3-4), and uterosacral ligament suspension received the lowest (median, 3; IQR, 3-3). Using the 4-point accuracy scale, 44.0% of responses received a score of 4 ("correct and adequate"), 22.6% received a score of 3 ("correct but insufficient"), 29.8% received a score of 2 ("accurate and misleading information together"), and 3.6% received a score of 1 ("wrong or irrelevant answer"). ChatGPT performance was poor for discussion of benefits and alternatives for all surgical procedures, with some responses being inaccurate. The mean Flesch-Kincaid Grade Level score for all responses was 17.5 (SD, 2.1), corresponding to a postgraduate reading level. CONCLUSIONS Overall, ChatGPT generated accurate responses to questions about surgical informed consent. However, it produced clearly false portions of responses, highlighting the need for careful review of responses by qualified health care professionals.
Affiliation(s)
- Emily S Johnson
- Division of Urogynecology, Walter Reed National Military Medical Center, Bethesda, MD
- Eva K Welch
- Division of Urogynecology, Walter Reed National Military Medical Center, Bethesda, MD
- Jacqueline Kikuchi
- Division of Urogynecology, AT Augusta Military Medical Center, Fort Belvoir, VA
- Heather Barbier
- Division of Urogynecology, Walter Reed National Military Medical Center, Bethesda, MD
- Christine M Vaccaro
- Division of Urogynecology, Walter Reed National Military Medical Center, Bethesda, MD
- Felicia Balzano
- Department of Urology, Walter Reed National Military Medical Center, Bethesda, MD
- Katherine L Dengler
- Division of Urogynecology, Walter Reed National Military Medical Center, Bethesda, MD
17
Hurley ME, Lang BH, Kostick-Quenet KM, Smith JN, Blumenthal-Barby J. Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care. THE AMERICAN JOURNAL OF BIOETHICS : AJOB 2025; 25:102-114. [PMID: 39288291 DOI: 10.1080/15265161.2024.2399828] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/19/2024]
Abstract
Given the need for enforceable guardrails for artificial intelligence (AI) that protect the public and allow for innovation, the U.S. Government recently issued a Blueprint for an AI Bill of Rights which outlines five principles of safe AI design, use, and implementation. One in particular, the right to notice and explanation, requires accurately informing the public about the use of AI that impacts them in ways that are easy to understand. Yet, in the healthcare setting, it is unclear what goal the right to notice and explanation serves and what moral importance patient-level disclosure carries. We propose three normative functions of this right: (1) to notify patients about their care, (2) to educate patients and promote trust, and (3) to meet standards for informed consent. Additional clarity is needed to guide practices that respect the right to notice and explanation of AI in healthcare while providing meaningful benefits to patients.
18
Stroud AM, Anzabi MD, Wise JL, Barry BA, Malik MM, McGowan ML, Sharp RR. Toward Safe and Ethical Implementation of Health Care Artificial Intelligence: Insights From an Academic Medical Center. MAYO CLINIC PROCEEDINGS. DIGITAL HEALTH 2025; 3:100189. [PMID: 40206995 PMCID: PMC11975832 DOI: 10.1016/j.mcpdig.2024.100189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 04/11/2025]
Abstract
Claims abound that advances in artificial intelligence (AI) will permeate virtually every aspect of medicine and transform clinical practice. Simultaneously, concerns about the safety and equity of health care AI have prompted ethical and regulatory scrutiny from multiple oversight bodies. Positioned at the intersection of these perspectives, academic medical centers (AMCs) are charged with navigating the safe and responsible implementation of health care AI. Decisions about the use of AI at AMCs are complicated by uncertainties regarding the risks posed by these technologies and a lack of consensus on best practices for managing these risks. In this article, we highlight several potential harms that may arise in the adoption of health care AI, with a focus on risks to patients, clinicians, and medical practice. In addition, we describe several strategies that AMCs might adopt now to address concerns about the safety and ethical uses of health care AI. Our analysis aims to support AMCs as they seek to balance AI innovation with proactive oversight.
Affiliation(s)
- Journey L. Wise
- Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN
- Barbara A. Barry
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN
19
Chua MT, Boon Y, Lee ZY, Kok JHJ, Lim CKW, Cheung NMT, Yong LPX, Kuan WS. The role of artificial intelligence in sepsis in the Emergency Department: a narrative review. ANNALS OF TRANSLATIONAL MEDICINE 2025; 13:4. [PMID: 40115064 PMCID: PMC11921180 DOI: 10.21037/atm-24-150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 08/10/2024] [Accepted: 12/16/2024] [Indexed: 03/23/2025]
Abstract
Background and Objective Early recognition and treatment of sepsis in the emergency department (ED) is important. Traditional predictive analytics and clinical decision rules lack accuracy in identifying patients with sepsis. Artificial intelligence (AI) is increasingly prevalent in healthcare and offers application potential in the care of patients with sepsis. This review examines the evidence for AI in diagnosing, managing, and prognosticating sepsis in the ED. Methods We performed a literature search in the PubMed, Embase, Google Scholar, and Scopus databases for studies published between 1 January 2010 and 30 June 2024 that evaluated the use of AI in adult patients with sepsis in the ED, using the following search terms: ("artificial intelligence" OR "machine learning" OR "neural networks, computer" OR "deep learning" OR "natural language processing"), AND ("sepsis" OR "septic shock", AND "emergency services" OR "emergency department"). Independent searches were conducted in duplicate, with discrepancies adjudicated by a third member. Key Content and Findings Compared with traditional models, AI could incorporate multiple variables such as vital signs, free-text input, laboratory tests, and the electrocardiogram, leading to improved diagnostic performance. Machine learning (ML) models outperformed traditional scoring tools in both diagnosis and prognosis of sepsis. ML models were able to analyze trends over time and showed utility in predicting mortality, severe sepsis, and septic shock. Additionally, real-time ML-assisted alert systems are effective in improving time-to-antibiotic administration, and ML algorithms can differentiate patients with sepsis into distinct phenotypes to tailor management (especially fluid therapy and critical care interventions), potentially improving outcomes. Existing AI tools for sepsis currently lack generalizability and user acceptance, and there is a risk of automation bias, with loss of clinicians' skills, if over-reliance develops. Conclusions Overall, AI holds great promise as a clinical support tool for revolutionizing the management of patients with sepsis in the ED. However, its application is still constrained by inherent limitations. Balanced integration of AI technology with clinician input is essential to harness its full potential and ensure optimal patient outcomes.
Affiliation(s)
- Mui Teng Chua
  - Emergency Medicine Department, National University Hospital, National University Health System, Singapore, Singapore
  - Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Yuru Boon
  - Emergency Medicine Department, National University Hospital, National University Health System, Singapore, Singapore
  - Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Zi Yao Lee
  - Emergency Medicine Department, National University Hospital, National University Health System, Singapore, Singapore
  - Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Jian Hao Jaryl Kok
  - Emergency Medicine Department, National University Hospital, National University Health System, Singapore, Singapore
  - Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Clement Kee Woon Lim
  - Emergency Medicine Department, National University Hospital, National University Health System, Singapore, Singapore
- Nicole Mun Teng Cheung
  - Emergency Medicine Department, National University Hospital, National University Health System, Singapore, Singapore
  - Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Lorraine Pei Xian Yong
  - Emergency Medicine Department, National University Hospital, National University Health System, Singapore, Singapore
  - Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Win Sen Kuan
  - Emergency Medicine Department, National University Hospital, National University Health System, Singapore, Singapore
|
20
|
Kuppanda PM, Janda M, Soyer HP, Caffery LJ. What Are Patients' Perceptions and Attitudes Regarding the Use of Artificial Intelligence in Skin Cancer Screening and Diagnosis? Narrative Review. J Invest Dermatol 2025:S0022-202X(25)00080-6. [PMID: 40019459 DOI: 10.1016/j.jid.2025.01.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2024] [Revised: 01/09/2025] [Accepted: 01/15/2025] [Indexed: 03/01/2025]
Abstract
Artificial intelligence (AI) could enable early diagnosis of skin cancer; however, how AI should be implemented in clinical practice is debated. This narrative literature review (16 studies; 2012-2024) explored patient perceptions of AI in skin cancer screening and diagnosis. Patients were generally positive and perceived AI to increase diagnostic speed and accuracy. Patients preferred AI to augment a dermatologist's diagnosis rather than replace it. Patients were concerned that AI could lead to privacy breaches and clinician deskilling and could threaten the doctor-patient relationship. The findings also highlight the complex ways in which demographic, quality, and functional attributes shape patients' attitudes toward AI.
Affiliation(s)
- Preksha Machaiya Kuppanda
  - Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Australia
- Monika Janda
  - Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Australia
- H Peter Soyer
  - Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Australia
- Liam J Caffery
  - Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Centre for Online Health, The University of Queensland, Brisbane, Australia
|
21
|
McLean KA, Sgrò A, Brown LR, Buijs LF, Mountain KE, Shaw CA, Drake TM, Pius R, Knight SR, Fairfield CJ, Skipworth RJE, Tsaftaris SA, Wigmore SJ, Potter MA, Bouamrane MM, Harrison EM. Multimodal machine learning to predict surgical site infection with healthcare workload impact assessment. NPJ Digit Med 2025; 8:121. [PMID: 39988586 PMCID: PMC11847912 DOI: 10.1038/s41746-024-01419-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2024] [Accepted: 12/21/2024] [Indexed: 02/25/2025] Open
Abstract
Remote monitoring is essential for healthcare digital transformation; however, it places a growing burden on healthcare providers, who must review and respond as the volume of collected data expands. This study developed a multimodal neural network to automate assessment of patient-generated data from remote postoperative wound monitoring. Two interventional studies including adult gastrointestinal surgery patients collected wound images and patient-reported outcome measures (PROMs) for 30 days postoperatively. Neural networks for PROMs and images were combined to predict surgical site infection (SSI) diagnosis within 48 h. The multimodal model's performance in predicting confirmed SSI within 48 h remained comparable to clinician triage (0.762 [0.690-0.835] vs 0.777 [0.721-0.832]), with excellent performance on external validation. Simulated usage indicated an 80% reduction in staff time (51.5 to 9.1 h) without compromising diagnostic accuracy. This multimodal approach can effectively support remote monitoring, alleviating provider burden while ensuring high-quality postoperative care.
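The fusion step this abstract describes, combining a PROMs branch with an image branch, can be sketched as a simple late fusion of per-modality risk scores. This is an illustrative reconstruction only, not the authors' architecture: the feature vectors, weights, and function names (`proms_score`, `image_score`, `fused_ssi_risk`) are invented for the sketch.

```python
import math

def sigmoid(z: float) -> float:
    """Map a raw score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def proms_score(features, weights, bias):
    """Risk score from patient-reported outcome measures (PROMs)."""
    return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

def image_score(pixel_stats, weights, bias):
    """Stand-in for an image branch: scores summary statistics of a wound photo."""
    return sigmoid(sum(w * x for w, x in zip(weights, pixel_stats)) + bias)

def fused_ssi_risk(proms, pixels, threshold=0.5):
    """Late fusion: average the two modality probabilities, then threshold."""
    p1 = proms_score(proms, [1.2, 0.8, 1.5], -2.0)  # illustrative weights
    p2 = image_score(pixels, [0.9, 1.1], -1.0)      # illustrative weights
    p = (p1 + p2) / 2.0
    return p, p >= threshold

prob, flag = fused_ssi_risk([1.0, 1.0, 1.0], [0.5, 0.5])
```

Averaging the two probabilities is the simplest fusion rule; the published model instead learns the combination jointly inside a neural network.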
Affiliation(s)
- Kenneth A McLean
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
  - Centre for Medical Informatics, Usher Institute, University of Edinburgh, 9 Little France Rd, Edinburgh, EH16 4UX, UK
- Alessandro Sgrò
  - Colorectal Unit, Western General Hospital, Edinburgh, EH4 2XU, UK
- Leo R Brown
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
- Louis F Buijs
  - Colorectal Unit, Western General Hospital, Edinburgh, EH4 2XU, UK
- Katie E Mountain
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
- Catherine A Shaw
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
  - Centre for Medical Informatics, Usher Institute, University of Edinburgh, 9 Little France Rd, Edinburgh, EH16 4UX, UK
- Thomas M Drake
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
  - Centre for Medical Informatics, Usher Institute, University of Edinburgh, 9 Little France Rd, Edinburgh, EH16 4UX, UK
- Riinu Pius
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
  - Centre for Medical Informatics, Usher Institute, University of Edinburgh, 9 Little France Rd, Edinburgh, EH16 4UX, UK
- Stephen R Knight
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
  - Centre for Medical Informatics, Usher Institute, University of Edinburgh, 9 Little France Rd, Edinburgh, EH16 4UX, UK
- Cameron J Fairfield
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
  - Centre for Medical Informatics, Usher Institute, University of Edinburgh, 9 Little France Rd, Edinburgh, EH16 4UX, UK
- Richard J E Skipworth
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
- Sotirios A Tsaftaris
  - AI Hub for Causality in Healthcare AI with Real Data, University of Edinburgh, Edinburgh, EH9 3FG, UK
- Stephen J Wigmore
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
- Mark A Potter
  - Colorectal Unit, Western General Hospital, Edinburgh, EH4 2XU, UK
- Matt-Mouley Bouamrane
  - Centre for Medical Informatics, Usher Institute, University of Edinburgh, 9 Little France Rd, Edinburgh, EH16 4UX, UK
- Ewen M Harrison
  - Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
  - Centre for Medical Informatics, Usher Institute, University of Edinburgh, 9 Little France Rd, Edinburgh, EH16 4UX, UK
|
22
|
Chatterjee A, Riegler MA, Ganesh K, Halvorsen P. Stress management with HRV following AI, semantic ontology, genetic algorithm and tree explainer. Sci Rep 2025; 15:5755. [PMID: 39962099 PMCID: PMC11833117 DOI: 10.1038/s41598-025-87510-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Accepted: 01/20/2025] [Indexed: 02/20/2025] Open
Abstract
Heart Rate Variability (HRV) serves as a vital marker of stress levels, with lower HRV indicating higher stress. It measures the variation in the time between heartbeats and offers insights into health. Artificial intelligence (AI) research aims to use HRV data for accurate stress-level classification, aiding early detection and well-being interventions. The objective of this study is to create a semantic model of HRV features in a knowledge graph and to develop an accurate, reliable, explainable, and ethical AI model for predictive HRV analysis. The SWELL-KW dataset, containing labeled HRV data for stress conditions, is examined. Various techniques such as feature selection and dimensionality reduction are explored to improve classification accuracy while minimizing bias. Different machine learning (ML) algorithms, including traditional and ensemble methods, are employed for analyzing both imbalanced and balanced HRV datasets. To address class imbalance, various data formats and oversampling techniques such as SMOTE and ADASYN are evaluated. Additionally, a tree explainer, specifically SHAP, is used to interpret and explain the models' classifications. The combination of genetic algorithm-based feature selection and classification with a Random Forest Classifier yields effective results for both imbalanced and balanced datasets, especially in analyzing non-linear HRV features. These optimized features play a crucial role in developing a stress management system within a semantic framework. Introducing a domain ontology enhances data representation and knowledge acquisition. The consistency and reliability of the ontology model are assessed using the HermiT reasoner, with reasoning time as a performance measure. HRV serves as a significant indicator of stress, offering insights into its correlation with mental well-being. While HRV measurement is non-invasive, its interpretation must integrate other stress assessments for a holistic understanding of an individual's stress response. Monitoring HRV can help evaluate stress management strategies and interventions, aiding individuals in maintaining well-being.
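The SMOTE-style oversampling mentioned above can be illustrated with a minimal, pure-Python sketch: synthetic minority-class samples are created by interpolating between pairs of existing minority points. Real analyses would use a library such as imbalanced-learn; here the pairing is random rather than nearest-neighbour, and the toy "stress" feature vectors are invented.

```python
import random

def smote_like_oversample(minority, n_new, seed=0):
    """Synthesize n_new points by interpolating between random minority pairs,
    mimicking how SMOTE interpolates toward a nearest neighbour."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)  # pick two distinct minority points
        lam = rng.random()              # interpolation factor in [0, 1)
        synthetic.append(tuple(ai + lam * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

# Imbalanced toy HRV-like feature vectors: 2 "stress" samples vs 6 "no stress".
stress = [(0.9, 0.1), (0.8, 0.2)]
new_points = smote_like_oversample(stress, n_new=4)
balanced_stress = stress + new_points  # minority class grown to match majority
```

Because every synthetic point lies on a segment between real minority points, the technique adds plausible variation rather than duplicating samples, which is why it tends to help classifiers such as Random Forests on imbalanced data.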
Affiliation(s)
- Ayan Chatterjee
  - Oslo Metropolitan University (Oslomet), Oslo, Norway
  - STIFTELSEN NILU, Kjeller, Norway
  - Simula Metropolitan Center for Digital Engineering (SimulaMet), Oslo, Norway
- Michael A Riegler
  - Oslo Metropolitan University (Oslomet), Oslo, Norway
  - Simula Metropolitan Center for Digital Engineering (SimulaMet), Oslo, Norway
- K Ganesh
  - School of Mathematics, Indian Institute of Science Education and Research (IISER), Thiruvananthapuram, India
- Pål Halvorsen
  - Oslo Metropolitan University (Oslomet), Oslo, Norway
  - Simula Metropolitan Center for Digital Engineering (SimulaMet), Oslo, Norway
|
23
|
Omar M, Levkovich I. Exploring the efficacy and potential of large language models for depression: A systematic review. J Affect Disord 2025; 371:234-244. [PMID: 39581383 DOI: 10.1016/j.jad.2024.11.052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/05/2024] [Revised: 10/21/2024] [Accepted: 11/15/2024] [Indexed: 11/26/2024]
Abstract
BACKGROUND AND OBJECTIVE Depression is a substantial public health issue with global ramifications. While initial literature reviews explored the intersection of artificial intelligence (AI) and mental health, they have not yet critically assessed the specific contributions of Large Language Models (LLMs) in this domain. The objective of this systematic review was to examine the usefulness of LLMs in diagnosing and managing depression, as well as to investigate their incorporation into clinical practice. METHODS This review was based on a thorough search of the PubMed, Embase, Web of Science, and Scopus databases for the period January 2018 through March 2024. The review was registered in PROSPERO and adhered to PRISMA guidelines. Original research articles, preprints, and conference papers were included, while non-English and non-research publications were excluded. Data extraction was standardized, and the risk of bias was evaluated using the ROBINS-I, QUADAS-2, and PROBAST tools. RESULTS Our review included 34 studies that focused on the application of LLMs in detecting and classifying depression through clinical data and social media texts. LLMs such as RoBERTa and BERT demonstrated high effectiveness, particularly in early detection and symptom classification. Nevertheless, the integration of LLMs into clinical practice is in its nascent stage, with ongoing concerns about data privacy and ethical implications. CONCLUSION LLMs exhibit significant potential for transforming strategies for diagnosing and treating depression. Nonetheless, full integration of LLMs into clinical practice requires rigorous testing, ethical considerations, and enhanced privacy measures to ensure their safe and effective use.
Affiliation(s)
- Mahmud Omar
  - Tel-Aviv University, Faculty of Medicine, Israel
|
24
|
Werder K, Cao L, Park EH, Ramesh B. Why AI Monitoring Faces Resistance and What Healthcare Organizations Can Do About It: An Emotion-Based Perspective. J Med Internet Res 2025; 27:e51785. [PMID: 39889282 PMCID: PMC11829173 DOI: 10.2196/51785] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 08/20/2024] [Accepted: 10/07/2024] [Indexed: 02/02/2025] Open
Abstract
Continuous monitoring of patients' health facilitated by artificial intelligence (AI) has enhanced the quality of health care, that is, the ability to access effective care. However, AI monitoring often encounters resistance to adoption by decision makers. Healthcare organizations frequently assume that the resistance stems from patients' rational evaluation of the technology's costs and benefits. Recent research challenges this assumption and suggests that the resistance to AI monitoring is influenced by the emotional experiences of patients and their surrogate decision makers. We develop a framework from an emotional perspective, provide important implications for healthcare organizations, and offer recommendations to help reduce resistance to AI monitoring.
Affiliation(s)
- Karl Werder
  - Digital Business Innovation, IT University of Copenhagen, Copenhagen, Denmark
- Lan Cao
  - Information Technology & Decision Sciences, Strome College of Business, Old Dominion University, Norfolk, VA, United States
- Eun Hee Park
  - Information Technology & Decision Sciences, Strome College of Business, Old Dominion University, Norfolk, VA, United States
- Balasubramaniam Ramesh
  - Computer Information Systems, J. Mack Robinson College of Business, Georgia State University, Atlanta, GA, United States
|
25
|
Smoła P, Młoźniak I, Wojcieszko M, Zwierczyk U, Kobryn M, Rzepecka E, Duplaga M. Attitudes toward artificial intelligence and robots in healthcare in the general population: a qualitative study. Front Digit Health 2025; 7:1458685. [PMID: 39931116 PMCID: PMC11808042 DOI: 10.3389/fdgth.2025.1458685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2024] [Accepted: 01/06/2025] [Indexed: 02/13/2025] Open
Abstract
Background The growing use of artificial intelligence (AI) and robotic solutions in healthcare is accompanied by high expectations of improved efficiency and quality of services. However, such technologies can be a source of anxiety for patients, whose expectations of and experiences with them differ from those of medical staff. This study assessed attitudes toward AI and robots in delivering health services and performing various tasks in medicine and related fields in Polish society. Methods Fifty semistructured in-depth interviews were conducted with participants of diversified socio-demographic profiles. Interviewees were initially recruited as a convenience sample; recruitment then continued using the snowballing technique. The interviews were transcribed and analyzed using the MAXQDA Analytics Pro 2022 program (release 22.7.0). An interpretative approach to qualitative content analysis was applied to the responses to the research questions. Results The analysis of the interviews yielded three main themes: positive perceptions of the use of AI and robots in healthcare, negative perceptions, and ontological concerns about AI that went beyond objections to the technology's usefulness. Positive attitudes toward AI and robots were associated with overall higher trust in technology, the need to respond adequately to demographic challenges, and the conviction that AI and robots can lower the workload of medical personnel. Negative attitudes originated from convictions regarding unreliability and the lack of proper technological and political control over AI; an equally important topic was the inability of artificial entities to feel and express emotions. Under the third theme, potential interaction with machines endowed with human-like traits emerged as a source of insecurity.
Conclusions The study showed that patients' attitudes toward AI and robots in healthcare vary according to their trust in technology, their recognition of urgent problems in healthcare (staff workload, time to diagnosis), and their beliefs regarding the reliability and functioning of new technologies. Emotional concerns about contact with artificial entities that look or perform like humans are also important to respondents' attitudes.
Affiliation(s)
- Paulina Smoła
  - Department of Health Promotion and e-Health, Faculty of Health Sciences, Institute of Public Health, Jagiellonian University Medical College, Krakow, Poland
- Iwona Młoźniak
  - Department of Health Promotion and e-Health, Faculty of Health Sciences, Institute of Public Health, Jagiellonian University Medical College, Krakow, Poland
- Monika Wojcieszko
  - Department of Health Promotion and e-Health, Faculty of Health Sciences, Institute of Public Health, Jagiellonian University Medical College, Krakow, Poland
- Urszula Zwierczyk
  - Department of Health Promotion and e-Health, Faculty of Health Sciences, Institute of Public Health, Jagiellonian University Medical College, Krakow, Poland
- Mateusz Kobryn
  - Department of Health Promotion and e-Health, Faculty of Health Sciences, Institute of Public Health, Jagiellonian University Medical College, Krakow, Poland
- Elżbieta Rzepecka
  - Department of Epidemiology and Population Studies, Faculty of Health Sciences, Institute of Public Health, Jagiellonian University Medical College, Krakow, Poland
- Mariusz Duplaga
  - Department of Health Promotion and e-Health, Faculty of Health Sciences, Institute of Public Health, Jagiellonian University Medical College, Krakow, Poland
|
26
|
Singh A, Schooley B, Mobley J, Mobley P, Lindros S, Brooks JM, Floyd SB. Human-centered design of a health recommender system for orthopaedic shoulder treatment. BMC Med Inform Decis Mak 2025; 25:17. [PMID: 39794787 PMCID: PMC11720343 DOI: 10.1186/s12911-025-02850-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2024] [Accepted: 01/02/2025] [Indexed: 01/13/2025] Open
Abstract
BACKGROUND Rich data on diverse patients and their treatments and outcomes within Electronic Health Record (EHR) systems can be used to generate real-world evidence. A health recommender system (HRS) framework can be applied to a decision support system to generate data summaries for similar patients during the clinical encounter, assisting physicians and patients in making evidence-based shared treatment decisions. OBJECTIVE A human-centered design (HCD) process was used to develop an HRS for treatment decision support in orthopaedic medicine, the Informatics Consult for Individualized Treatment (I-C-IT). We also evaluate the usability and utility of the system from the physician's perspective, focusing on elements of utility and shared decision-making in orthopaedic medicine. METHODS The HCD process for I-C-IT included six steps across three phases: analysis, design, and evaluation. A team of health informatics and comparative effectiveness researchers directly engaged orthopaedic surgeon subject matter experts in a collaborative I-C-IT prototype design process. Ten orthopaedic surgeons participated in a mixed-methods evaluation of the resulting I-C-IT prototype. RESULTS The HCD process produced a prototype system, I-C-IT, with 14 data visualization elements and a set of design principles crucial for HRSs used in decision support. The overall System Usability Scale (SUS) score for the I-C-IT web app prototype was 88.75, indicating high usability. In addition, on utility questions addressing shared decision-making, 90% of orthopaedic surgeon respondents either strongly agreed or agreed that I-C-IT would help them make data-informed decisions with their patients. CONCLUSION The HCD process produced an HRS prototype that is capable of supporting orthopaedic surgeons' and patients' information needs during clinical encounters. Future research should focus on refining I-C-IT by incorporating patient feedback in future iterative cycles of system design and evaluation.
Affiliation(s)
- Akanksha Singh
  - Department of Health Services Administration, School of Health Professions, University of Alabama at Birmingham, Birmingham, AL, USA
  - Center for Effectiveness Research in Orthopaedics, Greenville, SC, USA
- Benjamin Schooley
  - Department of Electrical and Computer Engineering, Ira A. Fulton College of Engineering, Brigham Young University, Provo, UT, USA
  - Center for Effectiveness Research in Orthopaedics, Greenville, SC, USA
- John Mobley
  - University of South Carolina School of Medicine Greenville, Greenville, SC, USA
  - Center for Effectiveness Research in Orthopaedics, Greenville, SC, USA
- Patrick Mobley
  - Department of Exercise Science, University of South Carolina, Arnold School of Public Health, Columbia, SC, USA
  - Center for Effectiveness Research in Orthopaedics, Greenville, SC, USA
- Sydney Lindros
  - Department of Public Health Sciences, Clemson University, Clemson, SC, USA
- John M Brooks
  - Department of Health Services Policy and Management, University of South Carolina, Arnold School of Public Health, Columbia, SC, USA
  - Center for Effectiveness Research in Orthopaedics, Greenville, SC, USA
- Sarah B Floyd
  - Department of Public Health Sciences, Clemson University, Clemson, SC, USA
  - Center for Effectiveness Research in Orthopaedics, Greenville, SC, USA
|
27
|
Schrimpff C, Link E, Fisse T, Baumann E, Klimmt C. Mental Models of Smart Implant Technology: A Topic Modeling Approach to the Role of Initial Information and Labeling. HEALTH COMMUNICATION 2025:1-13. [PMID: 39780460 DOI: 10.1080/10410236.2024.2447548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2025]
Abstract
Public understanding of medical innovations such as smart technology is decisive for their acceptance and implementation. It is therefore important to understand what visions people develop of a technology from initial information such as its label. We chose smart implants as an example and conducted qualitative interviews with 47 former implant patients to record their mental models after exposing them to the idea of smart implants through a vignette. Their answers were analyzed using LDA topic modeling. We derived five topics describing people's mental models, covering the technology's functionalities, (dis)advantages, and potential benefits and risks. The topics revealed that our respondents often associated the idea of smart implants with artificial intelligence, which is a misconception relative to the conceptualization introduced. Special attention therefore has to be paid to the technology's labeling in communication efforts to ensure adequate understanding.
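For readers unfamiliar with the method, LDA topic modeling assigns each word occurrence to a latent topic and iteratively re-samples those assignments from document-topic and topic-word counts. A toy collapsed Gibbs sampler conveys the idea; production analyses would use a library such as gensim, and the four miniature "interview" documents below are invented for illustration.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k, iters=200, alpha=0.1, beta=0.01, seed=1):
    """Toy collapsed Gibbs sampler for LDA; returns per-document topic counts."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})               # vocabulary size
    z = [[rng.randrange(k) for _ in d] for d in docs]   # topic of each word token
    ndk = [[0] * k for _ in docs]                       # doc-topic counts
    nkw = [defaultdict(int) for _ in range(k)]          # topic-word counts
    nk = [0] * k                                        # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                             # remove current assignment
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                weights = [(ndk[d][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                           for j in range(k)]           # full conditional (unnormalized)
                r = rng.random() * sum(weights)
                t, acc = 0, weights[0]
                while r > acc and t + 1 < k:            # draw a new topic
                    t += 1; acc += weights[t]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return ndk

docs = [["battery", "sensor", "data"], ["risk", "privacy", "data"],
        ["battery", "sensor", "charge"], ["privacy", "risk", "consent"]]
doc_topics = lda_gibbs(docs, k=2)
```

Each row of `doc_topics` counts how many word tokens in that document were assigned to each topic; on real transcripts the dominant topics per document are then inspected via their highest-probability words.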
Affiliation(s)
- Charlotte Schrimpff
  - Department of Journalism and Communication Research, Hanover University of Music, Drama and Media
- Elena Link
  - Department of Communication, Johannes Gutenberg University Mainz
- Tanja Fisse
  - Department of Journalism and Communication Research, Hanover University of Music, Drama and Media
- Eva Baumann
  - Department of Journalism and Communication Research, Hanover University of Music, Drama and Media
- Christoph Klimmt
  - Department of Journalism and Communication Research, Hanover University of Music, Drama and Media
|
28
|
Shin D, Park H, Shaffrey I, Yacoubian V, Taka TM, Dye J, Danisa O. Artificial intelligence versus clinical judgement: how accurately do generative models reflect CNS guidelines for chiari malformation? Clin Neurol Neurosurg 2025; 248:108662. [PMID: 39612523 DOI: 10.1016/j.clineuro.2024.108662] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2024] [Revised: 11/21/2024] [Accepted: 11/23/2024] [Indexed: 12/01/2024]
Abstract
OBJECTIVE This study investigated the responses and readability of generative artificial intelligence (AI) models to questions and recommendations drawn from the 2023 Congress of Neurological Surgeons (CNS) guidelines for Chiari 1 malformation. METHODS Thirteen questions were generated from the CNS guidelines and posed to Perplexity, ChatGPT 4o, Microsoft Copilot, and Google Gemini. AI answers were divided into two categories, "concordant" and "non-concordant," according to their alignment with current CNS guidelines. Non-concordant answers were sub-categorized as "insufficient" or "over-conclusive." Responses were evaluated for readability via the Flesch-Kincaid Grade Level, Gunning Fog Index, SMOG (Simple Measure of Gobbledygook) Index, and Flesch Reading Ease test. RESULTS Perplexity displayed the highest concordance rate at 69.2%, with its non-concordant responses classified as 0% insufficient and 30.8% over-conclusive. ChatGPT 4o had the lowest concordance rate at 23.1%, with 0% insufficient and 76.9% over-conclusive classifications. Copilot showed a 61.5% concordance rate, with 7.7% insufficient and 30.8% over-conclusive. Gemini demonstrated a 30.8% concordance rate, with 7.7% insufficient and 61.5% over-conclusive. Flesch-Kincaid Grade Level scores ranged from 14.48 (Gemini) to 16.48 (Copilot), Gunning Fog Index scores varied between 16.18 (Gemini) and 18.8 (Copilot), SMOG Index scores ranged from 16 (Gemini) to 17.54 (Copilot), and Flesch Reading Ease scores were low across all models, with Gemini showing the highest mean score of 21.3. CONCLUSION Perplexity and Copilot were the best-performing models for concordance, while ChatGPT and Gemini displayed the highest over-conclusive rates. All responses exhibited high complexity and poor readability. While AI can be valuable in certain aspects of clinical practice, the low concordance rates show that AI should not replace clinician judgement.
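The readability indices used in this study follow standard published formulas (Flesch-Kincaid Grade Level, Flesch Reading Ease, Gunning Fog, SMOG). A rough sketch of how they are computed, using an approximate vowel-group syllable counter rather than a pronunciation dictionary, might look like this (not the authors' code):

```python
import math
import re

def syllables(word: str) -> int:
    """Rough syllable estimate: count vowel groups, drop a silent trailing 'e'."""
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and groups > 1 and not word.endswith(("le", "ee")):
        groups -= 1
    return max(1, groups)

def readability(text: str) -> dict:
    """Compute four standard readability indices for a text sample."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = len(words)
    n_syll = sum(syllables(w) for w in words)
    poly = sum(1 for w in words if syllables(w) >= 3)  # "complex" words
    wps, spw = n_words / sentences, n_syll / n_words
    return {
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "gunning_fog": 0.4 * (wps + 100 * poly / n_words),
        "smog": 1.0430 * math.sqrt(poly * 30 / sentences) + 3.1291,
    }

scores = readability("The cat sat on the mat. It was a sunny day.")
```

Because the syllable heuristic is approximate, scores can differ slightly from tools that use dictionary syllabification, but the orderings reported in the abstract (e.g., Gemini scoring easier than Copilot) depend only on consistent application of the same formulas.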
Affiliation(s)
- David Shin
  - School of Medicine, Loma Linda University, Loma Linda, CA, USA
- Hyunah Park
  - School of Medicine, Loma Linda University, Loma Linda, CA, USA
- Vahe Yacoubian
  - Department of Orthopaedic Surgery, Loma Linda University Medical Center, Loma Linda, CA, USA
- Taha M Taka
  - Department of Orthopaedic Surgery, Loma Linda University Medical Center, Loma Linda, CA, USA
- Justin Dye
  - Department of Neurological Surgery, Loma Linda University Medical Center, Loma Linda, CA, USA
- Olumide Danisa
  - Departments of Orthopaedic Surgery and Neurological Surgery, Duke University Health System, Durham, NC, USA
|
29
|
Mwogosi A. Ethical and privacy challenges of integrating generative AI into EHR systems in Tanzania: A scoping review with a policy perspective. Digit Health 2025; 11:20552076251344385. [PMID: 40400763 PMCID: PMC12093014 DOI: 10.1177/20552076251344385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2025] [Accepted: 05/07/2025] [Indexed: 05/23/2025] Open
Abstract
Objectives This study examines the ethical and privacy challenges of integrating generative artificial intelligence (AI) into electronic health record (EHR) systems, focusing on Tanzania's healthcare context. It critically analyses the extent to which Tanzania's Policy Framework for Artificial Intelligence in the Health Sector (2022) addresses these challenges and proposes regulatory and practical safeguards for responsible generative AI deployment. Methods A systematic scoping review was conducted using PubMed, IEEE Xplore, Scopus and Google Scholar to identify relevant studies published between 2014 and 2024. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines informed the search and selection process. Fourteen studies met the inclusion criteria and were thematically analysed to identify key ethical and privacy concerns of generative AI in healthcare. Moreover, a policy analysis of Tanzania's AI framework was conducted to assess its alignment with global best practices and regulatory preparedness. Results The review identified six key ethical and privacy challenges associated with generative AI in EHR systems: data privacy and security risks, algorithmic bias and fairness concerns, transparency and accountability issues, consent and autonomy challenges, human oversight gaps and risks of data re-identification. The policy analysis revealed that while Tanzania's AI framework aligns with national health priorities and promotes capacity building and ethical governance, it lacks generative AI-specific guidelines, regulatory clarity and resource mobilisation strategies necessary for healthcare settings. Conclusion Integrating generative AI into Tanzania's EHR systems presents transformative opportunities and significant ethical and privacy risks. 
Tanzania's policy framework should incorporate AI-specific ethical guidelines, operationalise regulatory mechanisms, foster stakeholder engagement through participatory co-design and strengthen infrastructural investments. These measures will promote ethical integrity, enhance patient trust and position Tanzania as a regional leader in responsible AI use in healthcare.
Affiliation(s)
- Augustino Mwogosi
- Department of Information Systems and Technology, University of Dodoma, Dodoma, Tanzania
30
Pagano S, Strumolo L, Michalk K, Schiegl J, Pulido LC, Reinhard J, Maderbacher G, Renkawitz T, Schuster M. Evaluating ChatGPT, Gemini and other Large Language Models (LLMs) in orthopaedic diagnostics: A prospective clinical study. Comput Struct Biotechnol J 2024; 28:9-15. [PMID: 39850460 PMCID: PMC11754967 DOI: 10.1016/j.csbj.2024.12.013] [Received: 11/01/2024] [Revised: 12/14/2024] [Accepted: 12/18/2024] [Indexed: 01/25/2025]
Abstract
Background Large Language Models (LLMs) such as ChatGPT are gaining attention for their potential applications in healthcare. This study aimed to evaluate the diagnostic sensitivity of various LLMs in detecting hip or knee osteoarthritis (OA) using only patient-reported data collected via a structured questionnaire, without prior medical consultation. Methods A prospective observational study was conducted at an orthopaedic outpatient clinic specialised in hip and knee OA treatment. A total of 115 patients completed a paper-based questionnaire covering symptoms, medical history, and demographic information. The diagnostic performance of five different LLM families (four versions of ChatGPT, two of Gemini, plus Llama, Gemma 2, and Mistral-Nemo) was analysed. Model-generated diagnoses were compared against those provided by experienced orthopaedic clinicians, which served as the reference standard. Results GPT-4o achieved the highest diagnostic sensitivity at 92.3%, significantly outperforming the other LLMs. The completeness of patient responses to symptom-related questions was the strongest predictor of accuracy for GPT-4o (p < 0.001). Inter-model agreement was moderate among GPT-4 versions, whereas models such as Llama-3.1 demonstrated notably lower accuracy and concordance. Conclusions GPT-4o demonstrated high accuracy and consistency in diagnosing OA based solely on patient-reported questionnaires, underscoring its potential as a supplementary diagnostic tool in clinical settings. Nevertheless, the reliance on patient-reported data without direct physician involvement highlights the critical need for medical oversight to ensure diagnostic accuracy. Further research is needed to refine LLM capabilities and expand their utility in broader diagnostic applications.
Affiliation(s)
- Stefano Pagano
- Department of Orthopaedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Luigi Strumolo
- Freelance health consultant & senior data analyst, Avellino, Italy
- Katrin Michalk
- Department of Orthopaedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Julia Schiegl
- Department of Orthopaedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Loreto C. Pulido
- Department of Orthopaedics Hospital of Trauma Surgery, Marktredwitz Hospital, Marktredwitz, Germany
- Jan Reinhard
- Department of Orthopaedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Guenther Maderbacher
- Department of Orthopaedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Tobias Renkawitz
- Department of Orthopaedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
- Marie Schuster
- Department of Orthopaedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
31
Williams M, Karim W, Gelman J, Raza M. Ethical data acquisition for LLMs and AI algorithms in healthcare. NPJ Digit Med 2024; 7:377. [PMID: 39715803 DOI: 10.1038/s41746-024-01399-9] [Received: 06/03/2024] [Accepted: 12/16/2024] [Indexed: 12/25/2024]
Abstract
Artificial intelligence (AI) algorithms will become increasingly integrated into our healthcare systems in the coming decades. These algorithms require large volumes of data for development and fine-tuning. In the United States, patient data is typically acquired for AI algorithms through an opt-out system, while other jurisdictions support an opt-in model. We argue that ethical principles around autonomy, patient ownership of data, and privacy should be prioritized in the data acquisition paradigm.
32
Appel JM. Artificial intelligence in medicine and the negative outcome penalty paradox. J Med Ethics 2024; 51:34-36. [PMID: 38290853 DOI: 10.1136/jme-2023-109848] [Received: 12/28/2023] [Accepted: 01/18/2024] [Indexed: 02/01/2024]
Abstract
Artificial intelligence (AI) holds considerable promise for transforming clinical diagnostics. While much has been written both about public attitudes toward the use of AI tools in medicine and about uncertainty regarding legal liability that may be delaying its adoption, the interface of these two issues has so far drawn less attention. However, understanding this interface is essential to determining how jury behaviour is likely to influence adoption of AI by physicians. One distinctive concern identified in this paper is a 'negative outcome penalty paradox' (NOPP) in which physicians risk being penalised by juries in cases with negative outcomes, whether they overrule AI determinations or accept them. The paper notes three reasons why AI in medicine is uniquely susceptible to the NOPP and urges serious further consideration of this complex dilemma.
Affiliation(s)
- Jacob M Appel
- Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
33
Grosek Š, Štivić S, Borovečki A, Ćurković M, Lajovic J, Marušić A, Mijatović A, Miksić M, Mimica S, Škrlep E, Lah Tomulić K, Erčulj V. Ethical attitudes and perspectives of AI use in medicine between Croatian and Slovenian faculty members of school of medicine: Cross-sectional study. PLoS One 2024; 19:e0310599. [PMID: 39637041 PMCID: PMC11620630 DOI: 10.1371/journal.pone.0310599] [Received: 05/24/2024] [Accepted: 09/03/2024] [Indexed: 12/07/2024]
Abstract
BACKGROUND Artificial intelligence (AI) is present in preclinical, clinical and research work across various branches of medicine. Researchers and teachers at schools of medicine may have different ethical attitudes and perspectives about the implementation of AI systems in medicine. METHODS We conducted an online survey among researchers and teachers (RTs) at the departments and institutes of two Slovenian and four Croatian schools of medicine. RESULTS The sample included 165 researchers and teachers in Slovenia and 214 in Croatia; the two groups were comparable in demographic characteristics. All participants placed high emphasis on bioethical principles when using artificial intelligence in medicine and acknowledged its usefulness in certain circumstances, but also expressed caution regarding companies providing AI systems and tools. Slovenian and Croatian researchers and teachers shared three similar perspectives on the use of AI in medicine: compliance with the highest ethical principles, explainability and transparency, and the usefulness of AI tools. Greater caution towards the use of AI in medicine and its effect on the autonomy of physicians was expressed in Croatia, while in Slovenia high emphasis was placed on understanding how AI works, along with concerns regarding the willingness and time of physicians to learn about AI. CONCLUSION Slovenian and Croatian researchers and teachers share ethical attitudes and perspectives with international researchers and physicians. It is important to facilitate understanding of the implications of AI use in medicine and to set a solid evidence-based ground for tackling ethical and legal issues.
Affiliation(s)
- Štefan Grosek
- Neonatology Section, Department of Perinatology, Division of Gynaecology and Obstetrics, University Medical Centre Ljubljana, Ljubljana, Slovenia
- Stjepan Štivić
- Institute of Bioethics, Faculty of Theology, University of Ljubljana, Ljubljana, Slovenia
- Ana Borovečki
- School of Medicine, 'A. Štampar' School of Public Health, University of Zagreb, Zagreb, Croatia
- Jaro Lajovic
- Rho Sigma Research & Consulting, Ljubljana, Slovenia
- Ana Marušić
- Center for Evidence-based Medicine, Department of Research in Biomedicine and Health, School of Medicine, University of Split, Split, Croatia
- Antonija Mijatović
- Center for Evidence-based Medicine, Department of Research in Biomedicine and Health, School of Medicine, University of Split, Split, Croatia
- Mirjana Miksić
- University Medical Centre Maribor, Clinic for Gynecology and Perinatology, Maribor, Slovenia
- Eva Škrlep
- Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
- Kristina Lah Tomulić
- Department of Pediatrics, Faculty of Medicine, University of Rijeka, Rijeka, Croatia; Pediatric Intensive Care Unit, Department of Pediatrics, Clinical Hospital Centre Rijeka, Rijeka, Croatia
- Vanja Erčulj
- Faculty of Criminal Justice and Security, University of Maribor, Ljubljana, Slovenia
34
Ozcan SGG, Erkan M. Reliability and quality of information provided by artificial intelligence chatbots on post-contrast acute kidney injury: an evaluation of diagnostic, preventive, and treatment guidance. Rev Assoc Med Bras (1992) 2024; 70:e20240891. [PMID: 39630765 PMCID: PMC11639515 DOI: 10.1590/1806-9282.20240891] [Received: 06/23/2024] [Accepted: 08/18/2024] [Indexed: 12/07/2024]
Abstract
OBJECTIVE The aim of this study was to evaluate the reliability and quality of information provided by artificial intelligence chatbots regarding the diagnosis, preventive methods, and treatment of contrast-associated acute kidney injury, while also discussing their benefits and drawbacks. METHODS The most frequently asked questions regarding contrast-associated acute kidney injury on Google Trends between January 2022 and January 2024 were posed to four artificial intelligence chatbots: ChatGPT, Gemini, Copilot, and Perplexity. The responses were evaluated based on the DISCERN score, the Patient Education Materials Assessment Tool for Printable Materials score, the Web Resource Rating scale, the Coleman-Liau index, and a Likert scale. RESULTS Per the DISCERN score, the quality of information provided by Perplexity was rated "good", while that provided by ChatGPT, Gemini, and Copilot was rated "average". Based on the Coleman-Liau index, the readability of the responses was greater than 11 for all chatbots, indicating a high level of complexity requiring a university-level education. Similarly, the understandability and applicability scores on the Patient Education Materials Assessment Tool for Printable Materials and the Web Resource Rating scale were low for all chatbots. On the Likert scale, all chatbots received favorable ratings. CONCLUSIONS While patients increasingly use artificial intelligence chatbots to obtain information about contrast-associated acute kidney injury, the readability and understandability of the information provided may be low.
Affiliation(s)
- Seray Gizem Gur Ozcan
- Bursa Yuksek Ihtisas Education and Research Hospital, Department of Radiology – Bursa, Türkiye
- Merve Erkan
- Bursa City Hospital, Department of Radiology – Bursa, Türkiye
35
Xuereb F, Portelli DJL. The knowledge and perception of patients in Malta towards artificial intelligence in medical imaging. J Med Imaging Radiat Sci 2024; 55:101743. [PMID: 39317135 DOI: 10.1016/j.jmir.2024.101743] [Received: 06/03/2024] [Revised: 07/23/2024] [Accepted: 07/31/2024] [Indexed: 09/26/2024]
Abstract
INTRODUCTION Artificial intelligence (AI) is becoming increasingly implemented in radiology, especially in image reporting. Patients' perceptions about AI integration in medical imaging are a relatively unexplored area that has received limited investigation in the literature. This study aimed to determine the current knowledge and perceptions of patients in Malta towards AI application in medical imaging. METHODS A cross-sectional study using a self-designed paper-based questionnaire, partly adapted with permission from two previous studies, was distributed in English or Maltese amongst eligible outpatients attending medical imaging examinations across public hospitals in Malta and Gozo in March 2023. RESULTS 280 questionnaires were analysed, yielding a 5.83% confidence interval. 42.1% of patients indicated basic AI knowledge, while 36.4% reported minimal to no knowledge. Responses indicated favourable opinions towards the collaborative integration of humans and AI to improve healthcare. However, participants expressed a preference for doctors to retain final decision-making when AI is used. For some statements, a statistically significant association was observed between patients' perception of AI-based technology and their gender, age, and educational background. Notably, 92.1% expressed the importance of being informed whenever AI is to be utilised in their care. DISCUSSION As key stakeholders, patients should be actively involved when AI technology is used. Informing patients about the use of AI in medical imaging is important to cultivate trust, address ethical concerns, and help ensure that AI integration in healthcare systems aligns with patients' values and needs. CONCLUSION This study highlights the need to enhance AI literacy amongst patients, possibly through awareness campaigns or educational programmes. Additionally, clear policies on the use of AI in medical imaging, and on how such AI use is communicated to patients, are necessary.
Affiliation(s)
- Francesca Xuereb
- Department of Radiography, Faculty of Health Sciences, University of Malta, Msida, Malta.
- Dr Jonathan L Portelli
- Department of Radiography, Faculty of Health Sciences, University of Malta, Msida, Malta.
36
Reading Turchioe M, Desai P, Harkins S, Kim J, Kumar S, Zhang Y, Joly R, Pathak J, Hermann A, Benda N. Differing perspectives on artificial intelligence in mental healthcare among patients: a cross-sectional survey study. Front Digit Health 2024; 6:1410758. [PMID: 39679142 PMCID: PMC11638230 DOI: 10.3389/fdgth.2024.1410758] [Received: 04/01/2024] [Accepted: 10/14/2024] [Indexed: 12/17/2024]
Abstract
Introduction Artificial intelligence (AI) is being developed for mental healthcare, but patients' perspectives on its use are unknown. This study examined differences in attitudes towards AI being used in mental healthcare by history of mental illness, current mental health status, demographic characteristics, and social determinants of health. Methods We conducted a cross-sectional survey of an online sample of 500 adults asking about general perspectives, comfort with AI, specific concerns, explainability and transparency, responsibility and trust, and the importance of relevant bioethical constructs. Results Multiple vulnerable subgroups perceive potential harms related to AI being used in mental healthcare, place importance on upholding bioethical constructs, and would blame or reduce trust in multiple parties, including mental healthcare professionals, if harm or conflicting assessments resulted from AI. Discussion Future research examining strategies for ethical AI implementation and supporting clinician AI literacy is critical for optimal patient and clinician interactions with AI in mental healthcare.
Affiliation(s)
- Pooja Desai
- Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Sarah Harkins
- Columbia University School of Nursing, New York, NY, United States
- Jessica Kim
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Shiveen Kumar
- College of Agriculture and Life Sciences, Cornell University, Ithaca, NY, United States
- Yiye Zhang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Rochelle Joly
- Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, NY, United States
- Jyotishman Pathak
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Alison Hermann
- Department of Psychiatry, Weill Cornell Medicine, New York, NY, United States
- Natalie Benda
- Columbia University School of Nursing, New York, NY, United States
37
Li S, Chen M, Liu PL, Xu J. Following Medical Advice of an AI or a Human Doctor? Experimental Evidence Based on Clinician-Patient Communication Pathway Model. Health Commun 2024:1-13. [PMID: 39494686 DOI: 10.1080/10410236.2024.2423114] [Indexed: 11/05/2024]
Abstract
Medical large language models are being introduced to the public in collaboration with governments, medical institutions, and artificial intelligence (AI) researchers. However, a crucial question remains: Will patients follow the medical advice provided by AI doctors? The lack of user research makes it difficult to provide definitive answers. Based on the clinician-patient communication pathway model, this study conducted a factorial experiment with a 2 (medical provider, AI vs. human) × 2 (information support, low vs. high) × 2 (response latency, slow vs. fast) between-subjects design (n = 535). The results showed that participants exhibited significantly lower adherence to AI doctors' advice than to human doctors. In addition, the interaction effect suggested that, under the slow-response latency condition, subjects perceived greater health benefits and patient-centeredness from human doctors, while the opposite was observed for AI doctors.
Affiliation(s)
- Shuoshuo Li
- School of Media and Communication, Shanghai Jiao Tong University
- Meng Chen
- School of Media and Communication, Shanghai Jiao Tong University
- Jian Xu
- School of Media and Communication, Shanghai Jiao Tong University
38
Yammouri G, Ait Lahcen A. AI-Reinforced Wearable Sensors and Intelligent Point-of-Care Tests. J Pers Med 2024; 14:1088. [PMID: 39590580 PMCID: PMC11595538 DOI: 10.3390/jpm14111088] [Received: 08/31/2024] [Revised: 10/25/2024] [Accepted: 10/28/2024] [Indexed: 11/28/2024]
Abstract
Artificial intelligence (AI) techniques offer great potential to advance point-of-care testing (POCT) and wearable sensors for personalized medicine applications. This review explores recent advances in, and the transformative potential of, AI for improving wearables and POCT. The integration of AI significantly empowers these tools, enabling continuous monitoring, real-time analysis, and rapid diagnostics, thus enhancing patient outcomes and healthcare efficiency. Wearable sensors powered by AI models offer tremendous opportunities for precise and non-invasive tracking of physiological conditions that are essential for early disease detection and personalized treatments. AI-empowered POCT facilitates rapid, accurate diagnostics, making medical testing kits accessible and available even in resource-limited settings. This review discusses key advances in AI applications for data processing, sensor fusion, and multivariate analytics, highlighting case examples that demonstrate their impact in different medical scenarios. In addition, the challenges associated with data privacy, regulatory approval, and technology integration into existing healthcare systems are reviewed. The outlook emphasizes the urgent need for continued innovation in AI-driven health technologies to overcome these challenges and to fully realize the potential of these techniques to revolutionize personalized medicine.
Affiliation(s)
- Ghita Yammouri
- Chemical Analysis & Biosensors, Process Engineering and Environment Laboratory, Faculty of Science and Techniques, Hassan II University of Casablanca, Mohammedia 28806, Morocco
39
Jin H, Lin Q, Lu J, Hu C, Lu B, Jiang N, Wu S, Li X. Evaluating the Effectiveness of a Generative Pretrained Transformer-Based Dietary Recommendation System in Managing Potassium Intake for Hemodialysis Patients. J Ren Nutr 2024; 34:539-545. [PMID: 38615701 DOI: 10.1053/j.jrn.2024.04.001] [Received: 01/23/2024] [Revised: 03/31/2024] [Accepted: 04/03/2024] [Indexed: 04/16/2024]
Abstract
OBJECTIVE Despite adequate dialysis, the prevalence of hyperkalemia in Chinese hemodialysis (HD) patients remains elevated. This study aims to evaluate the effectiveness of a dietary recommendation system driven by generative pretrained transformers (GPTs) in managing potassium levels in HD patients. METHODS We implemented a bespoke dietary guidance tool utilizing GPT technology. Patients undergoing HD at our center were enrolled in the study from October 2023 to November 2023. The intervention comprised two distinct phases. Initially, patients were provided with conventional dietary education focused on potassium management in HD. In the second phase, they were introduced to a novel GPT-based dietary guidance tool. This artificial intelligence (AI)-powered tool offered real-time insights into the potassium content of various foods and personalized dietary suggestions. The effectiveness of the AI tool was evaluated by assessing the precision of its dietary recommendations. Additionally, we compared predialysis serum potassium levels and the proportion of patients with hyperkalemia before and after the implementation of the GPT-based dietary guidance system. RESULTS In our analysis of 324 food photographs uploaded by 88 HD patients, the GPT-based system evaluated potassium content with an overall accuracy of 65%. Notably, the accuracy was higher for high-potassium foods at 85%, while it stood at 48% for low-potassium foods. Furthermore, the study examined the effect of GPT-based dietary advice on patients' serum potassium levels, revealing a significant reduction in those adhering to the GPT recommendations compared to recipients of traditional dietary guidance (4.57 ± 0.76 mmol/L vs. 4.84 ± 0.94 mmol/L, P = .004). Importantly, compared to traditional dietary education, dietary education based on the GPT tool reduced the proportion of hyperkalemia in HD patients from 39.8% to 25% (P = .036).
CONCLUSION These results underscore the promising role of AI in improving dietary management for HD patients. Nonetheless, the study also points out the need for enhanced accuracy in identifying low potassium foods. It paves the way for future research, suggesting the incorporation of extensive nutritional databases and the assessment of long-term outcomes. This could potentially lead to more refined and effective dietary management strategies in HD care.
Affiliation(s)
- Haijiao Jin
- Department of Nephrology, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Nephrology, Ningbo Hangzhou Bay Hospital, China; Molecular Cell Lab for Kidney Disease, Shanghai, China; Shanghai Peritoneal Dialysis Research Center, Shanghai, China; Uremia Diagnosis and Treatment Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qisheng Lin
- Department of Nephrology, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Molecular Cell Lab for Kidney Disease, Shanghai, China; Shanghai Peritoneal Dialysis Research Center, Shanghai, China; Uremia Diagnosis and Treatment Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jifang Lu
- Department of Nephrology, Ningbo Hangzhou Bay Hospital, China
- Cuirong Hu
- Department of Nephrology, Ningbo Hangzhou Bay Hospital, China
- Bohan Lu
- Department of Nephrology, Ningbo Hangzhou Bay Hospital, China
- Na Jiang
- Department of Nephrology, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Nephrology, Ningbo Hangzhou Bay Hospital, China; Molecular Cell Lab for Kidney Disease, Shanghai, China; Shanghai Peritoneal Dialysis Research Center, Shanghai, China; Uremia Diagnosis and Treatment Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shaun Wu
- WORK Medical Technology Group LTD, Hangzhou, China
- Xiaoyang Li
- Department of Medical Education, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China.
40
Botha NN, Segbedzi CE, Dumahasi VK, Maneen S, Kodom RV, Tsedze IS, Akoto LA, Atsu FS, Lasim OU, Ansah EW. Artificial intelligence in healthcare: a scoping review of perceived threats to patient rights and safety. Arch Public Health 2024; 82:188. [PMID: 39444019 PMCID: PMC11515716 DOI: 10.1186/s13690-024-01414-1] [Received: 02/25/2024] [Accepted: 10/01/2024] [Indexed: 10/25/2024]
Abstract
BACKGROUND The global health system remains determined to leverage every workable opportunity, including artificial intelligence (AI), to provide care that is consistent with patients' needs. Unfortunately, while AI models generally return high accuracy within the trials in which they are trained, their ability to predict and recommend the best course of care for prospective patients is left to chance. PURPOSE This review maps evidence from January 1, 2010 to December 31, 2023 on the perceived threats posed by the use of AI tools in healthcare to patients' rights and safety. METHODS We deployed the guidelines of Tricco et al. to conduct a comprehensive search of current literature from Nature, PubMed, Scopus, ScienceDirect, Dimensions AI, Web of Science, Ebsco Host, ProQuest, JStore, Semantic Scholar, Taylor & Francis, Emeralds, World Health Organisation, and Google Scholar. In all, 80 peer-reviewed articles qualified and were included in this study. RESULTS We report a real chance of unpredictable errors and an inadequate policy and regulatory regime for the use of AI technologies in healthcare. Moreover, medical paternalism, increased healthcare costs and disparities in insurance coverage, data security and privacy concerns, and biased and discriminatory services are imminent in the use of AI tools in healthcare. CONCLUSIONS Our findings have critical implications for achieving Sustainable Development Goals (SDGs) 3.8, 11.7, and 16. We recommend that national governments lead the roll-out of AI tools in their healthcare systems, and that other key actors in the healthcare industry contribute to developing policies on the use of AI in healthcare systems.
Affiliation(s)
- Nkosi Nkosi Botha
- Department of Health, Physical Education and Recreation, University of Cape Coast, Cape Coast, Ghana.
- Air Force Medical Centre, Armed Forces Medical Services, Air Force Base, Takoradi, Ghana.
- Cynthia E Segbedzi
- Department of Health, Physical Education and Recreation, University of Cape Coast, Cape Coast, Ghana
- Victor K Dumahasi
- Institute of Environmental and Sanitation Studies, Environmental Science, College of Basic and Applied Sciences, University of Ghana, Legon, Ghana
- Samuel Maneen
- Department of Health, Physical Education and Recreation, University of Cape Coast, Cape Coast, Ghana
- Ruby V Kodom
- Department of Health Services Management/Distance Education, University of Ghana, Legon, Ghana
- Ivy S Tsedze
- Department of Adult Health, School of Nursing and Midwifery, University of Cape Coast, Cape Coast, Ghana
- Lucy A Akoto
- Air Force Medical Centre, Armed Forces Medical Services, Air Force Base, Takoradi, Ghana
- Obed U Lasim
- Department of Health Information Management, School of Allied Health Sciences, University of Cape Coast, Cape Coast, Ghana
- Edward W Ansah
- Department of Health, Physical Education and Recreation, University of Cape Coast, Cape Coast, Ghana
41
McLean KA, Sgrò A, Brown LR, Buijs LF, Mozolowski K, Daines L, Cresswell K, Potter MA, Bouamrane MM, Harrison EM. Implementation of digital remote postoperative monitoring in routine practice: a qualitative study of barriers and facilitators. BMC Med Inform Decis Mak 2024; 24:307. [PMID: 39434121 PMCID: PMC11492749 DOI: 10.1186/s12911-024-02670-5] [Received: 06/05/2024] [Accepted: 09/06/2024] [Indexed: 10/23/2024]
Abstract
INTRODUCTION Remote monitoring can strengthen postoperative care in the community and minimise the burden of complications. However, implementation requires a clear understanding of how to sustainably integrate such complex interventions into existing care pathways. This study aimed to explore key stakeholders' perceptions of potential facilitators and barriers to the implementation of digital remote postoperative monitoring and to derive recommendations for an implementable service. METHODS A qualitative implementation study of digital remote postoperative wound monitoring was conducted across two UK tertiary care hospitals. All enrolled patients undergoing general surgery and all staff involved in postoperative care were eligible. Criterion-based purposeful sampling was used to select stakeholders for semi-structured interviews on their perspectives and experiences of digital remote postoperative monitoring. A theory-informed deductive-inductive qualitative analysis was conducted, drawing on normalisation process theory (NPT) to determine facilitators of and barriers to implementation within routine care. RESULTS Twenty-eight semi-structured interviews were conducted with patients (n = 14) and healthcare professionals (n = 14). Remote postoperative monitoring was perceived to fulfil an unmet need in facilitating the diagnosis and treatment of postoperative complications. Participants perceived clear benefits for both the delivery of health services and patient outcomes and experience, but some were concerned that these benefits may not be equally shared because of potential issues with accessibility. The COVID-19 pandemic demonstrated that telemedicine services are feasible to deliver and acceptable to participants, with examples of nurse-led remote postoperative monitoring already supported within local care pathways. However, there was a discrepancy between patients' expectations that digital health would provide more personalised care and the capacity of healthcare staff to deliver this. Without further investment in IT infrastructure and staff allocation, healthcare staff felt remote postoperative monitoring should be prioritised only for patients at the highest risk of complications. CONCLUSION The COVID-19 pandemic has sparked the digital transformation of international health systems, yet the potential of digital health interventions has yet to be realised. The benefits to stakeholders are clear; if health systems are to meet governmental policy and patient expectations, greater organisational strategy and investment are needed to ensure appropriate deployment and adoption into routine care. TRIAL REGISTRATION NCT05069103.
Affiliation(s)
- Kenneth A McLean
- Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK.
- Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, EH16 4UX, UK.
- Alessandro Sgrò
- Colorectal Unit, Western General Hospital, Edinburgh, EH4 2XU, UK
- Leo R Brown
- Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK
- Louis F Buijs
- Colorectal Unit, Western General Hospital, Edinburgh, EH4 2XU, UK
- Luke Daines
- Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, EH16 4UX, UK
- Kathrin Cresswell
- Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, EH16 4UX, UK
- Mark A Potter
- Colorectal Unit, Western General Hospital, Edinburgh, EH4 2XU, UK
- Matt-Mouley Bouamrane
- Division of Computing Science, Faculty of Natural Sciences, University of Stirling, Stirling, UK
- Ewen M Harrison
- Department of Clinical Surgery, University of Edinburgh, 51 Little France Crescent, Edinburgh, EH16 4SA, UK.
- Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, EH16 4UX, UK.
42
Sampene AK, Li C, Wiredu J. Unravelling the shift: exploring consumers' adoption or resistance of E-Pharmacy through behavioural reasoning theory. BMC Public Health 2024; 24:2789. [PMID: 39394074 PMCID: PMC11475331 DOI: 10.1186/s12889-024-20265-7] [Received: 12/23/2023] [Accepted: 10/03/2024] [Indexed: 10/13/2024]
Abstract
In the ever-evolving healthcare sector, the advent of electronic pharmacy has introduced a dynamic shift in how consumers acquire and access medical and pharmaceutical products. Drawing on behavioural reasoning theory, this study evaluated the reasons for and against adopting electronic pharmacy. By employing a qualitative approach, it unravels rich contextual and narrative insights, shedding light on the complexities of individual decision-making processes. Responses were collected from 28 participants through in-depth interviews, and thematic analysis was employed to analyse the data. The outcomes are summarized as follows. Respondents indicated that the essential reasons for adopting electronic pharmacy services include convenience and accessibility, prescription management, cost and affordability, and logistics and timely delivery. The reasons against adoption include trust and security concerns, regulatory challenges and legal uncertainties, lack of internet access, and privacy concerns. As technology changes healthcare delivery, this research closes the gap between theory and practice by offering crucial insights into the behavioural factors influencing electronic pharmacy adoption or resistance. The findings are anticipated to inform both the academic discourse on electronic health and the practical implementation of strategies to integrate electronic pharmacy services into conventional healthcare systems.
Affiliation(s)
- Cai Li
- School of Management, Jiangsu University, Zhenjiang, Jiangsu, 212013, China.
- John Wiredu
- School of Management, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, People's Republic of China
43
Marion S, Ghazal L, Roth T, Shanahan K, Thom B, Chino F. Prioritizing Patient-Centered Care in a World of Increasingly Advanced Technologies and Disconnected Care. Semin Radiat Oncol 2024; 34:452-462. [PMID: 39271280 DOI: 10.1016/j.semradonc.2024.07.001] [Indexed: 09/15/2024]
Abstract
As more treatment options in oncology lead to better outcomes and more favorable side-effect profiles, patients are living longer, and with higher quality of life, than ever before, and the survivor population is growing. As the needs of patients and providers evolve and technology advances, cancer care is subject to change. This review explores the many changes in the current oncology landscape, with a focus on the patient perspective and patient-centered care.
Affiliation(s)
- Sarah Marion
- Department of Internal Medicine, The University of Pennsylvania Health System, Philadelphia, PA
- Lauren Ghazal
- University of Rochester, School of Nursing, Rochester, NY
- Toni Roth
- Memorial Sloan Kettering Cancer Center, Medical Physics, New York, NY
- Bridgette Thom
- University of North Carolina, School of Social Work, Chapel Hill, NC
- Fumiko Chino
- Memorial Sloan Kettering Cancer Center, Radiation Oncology, New York, NY.
44
Pham TD, Teh MT, Chatzopoulou D, Holmes S, Coulthard P. Artificial Intelligence in Head and Neck Cancer: Innovations, Applications, and Future Directions. Curr Oncol 2024; 31:5255-5290. [PMID: 39330017 PMCID: PMC11430806 DOI: 10.3390/curroncol31090389] [Received: 08/07/2024] [Revised: 09/01/2024] [Accepted: 09/03/2024] [Indexed: 09/28/2024]
Abstract
Artificial intelligence (AI) is revolutionizing head and neck cancer (HNC) care by providing innovative tools that enhance diagnostic accuracy and personalize treatment strategies. This review highlights the advancements in AI technologies, including deep learning and natural language processing, and their applications in HNC. The integration of AI with imaging techniques, genomics, and electronic health records is explored, emphasizing its role in early detection, biomarker discovery, and treatment planning. Despite noticeable progress, challenges such as data quality, algorithmic bias, and the need for interdisciplinary collaboration remain. Emerging innovations like explainable AI, AI-powered robotics, and real-time monitoring systems are poised to further advance the field. Addressing these challenges and fostering collaboration among AI experts, clinicians, and researchers is crucial for developing equitable and effective AI applications. The future of AI in HNC holds significant promise, offering potential breakthroughs in diagnostics, personalized therapies, and improved patient outcomes.
Affiliation(s)
- Tuan D. Pham
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, Turner Street, London E1 2AD, UK; (M.-T.T.); (D.C.); (S.H.); (P.C.)
45
Sodhi R, Vatsyayan V, Panibatla V, Sayyad K, Williams J, Pattery T, Pal A. Impact of a pilot mHealth intervention on treatment outcomes of TB patients seeking care in the private sector using Propensity Scores Matching-Evidence collated from New Delhi, India. PLOS Digit Health 2024; 3:e0000421. [PMID: 39259731 PMCID: PMC11389929 DOI: 10.1371/journal.pdig.0000421] [Received: 11/28/2023] [Accepted: 07/23/2024] [Indexed: 09/13/2024]
Abstract
Mobile health applications called digital adherence technologies (DATs) are increasingly used to improve treatment adherence among tuberculosis (TB) patients, and among patients with other chronic diseases requiring long-term and complex medication regimens. DATs are useful in resource-limited settings because of their cost efficiency in reaching vulnerable groups (providing pill and clinic-visit reminders, relevant health information, and motivational messages) and those in remote or rural areas. Despite their growing ubiquity, there is very limited evidence on how DATs improve healthcare outcomes. We analyzed the uptake of a DAT (DS-DOST, powered by Connect for Life™, Johnson & Johnson) among different patient groups accessing TB services in New Delhi, India, and subsequently assessed its impact on patient engagement and treatment outcomes. This study aims to understand the uptake patterns of a digital adherence technology and its impact in improving follow-ups and treatment outcomes among TB patients. Propensity score matching was used to create balanced treated and untreated patient datasets, before applying ordinary least squares and logistic regression methods to estimate the causal impact of the intervention on the number of patient follow-ups and on treatment outcomes. After controlling for potential confounders, patients who installed and used the DS-DOST application received an average of 6.4 (95% CI 5.32 to 7.557) additional follow-ups relative to those who did not, a 58% increase. They also had 245% higher odds of treatment success (odds ratio: 3.458; 95% CI 1.709 to 6.996).
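The matching-then-compare workflow this abstract describes can be illustrated with a toy sketch. Everything below (propensity scores, follow-up counts, function names) is hypothetical and not from the study, which used a full propensity score matching pipeline followed by OLS and logistic regression; the sketch only conveys the matching idea and the odds-ratio arithmetic (an OR of 3.458 corresponding to roughly 245% higher odds).

```python
# Toy illustration: pair each treated patient (app user) with the untreated
# patient whose propensity score is closest, then average the per-pair
# difference in follow-up counts. All data below are made up.

def nearest_neighbor_match(treated, untreated):
    """Match each treated unit to its closest control by propensity score.

    treated / untreated: lists of (propensity_score, outcome) tuples.
    Matching is 1:1 with replacement, for simplicity.
    """
    pairs = []
    for score_t, outcome_t in treated:
        _, outcome_c = min(untreated, key=lambda u: abs(u[0] - score_t))
        pairs.append((outcome_t, outcome_c))
    return pairs

def average_treatment_effect(pairs):
    """Mean treated-minus-control difference across matched pairs."""
    return sum(t - c for t, c in pairs) / len(pairs)

def odds_ratio_to_pct_higher_odds(odds_ratio):
    """Convert an odds ratio to 'percent higher odds': (OR - 1) * 100."""
    return (odds_ratio - 1) * 100

if __name__ == "__main__":
    # (propensity score, number of follow-ups) per hypothetical patient
    treated = [(0.61, 17), (0.42, 16), (0.54, 18)]
    untreated = [(0.60, 11), (0.40, 10), (0.50, 12), (0.30, 9)]
    pairs = nearest_neighbor_match(treated, untreated)
    print(average_treatment_effect(pairs))       # extra follow-ups per patient
    print(odds_ratio_to_pct_higher_odds(3.458))  # percent higher odds
```

In the study itself, matching was followed by regression with confounder adjustment; this sketch deliberately omits that step.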
Affiliation(s)
- Jason Williams
- Disease Management Programs, Global Public Health at Johnson & Johnson, Germany
- Theresa Pattery
- Disease Management Programs, Global Public Health at Johnson & Johnson, Germany
- Arnab Pal
- William J Clinton Foundation, New Delhi, India
46
Alammari DM, Melebari RE, Alshaikh JA, Alotaibi LB, Basabeen HS, Saleh AF. Beyond Boundaries: The Role of Artificial Intelligence in Shaping the Future Careers of Medical Students in Saudi Arabia. Cureus 2024; 16:e69332. [PMID: 39398766 PMCID: PMC11471046 DOI: 10.7759/cureus.69332] [Accepted: 09/13/2024] [Indexed: 10/15/2024]
Abstract
INTRODUCTION Artificial intelligence (AI) is at the forefront of revolutionizing healthcare, applying its computational power to medical data with unprecedented precision. In this study, we explored the perspectives of medical students in the Kingdom of Saudi Arabia (KSA) regarding AI's impact on their careers and the medical landscape. METHODS A cross-sectional study conducted from February to December 2023 examined the impact of AI on the future careers of medical students in KSA, surveying approximately 400 participants, including Saudi medical students and interns. RESULTS Of the respondents, 75.4% reported familiarity with AI, and 88.9% recognized its capacity to enrich medical education, marking a paradigm shift in learning approaches. Alongside this optimism, however, there was apprehension: 42.5% harbored concerns that AI could precipitate job displacement, and 34.4% envisioned a future in which AI usurps traditional doctor roles. Despite this dichotomy, respondents broadly recognized a symbiotic, collaborative relationship between AI and human healthcare professionals. CONCLUSION Our findings underscore a critical need for educational initiatives to address fears and facilitate the seamless integration of AI into clinical practice. Moreover, AI's growing influence in diagnostic radiology and personalized healthcare plans is propelling precision medicine into new areas of innovation. As AI reshapes healthcare delivery, it promises not only greater efficiency but also improved treatment outcomes and accessibility.
Affiliation(s)
- Dalia M Alammari
- Pathology and Immunology, Ibn Sina National College for Medical Studies, Jeddah, SAU
- Rola E Melebari
- College of Medicine, Ibn Sina National College for Medical Studies, Jeddah, SAU
- Jumanah A Alshaikh
- College of Medicine, Ibn Sina National College for Medical Studies, Jeddah, SAU
- Lara B Alotaibi
- College of Medicine, Ibn Sina National College for Medical Studies, Jeddah, SAU
- Hanan S Basabeen
- College of Medicine, Ibn Sina National College for Medical Studies, Jeddah, SAU
- Alanoud F Saleh
- College of Medicine, Ibn Sina National College for Medical Studies, Jeddah, SAU
47
Desolda G, Dimauro G, Esposito A, Lanzilotti R, Matera M, Zancanaro M. A Human-AI interaction paradigm and its application to rhinocytology. Artif Intell Med 2024; 155:102933. [PMID: 39094227 DOI: 10.1016/j.artmed.2024.102933] [Received: 12/14/2023] [Revised: 07/17/2024] [Accepted: 07/19/2024] [Indexed: 08/04/2024]
Abstract
This article explores Human-Centered Artificial Intelligence (HCAI) in medical cytology, with a focus on enhancing interaction with AI. It presents a Human-AI interaction paradigm that emphasizes explainability and user control of AI systems: an iterative negotiation process based on three interaction strategies that (i) elaborate the system outcomes through iterative steps (Iterative Exploration), (ii) explain the AI system's behavior or decisions (Clarification), and (iii) allow non-expert users to trigger simple retraining of the AI model (Reconfiguration). This interaction paradigm is exploited in the redesign of an existing AI-based tool for microscopic analysis of the nasal mucosa, and the resulting tool is tested with rhinocytologists. The article discusses the results of this evaluation and outlines lessons learned that are relevant for AI in medicine.
Affiliation(s)
- Giuseppe Desolda
- Department of Computer Science, University of Bari Aldo Moro, Via E. Orabona 4, Bari, 70125, Italy.
- Giovanni Dimauro
- Department of Computer Science, University of Bari Aldo Moro, Via E. Orabona 4, Bari, 70125, Italy.
- Andrea Esposito
- Department of Computer Science, University of Bari Aldo Moro, Via E. Orabona 4, Bari, 70125, Italy.
- Rosa Lanzilotti
- Department of Computer Science, University of Bari Aldo Moro, Via E. Orabona 4, Bari, 70125, Italy.
- Maristella Matera
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Via Ponzio 34/5, Milan, 20133, Italy.
- Massimo Zancanaro
- Department of Psychology and Cognitive Science, University of Trento, Corso Bettini 31, Rovereto, 38068, Italy; Fondazione Bruno Kessler, Povo, Trento, 38123, Italy.
48
Platt J, Nong P, Smiddy R, Hamasha R, Carmona Clavijo G, Richardson J, Kardia SLR. Public comfort with the use of ChatGPT and expectations for healthcare. J Am Med Inform Assoc 2024; 31:1976-1982. [PMID: 38960730 PMCID: PMC11339496 DOI: 10.1093/jamia/ocae164] [Received: 12/15/2023] [Revised: 06/05/2024] [Accepted: 06/18/2024] [Indexed: 07/05/2024]
Abstract
OBJECTIVES To examine whether comfort with the use of ChatGPT in society differs from comfort with other uses of AI in society, and to identify whether this comfort and other patient characteristics, such as trust, privacy concerns, respect, and tech-savviness, are associated with the expected benefit of using ChatGPT to improve health. MATERIALS AND METHODS We analyzed an original survey of U.S. adults using the NORC AmeriSpeak Panel (n = 1787). We conducted paired t-tests to assess differences in comfort with AI applications, and weighted univariable regression and 2 weighted logistic regression models to identify predictors of expected benefit with and without accounting for trust in the health system. RESULTS Comfort with the use of ChatGPT in society is relatively low and differs from comfort with other, common uses of AI. Comfort was highly associated with expecting benefit. Other statistically significant factors in multivariable analysis (not including system trust) included feeling respected and low privacy concerns. Females, younger adults, and those with higher levels of education were less likely to expect benefits in models with and without system trust, which was positively associated with expecting benefits (P = 1.6 × 10⁻¹¹). Tech-savviness was not associated with the outcome. DISCUSSION Understanding the impact of large language models (LLMs) from the patient perspective is critical to ensuring that expectations align with performance, as a form of calibrated trust that acknowledges the dynamic nature of trust. CONCLUSION Including measures of system trust in evaluating LLMs could capture a range of issues critical for ensuring patient acceptance of this technological innovation.
Affiliation(s)
- Jodyn Platt
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI 48109, United States
- Paige Nong
- Division of Health Policy & Management, University of Minnesota School of Public Health, Minneapolis, MN 55455, United States
- Renée Smiddy
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI 48109, United States
- Reema Hamasha
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI 48109, United States
- Gloria Carmona Clavijo
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI 48109, United States
- Joshua Richardson
- Galter Health Sciences Library, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, United States
- Sharon L R Kardia
- Department of Epidemiology, University of Michigan School of Public Health, Ann Arbor, MI 48109, United States
49
Funer F, Schneider D, Heyen NB, Aichinger H, Klausen AD, Tinnemeyer S, Liedtke W, Salloch S, Bratan T. Impacts of Clinical Decision Support Systems on the Relationship, Communication, and Shared Decision-Making Between Health Care Professionals and Patients: Multistakeholder Interview Study. J Med Internet Res 2024; 26:e55717. [PMID: 39178023 PMCID: PMC11380058 DOI: 10.2196/55717] [Received: 12/31/2023] [Revised: 05/02/2024] [Accepted: 06/07/2024] [Indexed: 08/24/2024]
Abstract
BACKGROUND Clinical decision support systems (CDSSs) are increasingly being introduced into various domains of health care. Little is known so far about the impact of such systems on the health care professional-patient relationship, and there is a lack of agreement about whether and how patients should be informed about the use of CDSSs. OBJECTIVE This study aims to explore, in an empirically informed manner, the potential implications for the health care professional-patient relationship and to underline the importance of this relationship when using CDSSs, for both patients and future professionals. METHODS Using methodological triangulation, 15 medical students and 12 trainee nurses were interviewed in semistructured interviews, and 18 patients took part in focus groups, between April 2021 and April 2022. All participants came from Germany. Three examples of CDSSs covering different areas of health care (ie, surgery, nephrology, and intensive home care) were used as stimuli to identify similarities and differences in the use of CDSSs across fields of application. The interview and focus group transcripts were analyzed using structured qualitative content analysis. RESULTS Three topics were identified that interdependently address the interactions between patients and health care professionals: (1) CDSSs and their impact on the roles of and requirements for health care professionals, (2) CDSSs and their impact on the relationship between health care professionals and patients (including communication requirements for shared decision-making), and (3) stakeholders' expectations for patient education and information about CDSSs and their use. CONCLUSIONS The results indicate that using CDSSs could restructure established power and decision-making relationships between (future) health care professionals and patients. In addition, respondents expected the use of CDSSs to involve more communication and therefore anticipated an increased time commitment. The results shed new light on the existing discourse by demonstrating that the anticipated impact of CDSSs on the health care professional-patient relationship appears to stem less from the function of a CDSS and more from its integration into the relationship. The anticipated effects on the relationship between health care professionals and patients could therefore be specifically addressed in patient information about the use of CDSSs.
Affiliation(s)
- Florian Funer
- Institute for Ethics and History of Medicine, Eberhard Karls University Tuebingen, Tuebingen, Germany
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Nils B Heyen
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Heike Aichinger
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Andrea Diana Klausen
- Institute for Medical Informatics, University Medical Center - RWTH Aachen, Aachen, Germany
- Sara Tinnemeyer
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Wenke Liedtke
- Department of Social Work, Protestant University of Applied Sciences Rhineland-Westphalia-Lippe, Bochum, Germany
- Ethics and its Didactics, Faculty of Theology, University of Greifswald, Greifswald, Germany
- Sabine Salloch
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Tanja Bratan
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
50
Baghdadi LR, Mobeirek AA, Alhudaithi DR, Albenmousa FA, Alhadlaq LS, Alaql MS, Alhamlan SA. Patients' Attitudes Toward the Use of Artificial Intelligence as a Diagnostic Tool in Radiology in Saudi Arabia: Cross-Sectional Study. JMIR Hum Factors 2024; 11:e53108. [PMID: 39110973 PMCID: PMC11339559 DOI: 10.2196/53108] [Received: 09/26/2023] [Revised: 03/15/2024] [Accepted: 06/22/2024] [Indexed: 08/25/2024]
Abstract
BACKGROUND Artificial intelligence (AI) is widely used in various medical fields, including diagnostic radiology, as a tool for greater efficiency, precision, and accuracy. The integration of AI as a radiological diagnostic tool has the potential to mitigate delays in diagnosis, which could, in turn, affect patients' prognosis and treatment outcomes. The literature shows conflicting results regarding patients' attitudes toward AI as a diagnostic tool, and to the best of our knowledge, no similar study has been conducted in Saudi Arabia. OBJECTIVE The objectives of this study were to examine patients' attitudes toward the use of AI as a tool in diagnostic radiology at King Khalid University Hospital, Saudi Arabia, and to explore potential associations between patients' attitudes and various sociodemographic factors. METHODS This descriptive-analytical cross-sectional study was conducted in a tertiary care hospital. Data were collected from patients scheduled for radiological imaging through a validated self-administered questionnaire. The main outcome was patients' attitudes toward the use of AI in radiology, measured as mean scores on 5 factors: distrust and accountability (factor 1), procedural knowledge (factor 2), personal interaction and communication (factor 3), efficiency (factor 4), and methods of providing information to patients (factor 5). Data were analyzed using the Student t test, one-way analysis of variance followed by post hoc tests, and multivariable analysis. RESULTS A total of 382 participants (n=273, 71.5% women and n=109, 28.5% men) completed the survey and were included in the analysis. The mean age of the respondents was 39.51 (SD 13.26) years. Participants favored physicians over AI for procedural knowledge, personal interaction, and being informed, but demonstrated a neutral attitude on distrust and accountability and on efficiency. Marital status was associated with distrust and accountability, procedural knowledge, and personal interaction. Associations were also found between self-reported health status and being informed, and between field of specialization and distrust and accountability. CONCLUSIONS Patients were keen to understand the work of AI in radiology but favored personal interaction with a radiologist. Patients were impartial toward AI replacing radiologists and toward the efficiency of AI, which should be a consideration in future policy development and integration. Future research involving multicenter studies in different regions of Saudi Arabia is required.
Affiliation(s)
- Leena R Baghdadi
- Department of Family and Community Medicine, College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Arwa A Mobeirek
- College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Leen S Alhadlaq
- College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Maisa S Alaql
- College of Medicine, King Saud University, Riyadh, Saudi Arabia