1. Rittenberg BSP, Holland CW, Barnhart GE, Gaudreau SM, Neyedli HF. Trust with increasing and decreasing reliability. Human Factors 2024; 66:2569-2589. PMID: 38445652; PMCID: PMC11487872; DOI: 10.1177/00187208241228636.
Abstract
OBJECTIVE The primary purpose was to determine how trust changes over time when automation reliability increases or decreases. A secondary purpose was to determine how task-specific self-confidence is associated with trust and reliability level. BACKGROUND Both overtrust and undertrust can be detrimental to system performance; therefore, the temporal dynamics of trust under changing reliability levels need to be explored. METHOD Two experiments used a dominant-color identification task in which automation provided a recommendation to users, with the reliability of the recommendation changing over 300 trials. In Experiment 1, two groups of participants interacted with the system: one group started with a 50% reliable system that increased to 100%, while the other used a system that decreased from 100% to 50%. Experiment 2 added a group for which automation reliability increased from 70% to 100%. RESULTS Trust was initially high in the decreasing-reliability group and then declined as reliability decreased; however, trust also declined in the 50% increasing-reliability group. Furthermore, when user self-confidence increased, automation reliability had a greater influence on trust. In Experiment 2, the 70% increasing-reliability group showed increased trust in the system. CONCLUSION Trust does not always track the reliability of automated systems; in particular, it is difficult for trust to recover once the user has interacted with a low-reliability system. APPLICATIONS This study provides initial evidence on the dynamics of trust in automation that improves over time, suggesting that users should only start interacting with automation once it is sufficiently reliable.
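To make the design above concrete, here is a minimal Python sketch (not taken from the paper) that generates the three reliability schedules and samples whether the automation's recommendation is correct on each of the 300 trials; the linear ramp shape and the random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS = 300

def reliability_schedule(start, end, n_trials=N_TRIALS):
    """Assumed linear ramp in automation reliability across trials."""
    return np.linspace(start, end, n_trials)

# Groups described in the abstract (Experiments 1 and 2).
schedules = {
    "decreasing_100_to_50": reliability_schedule(1.00, 0.50),
    "increasing_50_to_100": reliability_schedule(0.50, 1.00),
    "increasing_70_to_100": reliability_schedule(0.70, 1.00),
}

# Sample trial-by-trial correctness of the automation's recommendation.
for name, p_correct in schedules.items():
    correct = rng.random(N_TRIALS) < p_correct
    print(f"{name}: observed reliability = {correct.mean():.2f}")
```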
2. Mittelstädt JM, Maier J, Goerke P, Zinn F, Hermes M. Large language models can outperform humans in social situational judgments. Sci Rep 2024; 14:27449. PMID: 39523436; PMCID: PMC11551142; DOI: 10.1038/s41598-024-79048-0.
Abstract
Large language models (LLMs) have been a catalyst for public interest in artificial intelligence (AI). These technologies perform some knowledge-based tasks better and faster than human beings. However, whether AIs can correctly assess social situations and devise socially appropriate behavior is still unclear. We administered an established Situational Judgment Test (SJT) to five different chatbots and compared their results with the responses of human participants (N = 276). Claude, Copilot, and you.com's smart assistant performed significantly better than humans in proposing suitable behaviors in social situations. Moreover, their effectiveness ratings of different behavior options aligned well with expert ratings. These results indicate that LLMs are capable of producing adept social judgments. While this constitutes an important requirement for their use as virtual social assistants, challenges and risks remain associated with their widespread use in social contexts.
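As a rough illustration of the kind of alignment check reported above, the sketch below correlates hypothetical chatbot effectiveness ratings of behavior options with hypothetical expert ratings; the values and the ten-option format are invented, not the study's SJT items.

```python
from scipy.stats import spearmanr

# Hypothetical 1-5 effectiveness ratings for ten behavior options (illustrative only).
expert_ratings  = [5, 4, 2, 1, 3, 4, 5, 2, 3, 1]
chatbot_ratings = [5, 4, 3, 1, 3, 5, 4, 2, 2, 1]

rho, p = spearmanr(expert_ratings, chatbot_ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```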
Affiliation(s)
- Justin M Mittelstädt, Department of Aviation and Space Psychology, German Aerospace Center, Institute of Aerospace Medicine, 22335 Hamburg, Germany
- Julia Maier, Department of Aviation and Space Psychology, German Aerospace Center, Institute of Aerospace Medicine, 22335 Hamburg, Germany
- Panja Goerke, Department of Aviation and Space Psychology, German Aerospace Center, Institute of Aerospace Medicine, 22335 Hamburg, Germany
- Frank Zinn, Department of Aviation and Space Psychology, German Aerospace Center, Institute of Aerospace Medicine, 22335 Hamburg, Germany
- Michael Hermes, Department of Aviation and Space Psychology, German Aerospace Center, Institute of Aerospace Medicine, 22335 Hamburg, Germany
3. Wijaya TT, Yu Q, Cao Y, He Y, Leung FKS. Latent Profile Analysis of AI Literacy and Trust in Mathematics Teachers and Their Relations with AI Dependency and 21st-Century Skills. Behav Sci (Basel) 2024; 14:1008. PMID: 39594308; PMCID: PMC11591428; DOI: 10.3390/bs14111008.
Abstract
Artificial Intelligence (AI) technology, particularly generative AI, has positively impacted education by enhancing mathematics instruction with personalized learning experiences and improved data analysis. Nonetheless, variations in AI literacy, trust in AI, and dependency on these technologies among mathematics teachers can significantly influence their development of 21st-century skills such as self-confidence, problem-solving, critical thinking, creative thinking, and collaboration. This study aims to identify distinct profiles of AI literacy, trust, and dependency among mathematics teachers and examines how these profiles correlate with variations in the aforementioned skills. Using a cross-sectional research design, the study collected data from 489 mathematics teachers in China. A robust three-step latent profile analysis method was utilized to analyze the data. The research revealed five distinct profiles of AI literacy and trust among the teachers: (1) Basic AI Engagement; (2) Developing AI Literacy, Skeptical of AI; (3) Balanced AI Competence; (4) Advanced AI Integration; and (5) AI Expertise and Confidence. The study found that an increase in AI literacy and trust directly correlates with an increase in AI dependency and a decrease in skills such as self-confidence, problem-solving, critical thinking, creative thinking, and collaboration. The findings underscore the need for careful integration of AI technologies in educational settings. Excessive reliance on AI can lead to detrimental dependencies, which may hinder the development of essential 21st-century skills. The study contributes to the existing literature by providing empirical evidence on the impact of AI literacy and trust on the professional development of mathematics teachers. It also offers practical implications for educational policymakers and institutions to consider balanced approaches to AI integration, ensuring that AI enhances rather than replaces the critical thinking and problem-solving capacities of educators.
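Latent profile analysis of this kind is usually run in dedicated software (e.g., Mplus); as a rough stand-in, the profile-enumeration step can be sketched with Gaussian mixture models compared by BIC. The simulated scores and the range of one to six profiles below are assumptions for illustration, not the study's data or settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Illustrative stand-in for 489 teachers' AI-literacy and AI-trust composite scores.
X = rng.normal(loc=3.5, scale=0.8, size=(489, 2))

# Enumerate candidate profile solutions and compare them by BIC (lower is better).
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, n_init=10, random_state=0).fit(X)
    print(f"{k} profiles: BIC = {gmm.bic(X):.1f}")
```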
Affiliation(s)
- Tommy Tanu Wijaya, School of Mathematical Sciences, Beijing Normal University, Beijing 100088, China; National Research Institute for Mathematics Teaching Materials, Beijing 100190, China
- Qingchun Yu, School of Mathematical Sciences, Beijing Normal University, Beijing 100088, China; National Research Institute for Mathematics Teaching Materials, Beijing 100190, China
- Yiming Cao, School of Mathematical Sciences, Beijing Normal University, Beijing 100088, China; National Research Institute for Mathematics Teaching Materials, Beijing 100190, China
- Yahan He, School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
- Frederick K. S. Leung, College of Education for the Future, Beijing Normal University, Zhuhai 519087, China
4. Rana MM, Siddiqee MS, Sakib MN, Ahamed MR. Assessing AI adoption in developing country academia: A trust and privacy-augmented UTAUT framework. Heliyon 2024; 10:e37569. PMID: 39315142; PMCID: PMC11417232; DOI: 10.1016/j.heliyon.2024.e37569.
Abstract
The rapid evolution of Artificial Intelligence (AI) and its widespread adoption have given rise to a critical need to understand the underlying factors that shape users' behavioral intentions. The main objective of this study is therefore to explain users' behavioral intentions and use behavior regarding AI technologies for academic purposes in a developing country. The study adopts the unified theory of acceptance and use of technology (UTAUT) model and extends it with two dimensions: trust and privacy. Data were collected from 310 AI users, including teachers, researchers, and students. The study finds that users' behavioral intention is positively and significantly associated with trust, social influence, effort expectancy, and performance expectancy. Privacy, on the other hand, has a negative yet significant relationship with behavioral intention, revealing that privacy concerns can deter users from intending to use AI technologies, a valuable insight for developers and educators. In determining use behavior, facilitating conditions, behavioral intention, and privacy have a significant positive impact. The study did not find a significant relationship between trust and use behavior, suggesting that service providers should maintain an unwavering focus on security measures, credible endorsements, and transparency to build user confidence. In an era dominated by the fourth industrial revolution, this research underscores the pivotal roles of trust and privacy in technology adoption. In addition, it sheds light on users' perspectives to help align AI-based technologies with the education systems of developing countries. The practical implications encompass insights for service providers, educational institutions, and policymakers, facilitating the smooth adoption of AI technologies in developing countries while emphasizing the importance of trust, privacy, and ongoing refinement.
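The study's structural model would normally be estimated with SEM/PLS software; the sketch below uses a plain OLS regression on simulated scores only to make the hypothesized relationships between the UTAUT constructs and behavioral intention concrete. Variable names, effect sizes, and the data are assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 310  # sample size reported in the abstract

# Simulated construct scores on a 1-5 scale (illustrative only).
df = pd.DataFrame({c: rng.normal(3.5, 0.8, n) for c in [
    "trust", "privacy", "social_influence", "effort_expectancy", "performance_expectancy"]})
df["behavioral_intention"] = (
    0.3 * df["trust"] - 0.1 * df["privacy"] + 0.2 * df["social_influence"]
    + 0.2 * df["effort_expectancy"] + 0.3 * df["performance_expectancy"]
    + rng.normal(0, 0.5, n)
)

model = smf.ols(
    "behavioral_intention ~ trust + privacy + social_influence"
    " + effort_expectancy + performance_expectancy",
    data=df,
).fit()
print(model.params.round(2))
```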
Affiliation(s)
- Md. Masud Rana, Department of Management, University of Dhaka, Bangladesh
- Md. Rafi Ahamed, Department of International Business, University of Dhaka, Bangladesh
5. Steyvers M, Kumar A.
Abstract
Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.
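The complementarity criterion described above can be written as a one-line check: the assisted human must outperform both the unassisted human and the AI alone. The sketch below, with invented accuracy values, is only meant to make that definition explicit.

```python
def is_complementary(human_alone: float, ai_alone: float, human_with_ai: float) -> bool:
    """Complementarity: human-with-AI performance exceeds both solo baselines."""
    return human_with_ai > max(human_alone, ai_alone)

# Hypothetical task accuracies for illustration.
print(is_complementary(human_alone=0.72, ai_alone=0.80, human_with_ai=0.85))  # True
print(is_complementary(human_alone=0.72, ai_alone=0.80, human_with_ai=0.78))  # False
```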
Affiliation(s)
- Mark Steyvers, Department of Cognitive Sciences, University of California, Irvine
- Aakriti Kumar, Department of Cognitive Sciences, University of California, Irvine
6. Jiang P, Niu W, Wang Q, Yuan R, Chen K. Understanding Users' Acceptance of Artificial Intelligence Applications: A Literature Review. Behav Sci (Basel) 2024; 14:671. PMID: 39199067; PMCID: PMC11351494; DOI: 10.3390/bs14080671.
Abstract
In recent years, with the continuous expansion of artificial intelligence (AI) application forms and fields, users' acceptance of AI applications has attracted increasing attention from scholars and business practitioners. Although extant studies have extensively explored user acceptance of different AI applications, there is still a lack of understanding of the roles played by different AI applications in human-AI interaction, which may limit the understanding of inconsistent findings about user acceptance of AI. This study addresses this issue through a systematic literature review of AI acceptance research in leading Information Systems and Marketing journals from 2020 to 2023. Based on a review of 80 papers, the study contributes by (i) providing an overview of the methodologies and theoretical frameworks used in AI acceptance research; (ii) summarizing the key factors, potential mechanisms, and theorization of users' acceptance responses to AI service providers and AI task substitutes, respectively; and (iii) discussing the limitations of extant research and providing guidance for future work.
Affiliation(s)
- Pengtao Jiang, School of Information Science and Engineering, NingboTech University, Ningbo 315100, China; Nottingham University Business School China, University of Nottingham Ningbo China, Ningbo 315100, China
- Wanshu Niu, Business School, Ningbo University, Ningbo 315211, China
- Qiaoli Wang, School of Management, Zhejiang University, Hangzhou 310058, China
- Ruizhi Yuan, Nottingham University Business School China, University of Nottingham Ningbo China, Ningbo 315100, China
- Keyu Chen, Business School, Ningbo University, Ningbo 315211, China
7. Kostick-Quenet K, Lang BH, Smith J, Hurley M, Blumenthal-Barby J. Trust criteria for artificial intelligence in health: normative and epistemic considerations. Journal of Medical Ethics 2024; 50:544-551. PMID: 37979976; PMCID: PMC11101592; DOI: 10.1136/jme-2023-109338.
Abstract
Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high-stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool's computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can drive over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand how variability in trust criteria across stakeholders, settings, tools and use cases may influence approaches to using AI/ML tools in real settings. As part of a 5-year, multi-institutional Agency for Healthcare Research and Quality-funded study, we identify trust criteria for a survival prediction algorithm intended to support clinical decision-making for left ventricular assist device therapy, using semistructured interviews (n=40) with patients and physicians, analysed via thematic analysis. Findings suggest that physicians and patients share similar empirical considerations for trust, which were primarily epistemic in nature, focused on the accuracy and validity of AI/ML estimates. Trust evaluations considered the nature, integrity and relevance of training data rather than the computational nature of the algorithms themselves, suggesting a need to distinguish 'source' from 'functional' explainability. To a lesser extent, trust criteria were also relational (endorsement from others) and sometimes based on personal beliefs and experience. We discuss implications for promoting appropriate and responsible trust calibration for clinical decision-making using AI/ML.
Affiliation(s)
- Kristin Kostick-Quenet, Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, USA
- Benjamin H Lang, Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, USA; Department of Philosophy, University of Oxford, Oxford, Oxfordshire, UK
- Jared Smith, Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, USA
- Meghan Hurley, Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, USA
8. Cecil J, Lermer E, Hudecek MFC, Sauer J, Gaube S. Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task. Sci Rep 2024; 14:9736. PMID: 38679619; PMCID: PMC11056364; DOI: 10.1038/s41598-024-60220-5.
Abstract
Despite the rise of decision support systems enabled by artificial intelligence (AI) in personnel selection, their impact on decision-making processes is largely unknown. Consequently, we conducted five experiments (N = 1403 students and Human Resource Management (HRM) employees) investigating how people interact with AI-generated advice in a personnel selection task. In all pre-registered experiments, we presented correct and incorrect advice. In Experiments 1a and 1b, we manipulated the source of the advice (human vs. AI). In Experiments 2a, 2b, and 2c, we further manipulated the type of explainability of the AI advice (2a and 2b: heatmaps; 2c: charts). We hypothesized that accurate and explainable advice improves decision-making. Task performance, perceived advice quality, and confidence ratings were regressed on the independent variables. The results consistently showed that incorrect advice negatively impacted performance, as people failed to dismiss it (i.e., overreliance). Additionally, we found that the effects of the source and explainability of advice on the dependent variables were limited. The lack of reduction in participants' overreliance on inaccurate advice when the systems' predictions were made more explainable highlights the complexity of human-AI interaction and the need for regulation and quality standards in HRM.
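A minimal sketch of the kind of analysis described, regressing decision correctness on advice correctness, advice source, and explainability with a logistic model; the trial-level data, coding, and effect sizes are simulated assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1403  # total participants across the five experiments

df = pd.DataFrame({
    "advice_correct": rng.integers(0, 2, n),  # 0 = incorrect, 1 = correct advice
    "source_ai": rng.integers(0, 2, n),       # 0 = human source, 1 = AI source
    "explainable": rng.integers(0, 2, n),     # 0 = no explanation, 1 = heatmap/chart
})
logit_p = -0.2 + 1.2 * df["advice_correct"] + 0.05 * df["source_ai"] + 0.05 * df["explainable"]
df["correct_decision"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("correct_decision ~ advice_correct + source_ai + explainable", data=df).fit()
print(model.params.round(2))
```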
Affiliation(s)
- Julia Cecil, Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Munich, Germany
- Eva Lermer, Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Munich, Germany; Department of Business Psychology, Technical University of Applied Sciences Augsburg, Augsburg, Germany
- Matthias F C Hudecek, Department of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Jan Sauer, Department of Business Administration, University of Applied Sciences Amberg-Weiden, Weiden, Germany
- Susanne Gaube, Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Munich, Germany; UCL Global Business School for Health, University College London, London, UK
9. Campion JR, O'Connor DB, Lahiff C. Human-artificial intelligence interaction in gastrointestinal endoscopy. World J Gastrointest Endosc 2024; 16:126-135. PMID: 38577646; PMCID: PMC10989254; DOI: 10.4253/wjge.v16.i3.126.
Abstract
The number and variety of applications of artificial intelligence (AI) in gastrointestinal (GI) endoscopy are growing rapidly. New technologies based on machine learning (ML) and convolutional neural networks (CNNs) are at various stages of development and deployment to assist patients and endoscopists in preparing for endoscopic procedures, in the detection, diagnosis and classification of pathology during endoscopy, and in the confirmation of key performance indicators. Platforms based on ML and CNNs require regulatory approval as medical devices. Interactions between humans and the technologies we use are complex and are influenced by design, behavioural and psychological elements. Because AI differs substantially from prior technologies, important differences may be expected in how we interact with advice from AI technologies. Human–AI interaction (HAII) may be optimised by developing AI algorithms to minimise false positives and by designing platform interfaces to maximise usability. Human factors influencing HAII may include automation bias, alarm fatigue, algorithm aversion, learning effects and deskilling. Each of these areas merits further study in the specific setting of AI applications in GI endoscopy, and professional societies should engage to ensure that sufficient emphasis is placed on human-centred design in the development of new AI technologies.
Affiliation(s)
- John R Campion, Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland; School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
- Donal B O'Connor, Department of Surgery, Trinity College Dublin, Dublin D02 R590, Ireland
- Conor Lahiff, Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland; School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
10. Strickland L, Farrell S, Wilson MK, Hutchinson J, Loft S. How do humans learn about the reliability of automation? Cogn Res Princ Implic 2024; 9:8. PMID: 38361149; PMCID: PMC10869332; DOI: 10.1186/s41235-024-00533-1.
Abstract
In a range of settings, human operators make decisions with the assistance of automation, the reliability of which can vary depending upon context. Currently, the processes by which humans track the reliability of automation are unclear. In the current study, we tested cognitive models of learning that could potentially explain how humans track automation reliability. We fitted several alternative cognitive models to participants' judgements of automation reliability observed in a maritime classification task in which participants were provided with automated advice. We examined three experiments comprising eight between-subjects conditions and 240 participants in total. Our results favoured a two-kernel delta-rule model of learning, which specifies that humans learn by prediction error and respond according to a learning rate that is sensitive to environmental volatility. However, we found substantial heterogeneity in learning processes across participants. These outcomes speak to the learning processes underlying how humans estimate automation reliability and thus have implications for practice.
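A minimal sketch of a single-kernel delta-rule learner, the building block of the two-kernel model favoured by the study: the reliability estimate is nudged toward each new outcome in proportion to the prediction error. The learning rate, prior, and simulated outcomes are illustrative rather than the fitted values; the two-kernel version would maintain a fast and a slow estimate and blend them to adapt to environmental volatility.

```python
import numpy as np

def delta_rule(outcomes, alpha=0.1, prior=0.5):
    """Delta-rule update: estimate += alpha * (outcome - estimate)."""
    estimate, trajectory = prior, []
    for outcome in outcomes:  # outcome: 1 = automation correct, 0 = incorrect
        estimate += alpha * (outcome - estimate)
        trajectory.append(estimate)
    return np.array(trajectory)

rng = np.random.default_rng(4)
# Automation that is 90% reliable for 50 trials, then drops to 60% for 50 trials.
outcomes = np.concatenate([rng.random(50) < 0.9, rng.random(50) < 0.6]).astype(int)
print(delta_rule(outcomes)[[9, 49, 59, 99]].round(2))  # early, pre-drop, post-drop, final estimates
```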
Affiliation(s)
- Luke Strickland, The Future of Work Institute, Curtin University, 78 Murray Street, Perth 6000, Australia
- Simon Farrell, The School of Psychological Science, The University of Western Australia, Crawley, Perth, Australia
- Micah K Wilson, The Future of Work Institute, Curtin University, 78 Murray Street, Perth 6000, Australia
- Jack Hutchinson, The School of Psychological Science, The University of Western Australia, Crawley, Perth, Australia
- Shayne Loft, The School of Psychological Science, The University of Western Australia, Crawley, Perth, Australia
11. Agudo U, Liberal KG, Arrese M, Matute H. The impact of AI errors in a human-in-the-loop process. Cogn Res Princ Implic 2024; 9:1. PMID: 38185767; PMCID: PMC10772030; DOI: 10.1186/s41235-023-00529-3.
Abstract
Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human-computer interaction may influence the final decision. In two experiments, we simulated an automated decision-making process in which participants judged multiple defendants in relation to various crimes, and we manipulated when participants received support from a supposed automated system with Artificial Intelligence (before or after they made their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/. Experiment 2 was preregistered.
Affiliation(s)
- Ujué Agudo, Bikolabs/Biko, Pamplona, Spain; Departamento de Psicología, Universidad de Deusto, Avda. Universidad 24, 48007 Bilbao, Spain
- Helena Matute, Departamento de Psicología, Universidad de Deusto, Avda. Universidad 24, 48007 Bilbao, Spain
12. Schreibelmayr S, Moradbakhti L, Mara M. First impressions of a financial AI assistant: differences between high trust and low trust users. Front Artif Intell 2023; 6:1241290. PMID: 37854078; PMCID: PMC10579608; DOI: 10.3389/frai.2023.1241290.
Abstract
Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between a high trust and a low trust user group revealed significant differences in both open-ended and closed-ended answers. While high trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and trust calibration of users who differ in their initial levels of trust.
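As a rough illustration of the high-trust versus low-trust comparison, the sketch below median-splits simulated trust ratings and contrasts perceived competence between the two groups with Welch's t-test; the scales, values, and split rule are assumptions rather than the study's procedure.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
# Illustrative ratings for 127 participants (scales and values are invented).
trust = rng.normal(3.2, 0.9, 127)
competence = 0.6 * trust + rng.normal(2.0, 0.7, 127)

median = np.median(trust)
high, low = competence[trust >= median], competence[trust < median]
t, p = ttest_ind(high, low, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```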
Affiliation(s)
- Martina Mara, Robopsychology Lab, Linz Institute of Technology, Johannes Kepler University Linz, Linz, Austria
13. Horowitz MC, Kahn L, Macdonald J, Schneider J. Adopting AI: how familiarity breeds both trust and contempt. AI & Society 2023:1-15. PMID: 37358948; PMCID: PMC10175926; DOI: 10.1007/s00146-023-01666-5.
Abstract
Despite pronouncements about the inevitable diffusion of artificial intelligence and autonomous technologies, in practice it is human behavior, not technology in a vacuum, that dictates how technology seeps into, and changes, societies. To better understand how human preferences shape technological adoption and the spread of AI-enabled autonomous technologies, we look at representative adult samples of US public opinion in 2018 and 2020 on the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense. By focusing on these four diverse uses of AI-enabled autonomy that span transportation, medicine, and national security, we exploit the inherent variation between these AI-enabled autonomous use cases. We find that those with familiarity and expertise with AI and similar technologies were more likely than those with a limited understanding of the technology to support all of the autonomous applications we tested (except weapons). Individuals who had already delegated the act of driving by using ride-share apps were also more positive about autonomous vehicles. However, familiarity cuts both ways; individuals are also less likely to support AI-enabled technologies when applied directly to their lives, especially if the technology automates tasks they are already familiar with operating. Finally, we find that familiarity plays little role in support for AI-enabled military applications, for which opposition has slightly increased over time. Supplementary information: The online version contains supplementary material available at 10.1007/s00146-023-01666-5.
Affiliation(s)
- Lauren Kahn, Council on Foreign Relations, Washington, DC, USA
14. Zhang G, Chong L, Kotovsky K, Cagan J. Trust in an AI versus a Human teammate: The effects of teammate identity and performance on Human-AI cooperation. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2022.107536.
15. Gain-loss separability in human- but not computer-based changes of mind. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107712.
16. Data on human decision, feedback, and confidence during an artificial intelligence-assisted decision-making task. Data Brief 2023; 46:108884. PMID: 36691561; PMCID: PMC9860095; DOI: 10.1016/j.dib.2023.108884.
Abstract
The data were collected from a human subjects study in which 100 participants solved chess puzzle problems with artificial intelligence (AI) assistance. The participants were assigned to one of two experimental conditions determined by the direction of the change in AI performance at problem 20: 1) high- to low-performing and 2) low- to high-performing. The dataset contains information about the participants' moves before an AI suggestion, the goodness evaluation scores of these moves, the AI suggestion, feedback, and the participants' confidence in the AI and self-confidence during three initial practice problems and 30 experimental problems. The dataset contains 100 CSV files, one per participant. There is an opportunity for this dataset to be utilized in various domains that research human-AI collaboration scenarios, such as human-computer interaction, psychology, computer science, and team management in engineering/business. Not only can the dataset enable further cognitive and behavioral analysis in human-AI collaboration contexts, but it can also provide an experimental platform to develop and test future confidence calibration methods.
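A minimal sketch of how the per-participant CSV files could be loaded and aggregated with pandas; the directory layout and column names (e.g., problem, confidence_in_ai) are placeholders, since the actual schema is documented in the article rather than reproduced here.

```python
import glob
import pandas as pd

frames = []
for path in sorted(glob.glob("data/participant_*.csv")):  # assumed file layout
    df = pd.read_csv(path)
    df["participant"] = path
    frames.append(df)

if frames:
    all_trials = pd.concat(frames, ignore_index=True)
    # Example aggregate: mean confidence in the AI before vs. after the performance change
    # at problem 20, assuming hypothetical columns "problem" and "confidence_in_ai".
    if {"problem", "confidence_in_ai"} <= set(all_trials.columns):
        print(all_trials.groupby(all_trials["problem"] > 20)["confidence_in_ai"].mean())
```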
17. AI-enabled investment advice: Will users buy it? Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2022.107481.
18. Hofeditz L, Clausen S, Rieß A, Mirbabaie M, Stieglitz S. Applying XAI to an AI-based system for candidate management to mitigate bias and discrimination in hiring. Electronic Markets 2022; 32:2207-2233. PMID: 36568961; PMCID: PMC9764302; DOI: 10.1007/s12525-022-00600-9.
Abstract
Assuming that potential biases of Artificial Intelligence (AI)-based systems can be identified and controlled for (e.g., by providing high-quality training data), employing such systems to augment human resource (HR) decision-makers in candidate selection provides an opportunity to make selection processes more objective. However, as the final hiring decision is likely to remain with humans, prevalent human biases could still cause discrimination. This work investigates the impact of an AI-based system's candidate recommendations on humans' hiring decisions and how this relation could be moderated by an Explainable AI (XAI) approach. We used a self-developed platform and conducted an online experiment with 194 participants. Our quantitative and qualitative findings suggest that the recommendations of an AI-based system can reduce discrimination against older and female candidates but appear to cause fewer selections of foreign-race candidates. Contrary to our expectations, the same XAI approach moderated these effects differently depending on the context. Supplementary information: The online version contains supplementary material available at 10.1007/s12525-022-00600-9.
Affiliation(s)
- Lennart Hofeditz, Universität Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
- Sünje Clausen, Universität Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
- Alexander Rieß, Universität Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
- Milad Mirbabaie, Paderborn University, Warburger Str. 100, 33098 Paderborn, Germany
- Stefan Stieglitz, Universität Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
19. Xiong W, Wang C, Liang M. Partner or subordinate? Sequential risky decision-making behaviors under human-machine collaboration contexts. Computers in Human Behavior 2022. DOI: 10.1016/j.chb.2022.107556.
20. Grinschgl S, Neubauer AC. Supporting Cognition With Modern Technology: Distributed Cognition Today and in an AI-Enhanced Future. Front Artif Intell 2022; 5:908261. PMID: 35910191; PMCID: PMC9329671; DOI: 10.3389/frai.2022.908261.
Abstract
In the present article, we explore prospects for using artificial intelligence (AI) to distribute cognition via cognitive offloading (i.e., to delegate thinking tasks to AI technologies). Modern technologies for cognitive support are rapidly developing and increasingly popular. Today, many individuals rely heavily on their smartphones or other technical gadgets to support their daily life, but also their learning and work. For instance, smartphones are used to track and analyze changes in the environment, and to store and continually update relevant information. Thus, individuals can offload (i.e., externalize) information to their smartphones and refresh their knowledge by accessing it. This implies that using modern technologies such as AI empowers users via offloading and enables them to function as always-updated knowledge professionals, so that they can deploy their insights strategically instead of relying on outdated and memorized facts. This AI-supported offloading of cognitive processes also saves individuals' internal cognitive resources by distributing task demands into their environment. In this article, we provide (1) an overview of empirical findings on cognitive offloading and (2) an outlook on how individuals' offloading behavior might change in an AI-enhanced future. More specifically, we first discuss determinants of offloading such as the design of technical tools and links to metacognition. Furthermore, we discuss the benefits and risks of cognitive offloading. While offloading improves immediate task performance, it might also be a threat to users' cognitive abilities. Following this, we offer a perspective on whether individuals will make heavier use of AI technologies for offloading in the future and how this might affect their cognition. On the one hand, individuals might rely heavily on easily accessible AI technologies, which in turn might diminish their internal cognition and learning. On the other hand, individuals might aim to enhance their cognition so that they can keep up with AI technologies and will not be replaced by them. Finally, we present our own data and findings from the literature on the assumption that individuals' personality is a predictor of trust in AI. Trust in modern AI technologies might be a strong determinant of the wider appropriation of, and dependence on, these technologies to distribute cognition and should thus be considered in an AI-enhanced future.
21. Zhang Z, Chen Z, Xu L. Artificial intelligence and moral dilemmas: Perception of ethical decision-making in AI. Journal of Experimental Social Psychology 2022. DOI: 10.1016/j.jesp.2022.104327.
22.
Abstract
Evaluating AI is a challenging task, as it requires an operative definition of intelligence and the metrics to quantify it, including, amongst other factors, economic drivers that depend on the specific domain. From the viewpoint of basic AI research, the ability to play a game against a human has historically been adopted as a criterion of evaluation, as competition can be characterized by an algorithmic approach. Starting from the end of the 1990s, the deployment of sophisticated hardware brought a significant improvement in the ability of a machine to play and win popular games. In spite of the spectacular victory of IBM’s Deep Blue over Garry Kasparov, many objections still remain. This is due to the fact that it is not clear how this result can be applied to solve real-world problems, simulate human abilities such as common sense, or exhibit a form of generalized AI. An evaluation based uniquely on the capacity to play games, even when enriched by the capability of learning complex rules without any human supervision, is bound to be unsatisfactory. As the internet has dramatically changed the cultural habits and social interactions of users, who continuously exchange information with intelligent agents, it is quite natural to consider cooperation as the next step in AI software evaluation. Although this concept has already been explored in the scientific literature in the fields of economics and mathematics, its consideration in AI is relatively recent and generally covers the study of cooperation between agents. This paper focuses on more complex problems involving heterogeneity (specifically, cooperation between humans and software agents, or even robots), which are investigated by taking into account ethical issues that occur during attempts to achieve a common goal shared by both parties, with a possible result of either conflict or stalemate. The contribution of this research consists in identifying those factors (trust, autonomy, and cooperative learning) on which to base ethical guidelines in agent software programming, making cooperation a more suitable benchmark for AI applications.
23. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Computers in Human Behavior 2022. DOI: 10.1016/j.chb.2022.107296.