Minireviews Open Access
Copyright ©The Author(s) 2024. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Crit Care Med. Jun 9, 2024; 13(2): 89644
Published online Jun 9, 2024. doi: 10.5492/wjccm.v13.i2.89644
Ten misconceptions regarding decision-making in critical care
Tara Ramaswamy, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA 94305, United States
Jamie L Sparling, Marvin G Chang, Edward A Bittner, Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, United States
ORCID number: Edward A Bittner (0000-0002-0159-2373).
Author contributions: Bittner EA conceptualized and drafted the initial version of the manuscript; Ramaswamy T, Sparling JL, and Chang MG reviewed and substantially revised the manuscript; All authors accepted the final version of the manuscript.
Conflict-of-interest statement: All authors have no conflicts of interest to disclose.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Edward A Bittner, MD, PhD, Associate Professor, Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, United States. ebittner@mgh.harvard.edu
Received: November 7, 2023
Revised: January 25, 2024
Accepted: March 1, 2024
Published online: June 9, 2024
Processing time: 208 Days and 11.1 Hours

Abstract

Diagnostic errors are prevalent in critical care practice and are associated with patient harm and costs for providers and the healthcare system. Patient complexity, illness severity, and the urgency of initiating proper treatment all contribute to decision-making errors. Clinician-related factors such as fatigue, cognitive overload, and inexperience further interfere with effective decision-making. Cognitive science has provided insight into the clinical decision-making process that can be used to reduce error. This evidence-based review discusses ten common misconceptions regarding critical care decision-making. By understanding how practitioners make clinical decisions and examining how errors occur, strategies may be developed and implemented to decrease errors in decision-making and improve patient outcomes.

Key Words: Clinical reasoning, Cognitive bias, Critical care, Debiasing strategies, Decision making, Diagnostic reasoning, Diagnostic error, Heuristics, Medical knowledge, Patient safety

Core Tip: Diagnostic errors are prevalent in critical care practice and associated with patient harm. Cognitive science has provided insight into the clinical decision-making process. By understanding how practitioners make clinical decisions and examining how errors occur, strategies may be developed and implemented to decrease errors in decision making and improve patient outcomes.



INTRODUCTION

Decision-making in critical care presents challenges that result from significant patient complexity, illness severity, and the urgency of initiating proper treatment. The cognitive demands associated with these complexities, combined with clinician-related factors such as fatigue and inexperience, can lead to severe and systematic errors of reasoning. Commonly encountered misconceptions and ingrained practices further interfere with effective intensive care unit (ICU) decision-making. In this review, we discuss ten common misconceptions regarding decision-making in critical care. By understanding how practitioners make clinical decisions and examining how errors occur, strategies may be developed and implemented to decrease errors in decision-making and improve patient outcomes.

Misconception 1: Diagnostic errors resulting in adverse events are infrequent and have little impact on critically ill patients

Diagnostic error is common in clinical practice, occurring in 5%–20% of clinical encounters[1]. In the critically ill, diagnostic error and its consequences have traditionally been estimated from autopsies, a reference standard for diagnosis. A systematic review of 31 autopsy studies from 1988 through 2011 reported that 28% of patients admitted to the ICU had at least one misdiagnosis identified at post-mortem. In 8% of cases, these misdiagnoses may have caused or contributed to the patient’s death[2]. Based on these studies, it has been estimated that more than 40000 critically ill patients die annually in the United States due to diagnostic errors[3]. Moreover, autopsy studies may underestimate the true prevalence because of the small percentage of postmortem examinations performed, as well as their failure to capture many nonfatal errors. Despite advances in diagnostic testing and an increased focus on patient safety and quality, recent analyses of autopsied patients demonstrate persistent error rates[4,5].

The consequences of diagnostic errors are substantial, including permanent disability, death, and prolonged hospital length of stay; these harms are likely amplified in the ICU setting. In addition to morbidity and mortality, diagnostic errors can place a tremendous financial burden on the healthcare system. Diagnostic errors are common allegations in medical malpractice cases, and previous literature has examined indemnity payments from malpractice claims as a proxy for diagnostic error liability. In a large study analyzing 25 years of malpractice data consisting of 350706 paid claims, diagnostic errors were the leading type of claim (28.6%) and accounted for the highest proportion of total payout (35.2%)[6]. Diagnostic errors more often resulted in death than did other allegation groups. Diagnostic errors can also lead to anger and mistrust of medical providers[7]. Moreover, critical care providers themselves may suffer from the "second victim" effect, experiencing loss of confidence, shame, or burnout after making diagnostic errors[8].

Misconception 2: Useful models for understanding clinical decision-making are lacking

Our current understanding of how physicians make decisions comes from cognitive psychology and is based on the "dual process theory"[9]. This theory describes two types of thinking (Table 1): "System 1" is intuitive, automatic, and fast, relying on pattern recognition and cognitive shortcuts (heuristics) to make quick judgments. "System 2" is slower and more deliberate, utilizing an analytical approach characterized by a search for additional information to minimize uncertainty[10]. Both types are essential in healthcare settings and are used depending on the situation. For example, System 1 may be used for obvious, straightforward cases that require quick action. In contrast, System 2 may be reserved for complex and atypical cases and for situations requiring further reflection and reasoning, such as when a patient fails to respond to treatment. System 1 or System 2 processing may be used preferentially, simultaneously, or in quick succession, depending on the clinical context. The use of System 1 and System 2 thinking is part of a cognitive continuum that depends on a variety of factors, including cognitive load and the expertise of the clinician[11]. Interestingly, functional MRI studies of expert clinicians processing straightforward and ambiguous clinical cases have demonstrated differences in diagnostic accuracy, processing time, and neural connectivity between brain regions, providing a biological basis for dual process theory[12,13].

Table 1 Comparison of System 1 and System 2 thinking

Characteristic | System 1 | System 2
Basis for decisions | Heuristics, pattern recognition | Logical, analytical
Activation | Default system | Activated when needed (e.g., atypical or complex presentation)
Speed | Efficient, time-sparing | Rigorous, time-consuming
Optimal use | Familiar situations | Unfamiliar or uncertain situations
Role of information | Limited information required | Information required to minimize uncertainty
Role of experience | Relies on prior training and experience; efficiency and accuracy improve with experience | Relies on the pursuit of new knowledge/information; provides usable tools for novices
Limitations | Susceptible to cognitive biases; not reliable for novices | Accuracy is dependent on effort and time; increases cognitive load; less useful in stressful events

A common misconception is that System 1 thinking, which relies on these mental shortcuts or "heuristics," is error-prone or less scientific, while System 2 is the "rational" or "voice of reason" approach[14]. In reality, heuristics are powerful tools in fast-paced medical settings like the ICU. System 1 decision-making requires only that the clinician focus on a few salient features of a case rather than accounting for the totality of available information. In this way, experienced clinicians can make decisions more rapidly and efficiently than by using more analytical methods of decision-making. It has been estimated that intensivists make over 100 significant medical decisions each day[15]. Trying to deeply analyze each of these decisions would be impractical and could lead to delays. Furthermore, although not infallible, heuristic thinking is often accurate. Research shows that when a correct diagnosis is considered within the first five minutes of an encounter, the eventual accuracy is as high as 98%; if not, accuracy decreases to 25%[16]. Since expert clinicians operate in pattern-recognition mode about 95% of the time, it is unsurprising that most error-reduction strategies focus on System 1 thinking[17]. Multiple studies have shown that, when used correctly, heuristic thinking can outperform more sophisticated decision-making tools[18].

Both types of thinking rely on a clinician's experience and knowledge. As clinicians gain experience, they become increasingly adept at using intuitive thinking effectively. However, both systems have limitations. System 2's methodical approach, while comprehensive, can be slow and is vulnerable to its own kinds of errors: a broader differential and the incorporation of additional, often conflicting, information can lead to second-guessing and overthinking that contradict an initial, accurate judgment. Therefore, balancing these two modes of thinking according to the situation and factors such as urgency and complexity is crucial in medical decision-making[17].

Misconception 3: Most diagnostic errors are due to infrequent conditions and clinician inexperience

While encountering infrequent conditions can increase the risk of diagnostic error and its associated harms, clinicians most often misdiagnose common diseases. Due to their prevalence, common disease processes such as pulmonary embolism, myocardial infarction, and infections account for the greatest number of major misdiagnoses among critically ill adults[2]. Major cardiovascular conditions such as cardiac tamponade and aortic dissection, which are rapidly fatal but whose early diagnosis may be easily missed, are also among the most frequent errors identified in adult ICU patients postmortem. Intraabdominal catastrophes, such as bowel infarction, perforated viscera, ruptured aortic aneurysm, pancreatitis, and intraabdominal bleeding, round out the remaining category of major missed diagnoses. Undiagnosed malignancies are nearly as common as myocardial infarction and pulmonary embolism, but their contributions to ICU patient outcomes are less certain. Invasive aspergillosis (IA) is the fourth most fatal missed diagnosis in adult ICU patients, surpassed only by pulmonary embolism, myocardial infarction, and pneumonia. In a recent autopsy study, clinicians correctly identified IA as the cause of death in only slightly more than a quarter of fatal IA cases, even though nearly all patients studied had multiple risk factors[19]. Furthermore, common syndromes such as sepsis are particularly susceptible to misdiagnosis, especially when symptoms are vague, leading to significant diagnostic and therapeutic delays[20,21]. Similarly, patients who present to the emergency department with nonspecific complaints experience longer hospitalizations, a finding likely attributable to diagnostic uncertainty[22]. It is critical that clinicians prioritize ruling out common but deadly syndromes rather than fixating on rare conditions.

Studies suggest that while medical education improves diagnostic accuracy, the primary cause of diagnostic errors is often not a lack of medical knowledge but rather lapses in clinical reasoning. In an analysis of closed malpractice claims, lapses in clinical reasoning were the leading cause of diagnostic errors, present in 73% of cases, while only 3% of these errors were due to knowledge deficits[23]. An in-depth analysis of diagnostic errors resulting in patient harm, identified through autopsy, quality assurance activities, and voluntary reports, found that inadequate knowledge was an infrequent cause of error, occurring in only 4% of cases[24]. Various cognitive biases, such as anchoring, premature closure, and confirmation bias, contribute significantly to these reasoning lapses. Table 2 describes common cognitive biases that can affect medical decision-making and further complicate the diagnostic process[25].

Table 2 Common cognitive biases encountered in critical care decision making.
Cognitive bias | Description
Anchoring bias | Relying too heavily on the first information received when making decisions
Availability bias | Judging a diagnosis as more likely if it quickly and readily comes to mind
Confirmation bias | Selectively seeking information to support a diagnosis rather than information to refute it
Diagnostic momentum | Attaching diagnostic labels and not reevaluating them
Dunning-Kruger effect | The tendency for a novice to overestimate their skills
Framing effect | Arriving at different conclusions depending on how the information is presented
Hindsight bias | Interpreting past events as more predictable than they actually are
Premature closure | Finalizing a diagnosis without full confirmation
Sunk cost bias | Difficulty considering alternatives when time, effort, and energy are invested in a particular diagnosis

The clinical environment of the ICU is also an important factor contributing to errors in the diagnostic process. High patient acuity and census, requiring many concurrent management decisions, increase the intrinsic cognitive load in the ICU. Other environmental conditions, such as resource strain, time pressures, multitasking, and shift changes, can lead to errors. Clinical complexity (e.g., acuity of illness, burden of comorbidities) and immunocompromised status are clinical factors that increase the risk of experiencing a diagnostic error[26]. A patient's contextual complexity (e.g., limited English proficiency or low health literacy) also increases the risk of medical errors[27]. There is a high burden of distraction in the ICU; one study found more than four distractions per physician per hour, with more than two-thirds leading to a complete pause or abandonment of the current activity. Such distraction places extraneous strain on cognitive load[28]. Less obvious environmental factors, including the use of electronic medical records (EMRs), communication of care plans, procedural teaching, and donning and doffing of personal protective equipment, can be demanding and place additional strain on cognitive load[9]. In addition to environmental conditions, affective factors, including physician fatigue, mood, confidence, and experience, can all detrimentally affect clinical reasoning.

Misconception 4: Advanced laboratory diagnostics have reduced the value of a thorough history and comprehensive physical exam

The time-honored approach to diagnosis consists of taking a patient history, performing a physical examination, developing a differential diagnosis, and obtaining appropriate laboratory tests and imaging studies. While this approach frequently leads to a correct diagnosis, in the ICU setting the diagnostic process is often not so sequential, even when dealing with common disease processes[29]. Critically ill patients may be unconscious, delirious, sedated, or mechanically ventilated and thus unable to provide a history of their illness, so clinicians must rely on second- or third-hand information. Furthermore, by the time a patient arrives in the ICU, they have usually received therapeutic interventions that alter their physiology, which can cloud the diagnosis. Critically ill patients often have multiple comorbidities or receive medications that can mask or alter signs of the underlying disease process. Diagnosis is further complicated by the demands of stabilizing patients on the verge of cardiopulmonary collapse and by the heightened emotional state of patients, families, staff, and colleagues. Managing such complicated and critically ill patients represents a major challenge for ICU clinicians. The diagnostic process requires multiple cognitive steps and is fraught with potential errors at each stage.

Data gathering is the crucial first step of the complex, circular decision-making process in which data are integrated and interpreted to generate diagnoses. Success in this process can readily be derailed by faulty data gathering, inadequate knowledge, or flawed information processing. Despite the proliferation of advanced monitoring technologies and laboratory diagnostics, obtaining a thorough history and performing a comprehensive physical examination remain cornerstones of data gathering. In one survey, half of clinicians caring for patients in the ICU considered physical examination to be of limited utility, and more than half of fellows and attendings reported "sometimes or never" examining their patients[30]. Low-quality bedside assessment and information gathering are primary determinants of poor diagnostic performance. A chart review of patients presenting with abdominal pain to the emergency department found diagnostic errors in 35% of high-risk cases, with 40% of the errors due to incomplete history taking[31]. A survey of physicians found that inadequate physical examination resulted in patient harm through missed, delayed, or incorrect diagnoses; unnecessary, omitted, or delayed treatment; unnecessary diagnostic costs; unnecessary exposure to radiation or contrast; and complications caused by treatments. In almost two-thirds of these cases, a physical examination had not been performed at all[32]. While obtaining an incomplete history or performing an inadequate physical exam is not itself a cognitive error, failing to recognize the need to update the history or pursue further diagnostic information predisposes to cognitive biases such as premature closure and confirmation bias.

Prioritizing appropriate time to communicate with patients and their families is essential for accurate information gathering, but time with patients is increasingly limited in clinical practice. A study of ICU physicians found they spent only 14% of their time in patient rooms during a daily shift, compared with 40% spent in the physician workroom[33]. The use of EMR technology, now a central component of patient care, may contribute to less time spent at the bedside[34]. Increasing the time spent with patients has additional benefits, including increasing job satisfaction and reducing clinician burnout. Reducing burnout is especially important because burnout can compromise cognitive performance and contribute to diagnostic errors. It is time to reprioritize the importance of time spent with patients and families at the bedside.

Misconception 5: Decision-making errors are most effectively avoided by slowing down and trying harder

To successfully avoid errors in decision-making, clinicians must not only be aware of the risk of biases but also detect them in their own decision-making and apply strategies to mitigate these risks. General-purpose directives to "be more careful" or "slow down and be thorough" are often suggested to allow time for analytical reasoning in high-risk or unexpected situations. However, multiple studies of this technique have shown little benefit in improving cognitive performance by merely increasing the time spent on the task[35]. A number of targeted cognitive debiasing strategies have been proposed to reduce decision-making errors. Individual debiasing approaches apply context-specific rules to reduce flawed heuristic reasoning, while technological debiasing approaches use external aids to deliver information and reduce cognitive burden[36]. Metacognition, a predominant debiasing strategy, is the ability to self-monitor one's reasoning and to identify and override heuristics when needed. To complicate matters, however, some research has shown that even after receiving training in biases, clinicians may not reliably identify them when present in their own diagnostic process[37,38].

Decision-making approaches that utilize technological support may be more effective at reducing error than those that rely only on individual cognitive effort or a clinician's memory. Evidence suggests that carefully designed decision aids such as checklists may facilitate access to previously acquired knowledge and improve decision-making. Three types of checklists have been proposed to reduce diagnostic error: (1) A general checklist that prompts clinicians to assess and optimize their cognitive approach; (2) a differential diagnosis checklist to help clinicians avoid premature closure; and (3) a checklist of common decision-making pitfalls and cognitive forcing functions[39]. Technological approaches that reduce cognitive load may also be beneficial, such as displaying medical information in easy-to-interpret formats or using standardized order sets for routine tasks.

A paternalistic approach to patient care has been associated with a 10% greater incidence of diagnostic error; multidisciplinary teamwork has therefore been recommended to improve decision-making[40]. A multidisciplinary approach may benefit decision-making by drawing on the collective wisdom of the team[41]. An optimal multidisciplinary approach creates a culture of collaboration and encourages participation and feedback; the greater diversity of perspectives is likely to decrease diagnostic errors[42]. In addition, distributing appropriate tasks to members of the care team can optimize subspecialty expertise and reduce cognitive load. For example, pharmacists are ideally suited to assist with dosing for deep vein thrombosis prophylaxis and monitoring antimicrobial levels. Incorporating patients' family members during rounds improves care through shared decision-making[43]. Inspired by preprocedural "time-outs," a "diagnostic pause" has been suggested to force clinicians to fully evaluate all available information, consider diagnostic alternatives, and ultimately change course if warranted[39].

Misconception 6: Debiasing strategies are most effective for novice practitioners

Both novices and experienced clinicians are susceptible to decision-making errors due to cognitive bias, but the impact of debiasing strategies on these groups may differ. Early studies demonstrated limited benefits of debiasing training for medical students, likely due to knowledge deficits inherent in early-stage learners[37,38]. Furthermore, students may not have enough experience to utilize heuristics, leading them to fall victim to cognitive biases. Therefore, although novices are not immune to cognitive bias, they may not regularly employ heuristics and consequently may not immediately benefit from instruction in debiasing strategies. As students transition to postgraduate training, they acquire additional knowledge and clinical experience and are more likely to use heuristics in problem-solving. At this stage, they also become more prone to cognitive biases and will more likely benefit from debiasing strategies.

Junior residents may try to employ heuristics in an effort to be efficient but are likely to be more prone to diagnostic errors than experienced clinicians because they lack the knowledge to recognize when heuristics fail. Some evidence suggests that trainees may often be overly confident in their diagnoses compared with attending physicians. This state of "diagnostic hubris," in which a clinician is confident in a diagnosis that is inaccurate, may also result in failure to switch from System 1 to System 2 thinking when appropriate. Evidence suggests that experience allows clinicians to evaluate their diagnostic certainty more reliably. With specialized content-relevant training and feedback on diagnostic decisions, novice clinicians learn to better align their diagnostic confidence and accuracy[44].

Misconception 7: A robust body of evidence exists for the effectiveness of debiasing strategies in clinical decision-making

Despite strategies for teaching and employing cognitive debiasing techniques, debate has existed about the extent to which such approaches are effective[45]. Several recent systematic and scoping reviews of debiasing interventions show promise for improvement of diagnostic accuracy, but the results of studies are heterogeneous[46-48]. Importantly, none of the studies examined decision-making in a clinical setting, and no studies reported long-term follow-up. Thus, the applicability and effectiveness of studied interventions in clinical practice is unclear.

Understanding limitations in the design of existing studies evaluating debiasing techniques is essential for interpreting the effectiveness of interventions. Most studies to date have examined the impact of debiasing interventions on novice clinicians (students or residents), leaving unanswered the question of the effectiveness of these strategies for practicing clinicians. Studies evaluating the impact of debiasing strategies must also account for different types of decisions. In daily clinical practice, most cases are "typical," with an easily recognizable diagnosis[49]. For these cases, System 1 thinking is quick, accurate, and appropriate. Studying how clinicians make diagnostic decisions in classic patient presentations therefore cannot discriminate between the use of System 1 and System 2 reasoning and will not help evaluate the impact of a debiasing approach. Rather, to detect cognitive bias and the impact of debiasing techniques, studies must examine how clinicians make diagnoses in uncommon conditions, in unusual presentations of common illnesses, or in the presence of unique patient comorbidities and characteristics. Finally, existing decision-making studies often focus on a single decision event and a fixed set of alternatives in a stable environment. Real-world clinical decision-making often requires multiple sequential decisions under dynamic conditions. Decision-making in clinical settings also differs substantially from experimental settings in that it is mediated by teamwork and technology. Thus, traditional experimental and actual clinical decision-making are markedly different phenomena. Overall, more research is needed to evaluate the effectiveness of debiasing interventions in the real clinical environment, with participants of differing levels of knowledge and experience. Such studies should evaluate the long-term retention of debiasing interventions and investigate the reasons for ineffectiveness and low intervention uptake.

Misconception 8: Valid methods for assessment of clinical decision-making are lacking

Because critical thinking underlies mastery across clinical competency domains, it has been referred to as a "meta-competency"[50]. This meta-competency underlies "entrustable professional activities" for trainees, which define the abilities required to care for patients effectively without supervision[51]. Accrediting organizations have recognized the vital role of clinical reasoning in medical education, so valid methods for assessing this critical skill are required. Many clinical reasoning assessments have been described, which may be subdivided into workplace-based assessments (WBA) and non-workplace-based assessments (non-WBA), each with its own strengths and limitations. The feasibility and validity of the different assessments must be considered when selecting among methods[52].

WBAs are critically important as they evaluate clinical reasoning in actual practice and warrant an increasing role in current competency-based educational programs. Examples of WBAs include direct observation, global assessments, oral case presentations, written note assessments, chart-stimulated recall, think-aloud, and self-regulated learning microanalysis. WBA approaches to assessing clinical reasoning are often limited by the breadth of content and context specificity, as well as by issues of feasibility and cost, which often restrict the number and variety of cases that can be evaluated. Non-WBAs such as multiple-choice questions, simulation exercises, and oral examinations have the advantage of providing broad coverage of clinical topics and allowing standardization. Thus, for validity and feasibility reasons, it is critical to combine WBAs and non-WBAs in any comprehensive assessment program. Performing frequent and varied assessments of clinical reasoning and collecting information from multiple sources longitudinally are critical to ensure a successful assessment program as well as patient safety.

A coordinated and integrated approach to assessment allows feedback and practice so that trainees have the information they need to advance their learning and ensure that they are ready to advance to the next stage in their training. Consensus-based milestones in critical thinking have been published to assess the development of these skills for clinicians in training and to define the expected progress over time. This milestone model recognizes that critical reasoning development is not strictly tied to the level of training, although advanced knowledge and experience are often associated with more mature critical thinking skills. Elements of the milestones at each stage are divided into metacognitive abilities, attitudes toward critical thinking, and cognitive skills. By integrating methods for assessment and feedback, educators can both support learning and help trainees develop the strategies needed to succeed in residency and clinical practice.

Misconception 9: Clinical decision support systems based on artificial intelligence obviate the need for diagnostic reasoning

Clinical decision support systems (CDSS) provide information at the point of care to guide clinical decision-making. The ICU setting is especially well-suited for CDSS because of high clinical acuity, complexity, time pressure, and the large amount of available data[53]. Increasingly, CDSS utilize prediction models to calculate the likelihood of a clinical outcome or risk on the basis of a set of clinical input variables, but their interpretation requires conceptualizing a given diagnosis as more or less likely while acknowledging an explicit degree of uncertainty. CDSS often utilize artificial intelligence (AI) systems, which allow computers to perform tasks that would otherwise require human intelligence by processing large amounts of "training" data and analyzing those data to identify associations and patterns. In the ICU environment, such systems have shown promise in improving decision-making through the early detection of clinical conditions such as sepsis, acute kidney injury, and acute respiratory distress syndrome[54-56].

Both CDSS and AI systems have limitations that must be accounted for in order to be used effectively. CDSS algorithms can take many, but not all, relevant patient factors into account, producing nuanced probabilities to estimate risk. However, it is important to recognize that algorithmic risk predictions change with the context. Only clinicians can decide whether a given risk is acceptable; algorithms cannot entirely replace clinical judgment[57]. Proper use of CDSS will therefore require that practitioners learn to apply probabilistic reasoning appropriately and to manage clinical uncertainty. This probabilistic education should begin in medical school and continue during postgraduate training, integrating probabilistic training into existing case studies and practice-based learning[57]. Because probabilistic thinking is foundational to the practice of evidence-based medicine, improving clinicians' instruction in these concepts offers benefits beyond CDSS[58].
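As a simple worked illustration of this probabilistic reasoning (a hypothetical example, not drawn from the studies cited above), consider how a predicted probability is revised by new information using likelihood ratios: pre-test odds = pre-test probability/(1 - pre-test probability); post-test odds = pre-test odds × likelihood ratio; and post-test probability = post-test odds/(1 + post-test odds). If a CDSS estimates a 20% pre-test probability of pulmonary embolism (odds of 0.25) and a test with a positive likelihood ratio of 6 returns positive, the post-test odds become 1.5, corresponding to a post-test probability of 60%. The diagnosis has become more likely but remains far from certain, which is precisely the explicit degree of uncertainty that clinicians must interpret when acting on a CDSS output.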

While AI has the potential to improve clinical decision-making, it is essential that users are aware of its potential pitfalls and biases. These biases can result from deficiencies in the training data or from the inappropriate application of a trained AI system to an unanticipated patient context. If AI systems are trained on data lacking diversity in the underlying patient populations or curated from restricted clinical settings, generalizability can be severely limited and the resulting predictions biased[59]. As a consequence, patient populations in data-rich regions stand to benefit substantially more from better prediction than patients in data-poor regions, perpetuating existing healthcare disparities[60]. Selection bias may also result when representative high-quality data are replaced with "interesting cases," compromising AI prediction performance. AI performance can also be negatively affected when the outcome variable of interest is poorly defined or inconsistently assigned by experts (e.g., pneumonia), resulting in a training data set with no "ground truth" from which to learn associations[61]. Over time, disease patterns can change, leading to a "distributional shift," a mismatch between training and operational data[61]. AI systems are often poor at recognizing a relevant change in context or data, and this can lead the system to make erroneous predictions based on "out-of-sample" inputs. A mismatch between training and operational data can also occur through inappropriate application of an AI system to an unanticipated patient context, referred to as the "frame problem"[62]. Such inappropriate application of AI may lead to hazardous effects that can be both difficult and costly to diagnose and fix.
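The consequences of a mismatch between training and operational data can be made concrete with a small simulation. The sketch below is purely illustrative: it uses simulated data and hypothetical variable names (heart_rate, lactate) and is not drawn from any cited study or deployed system. A simple logistic regression risk model is trained on one simulated population and then applied to a population in which the relationship between lactate and deterioration has weakened; the model continues to produce confident risk estimates, but they are miscalibrated for the new setting.

# Minimal sketch (simulated data, hypothetical variables) of distributional shift
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate_cohort(n, lactate_effect):
    # Two illustrative inputs: heart rate (beats/min) and lactate (mmol/L);
    # the outcome (clinical deterioration) is generated from a logistic model
    heart_rate = rng.normal(95, 15, n)
    lactate = rng.gamma(2.0, 1.0, n)
    logit = -12 + 0.08 * heart_rate + lactate_effect * lactate
    deteriorated = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    return np.column_stack([heart_rate, lactate]), deteriorated

# Training population: lactate is strongly associated with deterioration
X_train, y_train = simulate_cohort(5000, lactate_effect=1.2)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Operational population: the lactate-outcome relationship has weakened
# (different case mix or practice patterns), i.e., a distributional shift
X_new, y_new = simulate_cohort(5000, lactate_effect=0.3)

for label, X, y in [("training-like data", X_train, y_train), ("shifted data", X_new, y_new)]:
    risk = model.predict_proba(X)[:, 1]
    print(f"{label}: AUC = {roc_auc_score(y, risk):.2f}, "
          f"mean predicted risk = {risk.mean():.3f}, observed rate = {y.mean():.3f}")

In the shifted population, the average predicted risk substantially exceeds the observed event rate, even though the model is never alerted that its original assumptions no longer hold; this is the kind of silent failure that clinicians interpreting AI-based CDSS output must anticipate.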

A significant concern is that clinicians assisted by AI systems will over-rely on the technology rather than remain vigilant, a phenomenon referred to as automation bias. Automation bias can have disastrous consequences when AI systems are wrong or fail, and it is likely to be exacerbated by the "black box" nature of many AI algorithms[63]. Methods are needed to detect and mitigate automation bias, for example by improving the interpretability of AI algorithms, explaining how an AI system reached a conclusion, and training clinicians to remain vigilant. Given the potential harms associated with AI use in medical decision-making, the United States government has begun taking steps to ensure that AI-based algorithms are safe and effective for clinical use. The Food and Drug Administration now regulates many AI-based clinical decision support algorithms as medical devices, and the Department of Health and Human Services has proposed to regulate bias in clinical algorithms under healthcare anti-discrimination laws[64]. Additionally, a number of multi-stakeholder, consensus-based reporting guidelines have been developed to improve the reporting of early clinical evaluation of AI research (CONSORT-AI, SPIRIT-AI, DECIDE-AI)[65-67]. An integrated approach to decision-making between clinicians and AI systems, when used appropriately, may provide the best possible outcome for patients by combining the unique advantages of AI pattern recognition and human contextual interpretation.

Misconception 10: Peer review is best used to identify medical errors and assign responsibility

Historically, peer review conferences (PRC), also referred to as morbidity and mortality (M&M) conferences, have sought to improve patient care by identifying the sources of clinical error but all too frequently have taken a "shame and blame" approach[68]. PRC structure may pose additional barriers to the goal of improving patient safety, depending on the types of cases reviewed and the priorities and skills of the facilitators. PRCs often focus on pathophysiology and do not lead participants through actionable approaches to prevent future errors. Furthermore, PRCs that prioritize unique presentations or rare conditions detract from the goal of reducing more common sources of error. Cases referred for PRC often involve serious harm, which biases the perspective from which the case is reviewed. Finally, PRC participants may readily identify clinical errors but do not always have the skills to acknowledge or examine the deeper systemic origins of the problems[69]. To complicate matters further, clinicians are often reluctant to acknowledge their decision-making errors, which may result in defensive attitudes or avoidance of conferences. There is a need to improve on the existing PRC structure, employing systematic approaches to support, enable, and ensure constructive reflection about errors[70].

Suggested strategies for improving PRCs include changes in culture, communication, and education to create a safer environment for admitting diagnostic uncertainty and acknowledging errors, as well as implementation of a systems-based approach to investigate, intervene, and provide feedback on improvement efforts. A cultural change is needed in which diagnostic uncertainty is embraced as the hallmark of a sophisticated approach to clinical decision-making rather than a reflection of incompetence or ignorance[49]. Small changes in communication during PRC may help foster a safer environment for admitting diagnostic uncertainty and acknowledging errors. Changing the term "diagnostic errors" to "missed diagnostic opportunities" may help depersonalize and destigmatize these errors[71]. Relabeling "differential diagnosis" as "diagnostic hypotheses," to more fully reflect clinical uncertainty, may encourage further evaluation and revision of conclusions as a clinical scenario evolves.

Careful selection of cases to discuss and reflect on is a means to maximize the educational benefit of the PRC. Although it might be ideal to review all cases, doing so is typically impractical given the high clinical volume. Instead, cases should be prioritized as those in which substantive learning is most likely to occur and, thus, clinical practice most likely to be improved. These are often cases in which a missed diagnostic opportunity has occurred. Missed opportunities can occur in the decision-making process with or without an associated negative outcome. Furthermore, it is important not to allow a poor outcome to bias the assessment of its preventability[72]. Good diagnostic thinking does not always lead to a good clinical outcome, nor does flawed diagnostic thinking always result in a poor clinical outcome. PRC should also include examples of diagnostic excellence so that clinicians can learn from them as well.

Maximizing the value of the PRC requires both recognizing the decisions and errors involved and reflecting on them. Evaluation of clinical cases should move away from the single-dimensional approach of assigning individual fault and toward recognition of the multiplicity of factors that contributed to diagnostic error and the ultimate outcome. Recognizing system-related factors that contribute to errors is essential for practice improvement. This "systems approach" is based on the theory that medical errors do not result from bad clinicians but from systems of work that are poorly configured to support human activity[73]. Dedicated, formalized, and collaborative approaches to investigating the events leading to medical errors are essential to gain insight into how and why the errors occurred. Similarly, examining the impact of errors on patients and the clinicians involved humanizes the issue for PRC participants. Recognizing errors without reflecting on their impact can make the review process seem like an administrative exercise rather than an important opportunity for learning and improvement[74]. Reflection should include providing feedback to clinicians on their decision-making. If clinicians become accustomed to receiving regular constructive feedback, the disciplinary perception of peer review will diminish and acceptance of feedback that drives individual and system change will grow. Transitioning from a "no news is good news" system to one in which decision-making feedback is expected will foster a culture of ongoing improvement. In this way, improving diagnosis involves not only avoiding errors but also striving for excellence.

CONCLUSION

Diagnostic errors are prevalent in the ICU and are associated with patient harm and costs for providers and the healthcare system. Cognitive science has provided insight into the clinical decision-making process that can be used to reduce error. Systems of decision-making are often effective and efficient, but errors occur frequently with atypical presentations of commonly occurring conditions. History taking and physical examination remain central to the process of diagnostic reasoning and have not been supplanted by advanced monitoring and laboratory diagnostics. While gaps in medical knowledge can result in decision-making errors, lapses in clinical reasoning appear to play a more prominent role. Understanding the role of cognitive biases is important, but awareness alone, without employing error-reduction strategies, is unlikely to improve decision-making. Individual error-reduction strategies based on metacognition have gained prominence, but strong evidence demonstrating their benefit is currently lacking. Utilizing a variety of assessment tools for decision-making together with developmental milestones is important to support learning and help trainees develop the strategies needed to succeed in clinical practice. CDSS using AI algorithms hold promise for improving decision-making, but understanding the limitations and potential biases of such systems is essential to prevent flawed predictions. Finally, cultural changes in which diagnostic uncertainty is embraced as the hallmark of a sophisticated approach to clinical decision-making, rather than a reflection of incompetence or ignorance, are likely to improve patient outcomes (Table 3).

Table 3 Ten misconceptions and realities for understanding and improving critical care decision-making.
Misconception | Reality
Diagnostic errors resulting in adverse events are infrequent and of little impact on critically ill patients | Diagnostic errors are prevalent and associated with significant patient harm and cost
Useful models for understanding clinical decision-making are lacking | Cognitive science has provided insight into clinical decision-making that can be used to reduce error
Most diagnostic errors are due to infrequent conditions and clinician inexperience | Diagnostic errors occur most frequently with atypical presentations of commonly occurring conditions
Advanced laboratory diagnostics have reduced the value of a thorough history and comprehensive physical exam | History taking and physical examination remain central to the process of diagnostic reasoning
Decision-making errors are most effectively avoided by slowing down and trying harder | General-purpose directives to "try harder" or "slow down and be thorough" are often suggested to allow time for analytical reasoning, but multiple studies of this technique have shown little benefit in improving cognitive performance
Debiasing strategies are most effective for novice practitioners | Studies suggest limited benefits of debiasing training for novice practitioners, who often do not have enough experience to utilize heuristics, leading them to fall victim to cognitive biases. As novice practitioners acquire additional knowledge and clinical experience, they are more likely to use heuristics and more likely to benefit from debiasing strategies
A robust body of evidence exists for the effectiveness of debiasing strategies in clinical decision-making | Recent reviews of debiasing interventions show promise for improving diagnostic accuracy, but demonstrated benefit in clinical practice is currently lacking
Valid methods for assessment of decision-making are lacking | Many clinical reasoning assessments have been described, each with its strengths and limitations. Utilizing a variety of assessment tools for decision-making together with developmental milestones is essential to support learning
Clinical decision support systems based on artificial intelligence obviate the need for diagnostic reasoning | AI algorithms hold promise for improving decision-making, but understanding the potential biases in such systems is essential. An integrated approach combining the unique advantages of AI pattern recognition and human contextual interpretation will likely result in the best patient outcomes
Peer review is best used to identify medical errors and assign responsibility | Maximizing the value of the PRC requires both recognizing the decisions and errors involved and reflecting on them. Evaluation of clinical cases should move away from the single-dimensional approach of assigning individual fault and toward recognizing the multiplicity of factors that contribute to diagnostic error and the ultimate outcome
Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Critical care medicine

Country/Territory of origin: United States

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): B

Grade C (Good): 0

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Gazdag G, Hungary S-Editor: Liu JH L-Editor: A P-Editor: Cai YX

References
1.  National Academies of Sciences, Engineering, and Medicine.   Improving diagnosis in health care. Washington, DC: The National Academies Press; 2015.  [PubMed]  [DOI]  [Cited in This Article: ]
2.  Winters B, Custer J, Galvagno SM Jr, Colantuoni E, Kapoor SG, Lee H, Goode V, Robinson K, Nakhasi A, Pronovost P, Newman-Toker D. Diagnostic errors in the intensive care unit: a systematic review of autopsy studies. BMJ Qual Saf. 2012;21:894-902.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 175]  [Cited by in F6Publishing: 162]  [Article Influence: 13.5]  [Reference Citation Analysis (34)]
3.  Newman-Toker DE, Pronovost PJ. Diagnostic errors--the next frontier for patient safety. JAMA. 2009;301:1060-1062.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 260]  [Cited by in F6Publishing: 255]  [Article Influence: 17.0]  [Reference Citation Analysis (0)]
4.  Tejerina EE, Padilla R, Abril E, Frutos-Vivar F, Ballen A, Rodríguez-Barbero JM, Lorente JÁ, Esteban A. Autopsy-detected diagnostic errors over time in the intensive care unit. Hum Pathol. 2018;76:85-90.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 19]  [Cited by in F6Publishing: 23]  [Article Influence: 3.8]  [Reference Citation Analysis (0)]
5.  Rusu S, Lavis P, Domingues Salgado V, Van Craynest MP, Creteur J, Salmon I, Brasseur A, Remmelink M. Comparison of antemortem clinical diagnosis and post-mortem findings in intensive care unit patients. Virchows Arch. 2021;479:385-392.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 5]  [Cited by in F6Publishing: 5]  [Article Influence: 1.7]  [Reference Citation Analysis (0)]
6.  Saber Tehrani AS, Lee H, Mathews SC, Shore A, Makary MA, Pronovost PJ, Newman-Toker DE. 25-Year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22:672-680.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 180]  [Cited by in F6Publishing: 205]  [Article Influence: 18.6]  [Reference Citation Analysis (0)]
7.  Oguro N, Suzuki R, Yajima N, Sakurai K, Wakita T, Hall MA, Kurita N. The impact that family members' health care experiences have on patients' trust in physicians. BMC Health Serv Res. 2021;21:1122.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 4]  [Cited by in F6Publishing: 11]  [Article Influence: 3.7]  [Reference Citation Analysis (0)]
8.  Wu AW. Medical error: the second victim. The doctor who makes the mistake needs help too. BMJ. 2000;320:726-727.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 747]  [Cited by in F6Publishing: 726]  [Article Influence: 30.3]  [Reference Citation Analysis (0)]
9.  Harris E, Santhosh L. Dual Process Theory and Cognitive Load: How Intensivists Make Diagnoses. Crit Care Clin. 2022;38:27-36.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in F6Publishing: 2]  [Reference Citation Analysis (0)]
10.  Tay SW, Ryan P, Ryan CA. Systems 1 and 2 thinking processes and cognitive reflection testing in medical students. Can Med Educ J. 2016;7:e97-e103.  [PubMed]  [DOI]  [Cited in This Article: ]
11.  Djulbegovic B, Hozo I, Beckstead J, Tsalatsanis A, Pauker SG. Dual processing model of medical decision-making. BMC Med Inform Decis Mak. 2012;12:94.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 80]  [Cited by in F6Publishing: 67]  [Article Influence: 5.6]  [Reference Citation Analysis (0)]
12.  van den Berg B, de Bruin ABH, Marsman JC, Lorist MM, Schmidt HG, Aleman A, Snoek JW. Thinking fast or slow? Functional magnetic resonance imaging reveals stronger connectivity when experienced neurologists diagnose ambiguous cases. Brain Commun. 2020;2:fcaa023.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 2]  [Cited by in F6Publishing: 2]  [Article Influence: 0.5]  [Reference Citation Analysis (0)]
13.  Durning SJ, Costanzo ME, Beckman TJ, Artino AR Jr, Roy MJ, van der Vleuten C, Holmboe ES, Lipner RS, Schuwirth L. Functional neuroimaging correlates of thinking flexibility and knowledge structure in memory: Exploring the relationships between clinical reasoning and diagnostic thinking. Med Teach. 2016;38:570-577.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 14]  [Cited by in F6Publishing: 7]  [Article Influence: 0.9]  [Reference Citation Analysis (0)]
14.  Mangus CW, Mahajan P. Decision Making: Healthy Heuristics and Betraying Biases. Crit Care Clin. 2022;38:37-49.  [PubMed]  [DOI]  [Cited in This Article: ]  [Reference Citation Analysis (0)]
15.  McKenzie MS, Auriemma CL, Olenik J, Cooney E, Gabler NB, Halpern SD. An Observational Study of Decision Making by Medical Intensivists. Crit Care Med. 2015;43:1660-1668.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 50]  [Cited by in F6Publishing: 47]  [Article Influence: 5.2]  [Reference Citation Analysis (0)]
16.  Barrows HS, Norman GR, Neufeld VR, Feightner JW. The clinical reasoning of randomly selected physicians in general medical practice. Clin Invest Med. 1982;5:49-55.  [PubMed]  [DOI]  [Cited in This Article: ]
17.  Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual Saf. 2013;22 Suppl 2:ii58-ii64.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 263]  [Cited by in F6Publishing: 250]  [Article Influence: 22.7]  [Reference Citation Analysis (0)]
18.  Marewski JN, Gigerenzer G. Heuristic decision making in medicine. Dialogues Clin Neurosci. 2012;14:77-89.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 172]  [Cited by in F6Publishing: 173]  [Article Influence: 15.7]  [Reference Citation Analysis (0)]
19.  Mudrakola HV, Tandon YK, DeMartino E, Tosh PK, Yi ES, Ryu JH. Autopsy study of fatal invasive pulmonary aspergillosis: Often undiagnosed premortem. Respir Med. 2022;199:106882.  [PubMed]  [DOI]  [Cited in This Article: ]  [Reference Citation Analysis (0)]
20.  Bergl PA, Taneja A, El-Kareh R, Singh H, Nanchal RS. Frequency, Risk Factors, Causes, and Consequences of Diagnostic Errors in Critically Ill Medical Patients: A Retrospective Cohort Study. Crit Care Med. 2019;47:e902-e910.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 20]  [Cited by in F6Publishing: 25]  [Article Influence: 6.3]  [Reference Citation Analysis (0)]
21. Filbin MR, Lynch J, Gillingham TD, Thorsen JE, Pasakarnis CL, Nepal S, Matsushima M, Rhee C, Heldt T, Reisner AT. Presenting Symptoms Independently Predict Mortality in Septic Shock: Importance of a Previously Unmeasured Confounder. Crit Care Med. 2018;46:1592-1599.
22. Sauter TC, Capaldo G, Hoffmann M, Birrenbach T, Hautz SC, Kämmer JE, Exadaktylos AK, Hautz WE. Non-specific complaints at emergency department presentation result in unclear diagnoses and lengthened hospitalization: a prospective observational study. Scand J Trauma Resusc Emerg Med. 2018;26:60.
23. CRICO. Malpractice risks in the diagnostic process: 2014 CRICO strategies national CBS report. Published 2014. Available from: https://www.rmf.harvard.edu/Malpractice-Data/Annual-Benchmark-Reports/Risks-in-the-Diagnostic-Process.
24. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493-1499.
25. Hughes TM, Dossett LA, Hawley ST, Telem DA. Recognizing Heuristics and Bias in Clinical Decision-making. Ann Surg. 2020;271:813-814.
26. Bergl PA, Zhou Y. Diagnostic Error in the Critically Ill: A Hidden Epidemic? Crit Care Clin. 2022;38:11-25.
27. Konopasky A, Artino AR, Battista A, Ohmer M, Hemmer PA, Torre D, Ramani D, van Merrienboer J, Teunissen PW, McBee E, Ratcliffe T, Durning SJ. Understanding context specificity: the effect of contextual factors on clinical reasoning. Diagnosis (Berl). 2020;7:257-264.
28. See KC, Phua J, Mukhopadhyay A, Lim TK. Characteristics of distractions in the intensive care unit: how serious are they and who are at risk? Singapore Med J. 2014;55:358-362.
29. Pisciotta W, Arina P, Hofmaenner D, Singer M. Difficult diagnosis in the ICU: making the right call but beware uncertainty and bias. Anaesthesia. 2023;78:501-509.
30. Vazquez R, Vazquez Guillamet C, Adeel Rishi M, Florindez J, Dhawan PS, Allen SE, Manthous CA, Lighthall G. Physical examination in the intensive care unit: opinions of physicians at three teaching hospitals. Southwest J Pulm Crit Care. 2015;10:34-43.
31. Medford-Davis L, Park E, Shlamovitz G, Suliburk J, Meyer AN, Singh H. Diagnostic errors related to acute abdominal pain in the emergency department. Emerg Med J. 2016;33:253-259.
32. Verghese A, Charlton B, Kassirer JP, Ramsey M, Ioannidis JP. Inadequacies of Physical Examination as a Cause of Medical Errors and Adverse Events: A Collection of Vignettes. Am J Med. 2015;128:1322-4.e3.
33. Butler R, Monsalve M, Thomas GW, Herman T, Segre AM, Polgreen PM, Suneja M. Estimating Time Physicians and Other Health Care Workers Spend with Patients in an Intensive Care Unit Using a Sensor Network. Am J Med. 2018;131:972.e9-972.e15.
34. Joukes E, Abu-Hanna A, Cornet R, de Keizer NF. Time Spent on Dedicated Patient Care and Documentation Tasks Before and After the Introduction of a Structured and Standardized Electronic Health Record. Appl Clin Inform. 2018;9:46-53.
35. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking. Acad Med. 2017;92:23-30.
36. Hartigan S, Brooks M, Hartley S, Miller RE, Santen SA, Hemphill RR. Review of the Basics of Cognitive Error in Emergency Medicine: Still No Easy Answers. West J Emerg Med. 2020;21:125-131.
37. Sherbino J, Yip S, Dore KL, Siu E, Norman GR. The effectiveness of cognitive forcing strategies to decrease diagnostic error: an exploratory study. Teach Learn Med. 2011;23:78-84.
38. Sherbino J, Kulasegaram K, Howey E, Norman G. Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: a controlled trial. CJEM. 2014;16:34-40.
39. Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Acad Med. 2011;86:307-313.
40. Improving Diagnosis in Health Care. Washington (DC): National Academies Press (US); 2015.
41. Radcliffe K, Lyson HC, Barr-Walker J, Sarkar U. Collective intelligence in medical decision-making: a systematic scoping review. BMC Med Inform Decis Mak. 2019;19:158.
42. Bergl PA, Nanchal RS, Singh H. Diagnostic Error in the Critically Ill: Defining the Problem and Exploring Next Steps to Advance Intensive Care Unit Safety. Ann Am Thorac Soc. 2018;15:903-907.
43. Michalsen A, Long AC, DeKeyser Ganz F, White DB, Jensen HI, Metaxa V, Hartog CS, Latour JM, Truog RD, Kesecioglu J, Mahn AR, Curtis JR. Interprofessional Shared Decision-Making in the ICU: A Systematic Review and Recommendations From an Expert Panel. Crit Care Med. 2019;47:1258-1266.
44. Meyer AND, Singh H. The Path to Diagnostic Excellence Includes Feedback to Calibrate How Clinicians Think. JAMA. 2019;321:737-738.
45. O'Sullivan ED, Schofield SJ. Cognitive bias in clinical medicine. J R Coll Physicians Edinb. 2018;48:225-232.
46. Tung A, Melchiorre M. Debiasing and Educational Interventions in Medical Diagnosis: A Systematic Review. UTMJ. 2023;100.
47. Griffith PB, Doherty C, Smeltzer SC, Mariani B. Education initiatives in cognitive debiasing to improve diagnostic accuracy in student providers: A scoping review. J Am Assoc Nurse Pract. 2020;33:862-871.
48. Ludolph R, Schulz PJ. Debiasing Health-Related Judgments and Decision Making: A Systematic Review. Med Decis Making. 2018;38:3-13.
49. Royce CS, Hayes MM, Schwartzstein RM. Teaching Critical Thinking: A Case for Instruction in Cognitive Biases to Reduce Diagnostic Errors and Improve Patient Safety. Acad Med. 2019;94:187-194.
50. Papp KK, Huang GC, Lauzon Clabo LM, Delva D, Fischer M, Konopasek L, Schwartzstein RM, Gusic M. Milestones of critical thinking: a developmental model for medicine and nursing. Acad Med. 2014;89:715-720.
51. Shorey S, Lau TC, Lau ST, Ang E. Entrustable professional activities in health care education: a scoping review. Med Educ. 2019;53:766-777.
52. Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, Ratcliffe T, Gordon D, Heist B, Lubarsky S, Estrada CA, Ballard T, Artino AR Jr, Sergio Da Silva A, Cleary T, Stojan J, Gruppen LD. Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance. Acad Med. 2019;94:902-912.
53. El-Kareh R, Sittig DF. Enhancing Diagnosis Through Technology: Decision Support, Artificial Intelligence, and Beyond. Crit Care Clin. 2022;38:129-139.
54. Lovejoy CA, Buch V, Maruthappu M. Artificial intelligence in the intensive care unit. Crit Care. 2019;23:7.
55. Gameiro J, Branco T, Lopes JA. Artificial Intelligence in Acute Kidney Injury Risk Prediction. J Clin Med. 2020;9.
56. Pai KC, Chao WC, Huang YL, Sheu RK, Chen LC, Wang MS, Lin SH, Yu YY, Wu CL, Chan MC. Artificial intelligence-aided diagnosis model for acute respiratory distress syndrome combining clinical data and chest radiographs. Digit Health. 2022;8:20552076221120317.
57. Goodman KE, Rodman AM, Morgan DJ. Preparing Physicians for the Clinical Algorithm Era. N Engl J Med. 2023;389:483-487.
58. Ebell MH, Shaughnessy AF, Slawson DC. Why Are We So Slow to Adopt Some Evidence-Based Practices? Am Fam Physician. 2018;98:709-710.
59. Norori N, Hu Q, Aellen FM, Faraci FD, Tzovara A. Addressing bias in big data and AI for health care: A call for open science. Patterns (N Y). 2021;2:100347.
60. Celi LA, Cellini J, Charpignon ML, Dee EC, Dernoncourt F, Eber R, Mitchell WG, Moukheiber L, Schirmer J, Situ J, Paguio J, Park J, Wawira JG, Yao S; for MIT Critical Data. Sources of bias in artificial intelligence that perpetuate healthcare disparities-A global review. PLOS Digit Health. 2022;1:e0000022.
61. Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019;28:231-237.
62. Yu KH, Kohane IS. Framing the challenges of artificial intelligence in medicine. BMJ Qual Saf. 2019;28:238-241.
63. Magrabi F, Ammenwerth E, McNair JB, De Keizer NF, Hyppönen H, Nykänen P, Rigby M, Scott PJ, Vehko T, Wong ZS, Georgiou A. Artificial Intelligence in Clinical Decision Support: Challenges for Evaluating AI and Practical Implications. Yearb Med Inform. 2019;28:128-134.
64. Goodman KE, Morgan DJ, Hoffmann DE. Clinical Algorithms, Antidiscrimination Laws, and Medical Device Regulation. JAMA. 2023;329:285-286.
65. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK, Ashrafian H, Beam AL, Chan AW, Collins GS, Deeks ADJJ, ElZarrad MK, Espinoza C, Esteva A, Faes L, di Ruffano LF, Fletcher J, Golub R, Harvey H, Haug C, Jonas CH, Keane PA, Kelly CJ, Lee AY, Lee CS, Manna E, Matcham J, McCradden M, Monteiro J, Mulrow C, Oakden-Rayner L, Paltoo D, Panico MB, Price G, Rowley S, Savage R, Sarkar R, Vollmer SJ, Yau C. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Lancet Digit Health. 2020;2:e537-e548.
66. Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ; SPIRIT-AI and CONSORT-AI Working Group; SPIRIT-AI and CONSORT-AI Steering Group; SPIRIT-AI and CONSORT-AI Consensus Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med. 2020;26:1351-1363.
67. DECIDE-AI Steering Group. DECIDE-AI: new reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence. Nat Med. 2021;27:186-187.
68. Raja S, Litle VR. The critical role of learning from investigating and debriefing adverse events. J Thorac Dis. 2021;13:S3-S7.
69. Lusk C, DeForest E, Segarra G, Neyens DM, Abernathy JH 3rd, Catchpole K. Reconsidering the application of systems thinking in healthcare: the RaDonda Vaught case. Br J Anaesth. 2022;129:e61-e62.
70. Beaulieu-Jones BR, Wilson S, Howard DS, Rasic G, Rembetski B, Brotschi EA, Pernar LI. Defining a High-Quality and Effective Morbidity and Mortality Conference: A Systematic Review. JAMA Surg. 2023;158:1336-1343.
71. Singh H. Editorial: Helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf. 2014;40:99-101.
72. Banham-Hall E, Stevens S. Hindsight bias critically impacts on clinicians' assessment of care quality in retrospective case note review. Clin Med (Lond). 2019;19:16-21.
73. Peters DH. The application of systems thinking in health: why use systems thinking? Health Res Policy Syst. 2014;12:51.
74. Astik GJ, Olson APJ. Learning from Missed Opportunities Through Reflective Practice. Crit Care Clin. 2022;38:103-112.
75. Lighthall GK, Vazquez-Guillamet C. Understanding Decision Making in Critical Care. Clin Med Res. 2015;13:156-168.
76. Webster CS, Taylor S, Weller JM. Cognitive biases in diagnosis and decision making during anaesthesia and intensive care. BJA Educ. 2021;21:420-425.