INTRODUCTION
Surgical site infections (SSIs) continue to occur and remain a significant cause of disability among surgical patients, despite substantial advances in our understanding of their epidemiology, pathogenesis, and prevention. This short review explains how the occurrence of SSI has emerged as a quality indicator in hospital quality assurance programmes worldwide, and identifies current limitations of SSI rates as a benchmarking tool and areas for further research.
HOSPITAL-ACQUIRED INFECTION RATES AS PERFORMANCE MEASURES
In the past two decades, we have witnessed striking changes in the way healthcare systems deliver medical services and patients purchase them. The recognition that patients may be exposed to preventable, potentially harmful process errors (especially during hospitalization), together with the rising costs of healthcare, has created a climate of growing demand by governments and consumers for hospital performance measures and quality assurance programmes[1,2].
Medical mistakes and accidents that lead, or may lead, to injury during the course of patient care have long been known to occur; however, their true magnitude and consequences have been recognized only recently. Similarly, although the concept of measuring and monitoring adverse events that arise in hospitalized patients as a direct or indirect consequence of medical care was introduced more than three decades ago[3], the term “adverse event” only gained popularity and substance in 1991, with the publication of the shocking Harvard Medical Practice Study I (HMPS-I) by Brennan et al[4]. This was the first large-scale study to measure and quantify with scientific rigor the occurrence of adverse events and medical negligence in hospitalized patients[4,5]. In the HMPS-I study, over 30 000 medical records of non-psychiatric patients discharged from 51 acute care hospitals in New York State in 1984 were randomly sampled and reviewed for evidence of adverse events and negligence[4]. An adverse event was defined as an injury that was caused by medical management (rather than the underlying disease) and that prolonged the hospitalization, produced a disability present at the time of discharge, or both. Negligence was deemed to have occurred when the care provided to the patient fell below the standard expected of physicians in the community. The authors found that the statewide incidence rate of adverse events was 3.7%, with 27.6% of these adverse events due to negligence. Although 70.5% of adverse events led to minor or moderate health impairment with complete recovery, 2.6% caused permanent total disability and 13.6% caused death[4].
One decade later, the international medical community was once again shocked by the publication of the report “To Err is Human: Building a Safer Health System” by the US Institute of Medicine[6]. This report found that preventable medical errors occurred frequently in hospitals in the US and abroad, and were responsible for annual costs in the billions of dollars, prolonged hospital stays, and severe permanent physical disability. It estimated that about 7% of hospitalized patients are exposed to potential harm from medication errors, and that up to 17% of patients admitted to an intensive care unit may suffer a severe adverse event[6]. Largely on the basis of the HMPS-I results, the number of deaths attributable to preventable medical adverse events in US hospitals was conservatively estimated to lie somewhere between 44 000 and 98 000 per year, exceeding the number of deaths due to motor vehicle crashes, breast cancer, or AIDS[6].
After this report, the recommendation to expand reporting of serious adverse events and medical errors, particularly mandatory reporting, gained attention[7]. Mandatory public disclosure of hospital performance measures was further catalyzed by demands from consumers, who argued that users of the healthcare system have the right to know about adverse events and the performance of healthcare providers[7]. As of 2002, only 20 states in the US had mandatory reporting systems for hospital adverse events, and the types of adverse events reported varied widely[7]. Prior to 2004, only two states (Pennsylvania and Illinois) had legislation requiring healthcare providers to collect and publicly disclose healthcare-associated infection (HAI) rates, including SSI rates[8,9]. In 2004, two additional states, Missouri and Florida, passed disclosure laws[8,9]. As of March 2006, the number of states had risen to seven[10] and, by the end of 2006, laws for mandatory public reporting of HAI rates had been enacted in 15 states[11]. The specific objective of mandatory public reporting is the comparison of performance between different healthcare providers[8]. Comparisons of HAI rates between hospitals and countries are often used to draw conclusions about the quality of healthcare and infection control practice[12]. Despite this, there are no controlled published data demonstrating that public reporting of rates of HAI, SSI or other adverse events improves patient outcomes or the performance of healthcare providers.
RATES OF SSI AS QUALITY INDICATORS
With the introduction of quality assurance into healthcare delivery, there has been a proliferation of studies comparing patient outcomes for similar conditions across healthcare facilities. Since the 1990s, increasing interest has been directed at incorporating clinical adverse events as quality indicators in hospital quality assurance programmes[5]. Adverse postoperative events, and especially SSI rates after specific procedures, gained popularity as hospital quality indicators in the 1980s[13,14], and are currently among the most widely used hospital quality indicators worldwide[5,15-17]. Other outcomes or processes frequently proposed as measures of quality in surgical care include postoperative mortality, postoperative long-term survival, postoperative functional status and health-related quality of life, other postoperative morbidity (e.g. anastomotic leak, deep vein thrombosis), patient satisfaction, postoperative length of stay, costs, and access[5,18]. Robust evidence shows that programmes for continuous quality improvement in surgical care, based on the measurement and monitoring of outcome-based and process-based quality indicators with periodic feedback to providers and managers, can be very effective in reducing postoperative complications, patient mortality, and costs[19-22].
The public health importance of a health-related adverse event, and the need to keep that event under strict surveillance, are determined by both quantitative and qualitative parameters, which can be summarized as follows[23,24]: (1) the frequency with which the event occurs in the population under study (as measured, for instance, by incidence or prevalence rates); (2) the severity of the disability that the event causes in patients (as measured, for instance, by prolongation of hospital stay, impairment in quality of life, mortality, etc.); (3) the extent to which the adverse event can be prevented or mitigated by applying scientifically validated clinical guidelines or what is considered good clinical practice by the scientific community; (4) the direct and indirect costs associated with the occurrence of the adverse event; (5) the public interest; (6) the availability of a methodology for the accurate and timely detection of the event; and (7) when the event is to be used as a performance measure, the availability of an accurate methodology to adjust for differences in the distribution of factors that determine the risk of developing the event. Although no quality indicator simultaneously fulfills all these criteria, the rates of SSI after selected surgical procedures and the rates of central venous catheter-associated bloodstream infections are considered to meet most of these requirements[9]. Accordingly, the measurement and monitoring of the occurrence of these HAIs, as well as the adherence of healthcare providers to recommended practices to prevent them (e.g. appropriate insertion of central venous catheters, surgical antimicrobial prophylaxis, etc.), are considered priorities[9]. Some authors argue, however, that the uncertainty about the “preventable fraction” of HAIs (i.e. how much the rate of an HAI can be reduced by maximum prevention efforts) and our current empirical limitations in risk-adjustment methodologies create ambiguity about using infection rates to determine whether infection-prevention efforts are adequate in a given facility or unit[25].
THE NEED TO ADJUST THE RATES OF SSI FOR CASE MIX
Identifying groups of patients with differing risks of developing an SSI may serve two distinct but related purposes. First, by stratifying patients according to their risk of developing an SSI, one can improve the efficiency of surveillance programmes by identifying high-risk patients and performing targeted surveillance on this selected group. Second, SSI risk adjustment allows for meaningful comparison of SSI rates among institutions or surgeons. In the remainder of this section, we shall focus on the second purpose.
Practicing surgeons know well that the risk of a patient developing an SSI is hard to predict. Very often, patients in whom several risk factors are present do not develop an SSI, while patients in whom an SSI was not among the expected adverse outcomes eventually develop an infection. This reflects not only the difficulty of predicting SSI risk on an individual basis, but also the more general difficulty of predicting SSI occurrence in a given population. The risk of developing an SSI is influenced by the complex interaction of factors present before, during and after the surgical procedure[26]. These factors include characteristics inherent to the procedure and the operating theatre (so-called extrinsic factors) and characteristics inherent to the patient (so-called intrinsic factors)[26,27]. The factors reported to influence (i.e. increase or decrease) the risk of developing an SSI are very numerous. Depending on the particular distribution of known (or unknown) risk factors for SSI in each patient sample, two or more hospitals or surgeons may experience different rates of SSI for reasons other than the quality of surgical care provided to their patients. Thus, for an SSI rate to be considered a valid indicator of the quality of care, a proper adjustment for patient case mix is essential, so that meaningful comparisons of SSI rates can be made among surgeons, among institutions, or over time[28,29]. So far, a significant hindrance to developing meaningful hospital-acquired infection rates that can be used for intra- and inter-hospital comparisons has been the lack of an adequate means of adjusting for case mix[28,29]. Adjusting an infection rate for case mix is the process by which the effects of differences in the composition (i.e. the distribution of risk factors) of the populations being compared are minimized through statistical methods[18]; a worked example is sketched below. In this context, the comparison of crude rates of SSI (i.e. without adjustment for case mix) may lead to meaningless conclusions about the quality of care provided by a hospital and, more generally, about its performance[30]. Currently, organizations such as the Society for Healthcare Epidemiology of America (SHEA), the Association for Professionals in Infection Control and Epidemiology (APIC) and the Hospital Infection Control Practices Advisory Committee (HICPAC) recommend that, for purposes of public or private reporting, only rates of HAI that incorporate an adjustment for infection risk be reported[8,9,11]. In the specific case of SSI rates, the use of the National Nosocomial Infections Surveillance (NNIS) system index for adjusting the risk of infection is advised[8,9,11].
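To illustrate one common approach to case-mix adjustment, the following sketch computes a standardized infection ratio by indirect standardization: observed SSIs in a hospital are compared with the number expected given that hospital's own mix of risk strata and a set of benchmark stratum-specific rates. All strata, counts and reference rates in the sketch are invented for illustration and are not drawn from any surveillance system.

    # Minimal sketch of indirect standardization for case-mix adjustment.
    # Strata, counts and reference rates are illustrative assumptions only.

    # For each risk stratum: (operations at this hospital,
    #                         observed SSIs at this hospital,
    #                         benchmark SSI rate for that stratum)
    strata = {
        "risk_0": (400, 4, 0.015),
        "risk_1": (250, 8, 0.030),
        "risk_2": (100, 9, 0.055),
    }

    observed = sum(ssi for _, ssi, _ in strata.values())
    expected = sum(n_ops * ref_rate for n_ops, _, ref_rate in strata.values())

    # Standardized infection ratio: observed/expected SSIs given the
    # hospital's own case mix; > 1 suggests more infections than the
    # benchmark predicts, < 1 fewer.
    sir = observed / expected
    print(f"Observed = {observed}, expected = {expected:.1f}, SIR = {sir:.2f}")

In this hypothetical hospital the crude rate (21 SSIs in 750 operations) says little by itself; the ratio of observed to expected infections, which accounts for the distribution of patients across risk strata, is what permits a guarded comparison with other institutions.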
THE NATIONAL NOSOCOMIAL INFECTIONS SURVEILLANCE RISK INDEX
There are a number of requirements that an SSI risk-adjustment methodology should meet if it is to be used for routine epidemiologic surveillance[27-29]. Ideally, an SSI risk-adjustment methodology should be: (1) clinically credible, in the sense that it adjusts the risk of infection for factors whose relationship with the risk of infection is clinically easy to understand; (2) accurate; (3) simple (for example, an additive scale); (4) applicable to all patients and surgical procedures at the end of the operation; (5) composed of a small number of significant variables that are easy to measure and collect; (6) transportable, that is, prospectively validated on specific services or in individual hospitals to document that it predicts a patient’s risk of SSI accurately in populations other than the one in which it was developed; and (7) above all, clinically effective, in the sense that it provides useful additional information to clinicians, for instance in terms of discrimination[27-29].
So far, no published SSI risk-adjustment methodology fulfills all these requirements. The most widely used SSI risk-adjustment methodology worldwide is the NNIS system’s risk index, described in 1991 by Culver et al[28]. Briefly, the NNIS system’s risk index includes three risk factors for SSI: an American Society of Anesthesiologists’ preoperative physical status score of 3, 4, or 5; a surgical wound classified as contaminated or as dirty or infected; and an operation lasting more than T hours, with T representing the approximate 75th percentile of the duration of surgery for the surgical procedure performed[28]. For cholecystectomy, colon surgery, appendectomy and gastric surgery, the surgical approach (laparoscopic or open) is also incorporated into the score[31]. The NNIS system’s risk index is procedure-specific, meaning that SSI risk strata are calculated for pre-specified surgical procedure categories. Each factor present in the patient by the end of the operation adds one point, and the sum over all factors determines the SSI risk stratum in which the patient is placed: 0 through 4 for cholecystectomy, colon surgery, appendectomy and gastric surgery, and 0 through 3 for all other procedure categories[31].
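To make the additive nature of the index concrete, the sketch below computes the basic three-factor NNIS-style score from the variables described above. The function and parameter names are ours, the cut point T must be supplied for the procedure category of interest, and the procedure-specific laparoscopy adjustment mentioned above is deliberately omitted for simplicity.

    # Basic three-factor NNIS-style SSI risk score (illustrative sketch;
    # the laparoscopy adjustment for selected procedures is not shown).
    def nnis_risk_score(asa_class: int,
                        wound_class: str,
                        duration_hours: float,
                        t_hours: float) -> int:
        """Return the basic NNIS risk stratum (0-3)."""
        score = 0
        if asa_class >= 3:                        # ASA physical status 3, 4 or 5
            score += 1
        if wound_class in ("contaminated", "dirty-infected"):
            score += 1
        if duration_hours > t_hours:              # T ~ 75th percentile of duration
            score += 1                            # for this procedure category
        return score

    # Example: ASA 3 patient, clean-contaminated wound, 2.5-h colon operation,
    # assuming T = 3 h for colon surgery -> stratum 1.
    print(nnis_risk_score(3, "clean-contaminated", 2.5, 3.0))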
Although this index is indeed clinically credible, simple, and composed of a few variables that are easy to measure and collect at the end of surgery, its transportability and clinical effectiveness have not been extensively evaluated outside US hospitals[32]. The problem of transportability is of paramount importance: if a model developed at one site does not apply at other sites, those facilities may receive a rating better or worse than they deserve. Therefore, the use of such models at facilities other than those where they were developed should initially involve the careful application of validation techniques to identify specific areas of inconsistency between predictions and outcomes[33], as illustrated in the sketch below. Another problem with such models is that, with rapid changes in clinical practice over time, any predictive model for patient outcome may have a limited life span. To use a model over a lengthy period, one should conduct routine validation and updating at regular intervals to ensure that conditions in the validation population have not changed[33]. This is especially true for the NNIS system’s SSI risk index: the index was described almost two decades ago and, since then, pre- and postoperative strategies for the prevention of SSIs have changed dramatically. One of these changes is illustrated by the decreasing length of postoperative hospital stay. In recent decades, postoperative stays have become progressively shorter, so an ever-increasing number of SSIs become evident only after the patient has been discharged from the hospital[34]. The NNIS risk index was developed at a time when few hospitals around the world had post-discharge SSI surveillance programmes; in fact, the original validation of the index was performed in a sample of hospitals of which only 30% had some kind of post-discharge surveillance strategy[28]. At first glance, therefore, the NNIS risk index is best suited to assessing in-hospital SSI risk. In recent years, evidence has accumulated showing that the factors classically associated with SSI occurrence before hospital discharge are poor predictors of the infections that develop after discharge[35,36]. To date, however, no systematic evaluation has been conducted to assess the impact of SSIs diagnosed after hospital discharge on the performance of the NNIS risk index.
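As an illustration of the kind of local validation alluded to above, the sketch below compares stratum-specific SSI rates observed at a single facility with assumed benchmark rates from the population in which an index was developed; large or systematic discrepancies would flag areas where the model does not transport well. All figures are invented for illustration.

    # Illustrative local calibration check for a risk index: compare the
    # observed SSI rate in each risk stratum at one facility against
    # assumed benchmark rates. All numbers are invented.
    benchmark_rates = {0: 0.015, 1: 0.030, 2: 0.055, 3: 0.090}
    local_counts = {0: (300, 6), 1: (200, 9), 2: (80, 6), 3: (20, 3)}  # (operations, SSIs)

    for stratum, (n_ops, n_ssi) in local_counts.items():
        local_rate = n_ssi / n_ops
        ratio = local_rate / benchmark_rates[stratum]
        print(f"stratum {stratum}: local {local_rate:.3f} vs "
              f"benchmark {benchmark_rates[stratum]:.3f} (ratio {ratio:.2f})")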
Another empirical challenge for SSI risk-adjustment models is the problem of incomplete post-discharge follow-up. Post-discharge surveillance of SSI is laborious, time-consuming and costly, but without structured post-discharge surveillance efforts these infections will be missed. In the NNIS risk index, patients not reached by post-discharge surveillance are counted as uninfected (provided that they did not develop the infection during the hospital stay)[31], artificially deflating the measured SSI rate, as the simple calculation below illustrates. The problem of incomplete follow-up after discharge has been largely overlooked in SSI risk modeling, and there are few reports in the literature in which the problem of missing post-discharge information has been explicitly accounted for[37-42]. In a recent study[43], we found that incorporating a post-discharge surveillance indicator into the NNIS risk index can add potentially useful clinical information, although concerns about the mechanism that leads to missing post-discharge information must be borne in mind.
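A simple, entirely hypothetical calculation shows how this convention deflates the measured rate when post-discharge follow-up is incomplete; the numbers below are invented for illustration only.

    # Illustrative effect of counting patients lost to post-discharge
    # follow-up as uninfected. All numbers are invented.
    n_patients         = 100
    ssi_in_hospital    = 2     # SSIs detected before discharge
    n_followed_up      = 60    # discharged patients reached by post-discharge surveillance
    ssi_post_discharge = 3     # SSIs detected among those 60

    # Patients neither infected in hospital nor reached after discharge
    # are counted as uninfected in the denominator.
    n_unreached = n_patients - ssi_in_hospital - n_followed_up

    naive_rate = (ssi_in_hospital + ssi_post_discharge) / n_patients

    # If the unreached patients developed post-discharge SSIs at the same
    # attack rate as those who were followed up, the burden would be higher.
    extra = (ssi_post_discharge / n_followed_up) * n_unreached
    adjusted = (ssi_in_hospital + ssi_post_discharge + extra) / n_patients

    print(f"naive rate = {naive_rate:.3f}, adjusted estimate = {adjusted:.3f}")

Under these assumed figures the naive rate is 5.0%, whereas extrapolating the post-discharge attack rate to the 38 unreached patients suggests a burden closer to 6.9%; the true figure depends on why those patients were not reached, which is precisely the concern about the missingness mechanism noted above.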
CONCLUSION
Surveillance of HAI is an indispensable tool in infection control. The need to compare rates in one institution with those in others has led to the development of national surveillance systems and risk-stratification models. A great deal of progress toward comparability of SSI rates has been made, but the problem of risk stratification for the purpose of comparing patient populations is still under debate and needs further research.