Original Research: CRITICAL CARE MEDICINE

Variation in ICU Risk-Adjusted Mortality*: Impact of Methods of Assessment and Potential Confounders

Michael W. Kuzniewicz, MD, MPH; Eduard E. Vasilevskis, MD; Rondall Lane, MD, MPH; Mitzi L. Dean, MS, MHA; Nisha G. Trivedi, MD; Deborah J. Rennie, BA; Ted Clay, MS; Pamela L. Kotler, PhD; R. Adams Dudley, MD, MBA
Author and Funding Information

*From the Institute for Health Policy Studies, University of California, San Francisco, San Francisco, CA.

Correspondence to: Michael W. Kuzniewicz, MD, MPH, Assistant Adjunct Professor, Neonatology, University of California, San Francisco, 533 Parnassus Ave, UC Hall U585-F, San Francisco, CA 94143-0748; e-mail: kuzniewiczm@peds.ucsf.edu



Chest. 2008;133(6):1319-1327. doi:10.1378/chest.07-3061

Background: Federal and state agencies are considering ICU performance assessment and public reporting; however, an accurate method for measuring performance must be selected. In this study, we determine whether a substantial variation in ICU mortality performance still exists in modern ICUs, and compare the predictive accuracy, reliability, and data burden of existing ICU risk-adjustment models.

Methods: A retrospective chart review of 11,300 ICU patients from 35 California hospitals from 2001 to 2004 was performed. We calculated standardized mortality ratios (SMRs) for each hospital using the mortality probability model III (MPM0 III), the simplified acute physiology score (SAPS) II, and the acute physiology and chronic health evaluation (APACHE) IV risk-adjustment models. We compared discrimination, calibration, data reliability, and abstraction time for the models.

Results: Regardless of the model used, there was a large variation in SMRs among the ICUs studied. The discrimination and calibration were adequate for all risk-adjustment models. APACHE IV had the best discrimination (area under the receiver operating characteristic curve [AUC], 0.892) compared with MPM0 III (AUC, 0.809) and SAPS II (AUC, 0.873); p < 0.001 for both comparisons. The models differed substantially in data abstraction times, as follows: MPM0 III, 11.1 min (95% confidence interval [CI], 8.7 to 13.4); SAPS II, 19.6 min (95% CI, 17.0 to 22.2); and APACHE IV, 37.3 min (95% CI, 28.0 to 46.6).

Conclusions: We found substantial variation in the ICU risk-adjusted mortality rates that persisted regardless of the risk-adjustment model. With unlimited resources, the APACHE IV model offers the best predictive accuracy. If constrained by cost and manual data collection, the MPM0 III model offers a viable alternative without a substantial loss in accuracy.


Over the last decade, significant effort has been directed at improving the quality of medical care in the United States.1 The ICU has been a focus because of the acuity of illness and the complexity of care.2,3 Many organizations, including the Joint Commission on Accreditation of Healthcare Organizations (JCAHO),4 the Centers for Medicare and Medicaid Services, and the California Office of Statewide Health Planning and Development,5 are considering ICU performance assessment and public reporting.

Comparing hospitals using ICU mortality rates is complicated. Mortality is significantly affected by patient demographics, comorbidities, and severity of illness.6 Risk-prediction models have been developed to adjust for these factors.7–10 The following three most widely used models have been updated over the past few years: the mortality probability model III (MPM0 III);11 the simplified acute physiology score (SAPS) III;12,13 and the acute physiology and chronic health evaluation (APACHE) IV (APACHE is a registered trademark of Cerner Corporation, Kansas City, MO).

These models differ substantively in the number and type of variables used to assess mortality risk. Prior studies14–18 have compared older versions of these models; however, the predictive accuracy of the updated models has not been directly compared. In addition, differences in the data collection burden have not been evaluated. Clinicians or outside groups who wish to evaluate ICU performance therefore do not have the data they need to make an informed choice among models.

Patient and hospital factors not included in the models may influence ICU outcomes as well. The impact of do-not-resuscitate (DNR) orders, the utilization of hospice services for terminally ill patients,19–21 and the rate of specialist consultation have not been evaluated.22–24

The California Intensive Care Outcomes (CALICO) project was designed to identify valid methods for assessing ICU mortality performance. The objectives of the study were to determine whether substantial variation in ICU mortality performance still exists in modern ICUs, and to compare the ICU risk-prediction models for predictive accuracy, reliability, and data burden. In addition, we assessed the impact of patient factors and hospital characteristics that are not included in the risk-prediction models on performance ratings.

Materials and Methods

Hospital and ICU Selection

All California hospitals were sent a recruiting packet, and a network of volunteer hospitals was established through follow-up mailings and regional presentations. Hospitals that volunteered provided nurses for data abstraction.

Patient Selection

Hospitals collected data on consecutive patients who were discharged from the hospital or died after an eligible ICU admission. The inclusion criteria were age ≥ 18 years, admission to an adult ICU, and ICU stay of ≥ 4 h. We excluded burn, trauma, and coronary artery bypass graft patients, for whom condition-specific risk-adjustment models already exist. The sample size at each hospital was a function of its annual ICU admissions. Hospitals with < 1,200 admissions per year were asked to submit data on 200 patients; hospitals with 1,200 to 2,400 admissions per year, data on 400 patients; and hospitals with > 2,400 admissions per year, data on 600 patients. We collected data from August 2001 to September 2004.

Risk Models and Variables

We used the MPM0 III and SAPS II models as specified in their original and updated publications.7,9,11 SAPS III became available after data collection for this project began, so we did not capture all of its required data elements.12,13 For the APACHE model, we used the original APACHE III publication8 and the specifications for APACHE IV detailed on the Web sites of the Cerner Corporation25,26 and JCAHO.4 SAPS II and APACHE IV use physiologic data from the first 24 h after ICU admission. In contrast, MPM0 III uses physiologic data from 1 h before or after ICU admission.

Data Collection

Data collectors attended a training session, completed sample chart abstractions, and received feedback on their performance before starting data collection. Data were entered into custom software that incorporated automated checks on data quality.

Interrater Reliability

Data were reabstracted by auditors on a 5% random sample of patients. κ statistics were calculated for interrater reliability between the data abstractor and the auditor. The auditors were clinical nurses who were trained by the authors (R.A.D., M.W.K., and R.L.) and completed extensive sample chart abstraction.
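For readers who want to reproduce this kind of audit, the sketch below shows how percent agreement and a weighted κ statistic could be computed for one ordinal variable. It is a minimal illustration; the example values are hypothetical rather than CALICO data.

```python
# Sketch: abstractor-vs-auditor agreement on one ordinal variable
# (e.g., a Glasgow coma scale component). Example values are hypothetical.
from sklearn.metrics import cohen_kappa_score

abstractor = [3, 4, 4, 5, 6, 6, 5, 4, 3, 6]  # values recorded by the data abstractor
auditor = [3, 4, 5, 5, 6, 6, 5, 4, 4, 6]     # values recorded on reabstraction by the auditor

# Raw percent agreement
agreement = sum(a == b for a, b in zip(abstractor, auditor)) / len(abstractor)

# Linearly weighted kappa gives partial credit for near-misses on an ordinal scale
kappa = cohen_kappa_score(abstractor, auditor, weights="linear")

print(f"agreement = {agreement:.1%}, weighted kappa = {kappa:.2f}")
```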

Model Development

We divided our data into a development sample (60%) and a validation sample (40%). In the development sample, we used logistic regression to reestimate the coefficients of the models. Because the APACHE IV model incorporates a large number of diagnoses, even with our large data set it was not possible to reestimate the coefficients for 10 diagnoses (74 patients). Consequently, we took the following three approaches to reestimation: (1) reestimating the model after grouping these 10 diagnoses into a single category; (2) reestimating the model after excluding patients with these diagnoses; and (3) reestimating all coefficients in the model except the diagnosis coefficients, which were kept as originally specified.
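A minimal sketch of the first reestimation approach is shown below, assuming a patient-level table with a hospital mortality outcome, an acute physiology score, and an APACHE diagnosis category. The file name, column names, and the low-volume threshold are hypothetical and are not the study's actual specification.

```python
# Sketch of reestimation approach 1: group low-volume APACHE diagnoses into a
# single category, then refit a logistic regression for hospital mortality on
# the development sample. File/column names and the threshold are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("calico_patients.csv")          # hypothetical patient-level file
dev = df[df["split"] == "development"].copy()    # the 60% development sample

# Collapse diagnoses with too few patients to support their own coefficient
counts = dev["apache_diagnosis"].value_counts()
rare = counts[counts < 10].index
dev["dx_grouped"] = dev["apache_diagnosis"].where(
    ~dev["apache_diagnosis"].isin(rare), "other_low_volume"
)

# Reestimate coefficients: physiology score plus grouped diagnosis indicators
fit = smf.logit("hospital_death ~ acute_physiology_score + C(dx_grouped)", data=dev).fit()
print(fit.summary())
```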

Data Collection Burden

We developed a data abstraction tool for each model and compared the time needed to complete the abstraction. For 30 randomly selected patients, three auditors reabstracted the data, each using a different model. The auditors rotated models so that each auditor used each model on one third of the patients. The auditors were the same as those used to assess interrater reliability.
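Abstraction times were later compared with a repeated-measures analysis of variance (see Statistical Analysis). A minimal sketch of that comparison is shown below; the long-format file and its column names are hypothetical.

```python
# Sketch: repeated-measures ANOVA on abstraction time, with each of the 30
# patients abstracted once under each model. File/column names are hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

times = pd.read_csv("abstraction_times.csv")  # columns: patient_id, model, minutes
# times["model"] takes the values "MPM0_III", "SAPS_II", and "APACHE_IV"

result = AnovaRM(data=times, depvar="minutes", subject="patient_id", within=["model"]).fit()
print(result)  # F test for a model effect on abstraction time
```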

Hospital Sample Comparison

We compared the hospital characteristics (ie, number of beds, JCAHO accreditation, Accreditation Council for Graduate Medical Education [ACGME] residency, medical school affiliation, ownership, and number of medical/surgical ICU beds) of our sample with all California hospitals with > 50 hospital beds using the 2006 American Hospital Association data.

Additional Risk Factors

We obtained hospital-level data on the utilization of hospice and physician resources from 1999 to 2003 from the Dartmouth Atlas (www.dartmouthatlas.org).23 For each hospital, we examined the mean number of medical specialist visits per decedent, the percentage of decedents with ≥ 10 physician visits, and the percentage of decedents in a hospice during the last 6 months of life.

Statistical Analysis

Statistical analyses were performed using a statistical software package (STATA, version 9.0; StataCorp; College Station, TX).27 We compared the characteristics of CALICO hospitals with all California hospitals using the χ2 test and the Fisher exact test for dichotomous and categorical variables, respectively, and the Student t test for continuous variables.
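As an illustration of these comparisons, the sketch below applies a χ2 test, a Fisher exact test, and a t test to hypothetical hospital characteristics; it mirrors the type of analysis described rather than the actual CALICO data.

```python
# Sketch: comparing CALICO hospitals with all California hospitals on a
# dichotomous characteristic (2 x 2 table) and a continuous one.
# All counts and values below are hypothetical.
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind

# Rows: CALICO vs all California; columns: ACGME residency yes / no
table = [[12, 23], [98, 267]]
chi2_stat, p_chi2, _, _ = chi2_contingency(table)
_, p_fisher = fisher_exact(table)   # exact test for the same 2 x 2 table

# Continuous characteristic, e.g., number of medical/surgical ICU beds
calico_beds = [14, 20, 9, 32, 16]
state_beds = [10, 22, 8, 18, 25, 12, 30]
t_stat, p_t = ttest_ind(calico_beds, state_beds)

print(f"chi-square p = {p_chi2:.3f}, Fisher p = {p_fisher:.3f}, t test p = {p_t:.3f}")
```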

For each model, we evaluated hospital performance using standardized mortality ratios (SMRs) with 95% confidence limits.28 The SMR was calculated by dividing the observed hospital mortality rate by the mean predicted mortality rate, as determined by the risk-adjustment model. We determined model discrimination by calculating the area under the receiver operating characteristic curve (AUC). We assessed calibration with Hosmer-Lemeshow (H-L) goodness-of-fit tests and calibration curves.29 Pairwise comparisons of the AUCs were performed using the DeLong method.30 Abstraction times were compared among the models using repeated-measures analysis of variance.27
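The sketch below illustrates these calculations for one model's predictions: the overall SMR, the AUC, and a decile-based H-L statistic. The file and column names are hypothetical; the decile grouping is a common convention, and the degrees of freedom shown simply match the 10 reported in Table 3.

```python
# Sketch: overall SMR, discrimination (AUC), and a decile-based Hosmer-Lemeshow
# statistic for one model's predictions. File/column names are hypothetical.
import pandas as pd
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

val = pd.read_csv("validation_sample.csv")   # columns: died (0/1), pred (predicted probability)

# Standardized mortality ratio: observed deaths / expected deaths
smr = val["died"].sum() / val["pred"].sum()

# Discrimination: area under the ROC curve
auc = roc_auc_score(val["died"], val["pred"])

# Hosmer-Lemeshow statistic across deciles of predicted risk
val["decile"] = pd.qcut(val["pred"], 10, labels=False, duplicates="drop")
g = val.groupby("decile").agg(obs=("died", "sum"), n=("died", "size"), exp=("pred", "sum"))
hl = (((g["obs"] - g["exp"]) ** 2) / (g["exp"] * (1 - g["exp"] / g["n"]))).sum()
p_value = chi2.sf(hl, df=len(g))   # Table 3 reports 10 degrees of freedom

print(f"SMR = {smr:.2f}, AUC = {auc:.3f}, H-L = {hl:.1f} (p = {p_value:.3f})")
```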

We compared the original rankings of hospitals (using the APACHE IV model) to their rankings after excluding patients who had DNR orders within the first day of ICU admission, using Spearman rank correlation coefficients. We also assessed the impact of hospice access and physician utilization measures as independent predictors of ICU mortality and SMR rankings. Each hospital-level variable from the Dartmouth Atlas was assumed to apply equally to each patient cared for within the specified hospital. A new mortality prediction model was created using each nonclinical variable, and the original APACHE IV mortality predicted probability as independent predictors of hospital mortality. Coefficients and new mortality predictions were estimated using the development data set. We then compared the SMR rankings of hospitals with these additional variables to their original rankings using Spearman rank correlation coefficients. The institutional review boards of the University of California, San Francisco, and the State of California approved the study.
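A minimal sketch of the ranking comparison is shown below, assuming per-hospital SMRs have already been computed under the original APACHE IV analysis and under a modified analysis (for example, after excluding early-DNR patients); the file and column names are hypothetical. The augmented mortality models that add a hospital-level nonclinical variable to the APACHE IV predicted probability would follow the same logistic regression pattern as the reestimation sketch above.

```python
# Sketch: Spearman rank correlation between hospital SMR rankings from the
# original analysis and from a modified analysis. Inputs are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

smrs = pd.read_csv("hospital_smrs.csv")   # columns: hospital_id, smr_original, smr_modified

rho, p = spearmanr(smrs["smr_original"], smrs["smr_modified"])
print(f"Spearman rank correlation = {rho:.2f} (p = {p:.3f})")
```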

Results

Thirty-five hospitals submitted data on 12,409 patients. A total of 1,047 patients did not meet the study entry criteria (714 patients were readmitted to the ICU, 43 had ICU stays of < 4 h, 266 had excluded diagnoses, and 24 were < 18 years of age). We excluded 62 patients (0.5%) because a mortality prediction for one or more of the models could not be generated due to incomplete data (19 patients were missing an outcome, 28 were missing ICU length of stay, 8 were missing pre-ICU length of stay, and 7 were missing the reason for ICU admission). A total of 11,300 patients were used to compare the models. The characteristics of CALICO hospitals did not differ from those of all California hospitals (Table 1). The patient characteristics are displayed in Table 2. The mean ICU in-hospital mortality rate was 15.6%.

Interrater Reliability

For physiologic variables, interrater reliability was excellent, with agreement ranging from 91.5 to 98.8%, and weighted κ statistics ranging from 0.72 to 0.96. The agreement on the overall Glasgow coma scale score was lower at 86% (κ = 0.55). The least reliable variable was the APACHE reason for ICU admission (agreement, 52.3%; κ = 0.51).

Predictive Performance of the Models

The discrimination and calibration of the APACHE IV model did not vary whether we used the model as originally specified or any of the three reestimation strategies. Because the performance did not vary, subsequent analyses used the APACHE IV model reestimated with low-volume diagnostic categories grouped into one category. Discrimination was high for all of the models; AUCs ranged from 0.809 to 0.892 (Table 3). The APACHE IV model was superior in discrimination to the MPM0 III model (p < 0.001) and the SAPS II model (p < 0.001). H-L statistics indicated no significant departures from perfect fit for the MPM0 III and SAPS II models. Although the H-L statistics were higher for the APACHE IV model, its calibration curve showed fit across deciles of risk comparable to that of the other models (Fig 1).

Overall, the SMRs for the models were as follows: MPM0 III, 1.04 (95% confidence interval [CI], 0.97 to 1.11); SAPS II, 1.04 (95% CI, 0.97 to 1.11); and APACHE IV, 1.03 (95% CI, 0.96 to 1.10). Stratifying by medical/surgical admission type, all three models had similar ratios of observed/expected deaths (Table 4). All models underpredicted the number of deaths in patients with pulmonary diagnoses. The MPM0 III and SAPS II models overpredicted the number of deaths for patients admitted secondary to genitourinary disorders, overdose/poisoning, and metabolic disorders; however, there were < 200 patients and 10 deaths in these categories.

Variation in Hospital Performance

We calculated the SMRs for the three models for each hospital. Regardless of the model used, there was a large variation in the SMR ranges among these hospitals, as follows: MPM0 III, 0.37 to 1.95; SAPS II, 0.59 to 1.97; and APACHE IV, 0.61 to 1.54. Figure 2 shows the SMR and 95% CI of each hospital that submitted data on at least 100 patients. We compared the SMRs of hospitals in relation to hospital size, medical school affiliation, and ACGME residency program using the APACHE IV model. The SMR of hospitals with < 300 total beds (1.01; 95% CI, 1.00 to 1.02) was approximately the same as that for hospitals with > 300 total beds (1.01; 95% CI, 1.00 to 1.02). Hospitals without a medical school affiliation had more deaths than expected (SMR, 1.08; 95% CI, 1.07 to 1.09), while hospitals with a medical school affiliation had fewer deaths than expected (SMR, 0.94; 95% CI, 0.93 to 0.95). Similarly, hospitals with an ACGME residency (SMR, 0.94; 95% CI, 0.93 to 0.95) had better performance than hospitals without a residency program (SMR, 1.06; 95% CI, 1.05 to 1.07).
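A sketch of the per-hospital SMR calculation behind Figure 2 is shown below, using the same hypothetical patient-level table as earlier. The exact Poisson interval for the observed death count is an assumption chosen for illustration, not necessarily the method used in the study.

```python
# Sketch: per-hospital SMRs with approximate 95% CIs. The exact Poisson
# interval for the observed death count is an illustrative assumption.
# File/column names are hypothetical.
import pandas as pd
from scipy.stats import chi2

df = pd.read_csv("calico_patients.csv")   # columns: hospital_id, died, pred

def smr_with_ci(group, alpha=0.05):
    obs, exp = group["died"].sum(), group["pred"].sum()
    # Exact Poisson limits for the observed count, divided by expected deaths
    lo = chi2.ppf(alpha / 2, 2 * obs) / (2 * exp) if obs > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (obs + 1)) / (2 * exp)
    return pd.Series({"smr": obs / exp, "ci_low": lo, "ci_high": hi, "n": len(group)})

hospital_smrs = df.groupby("hospital_id").apply(smr_with_ci)
print(hospital_smrs[hospital_smrs["n"] >= 100].sort_values("smr"))
```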

Data Collection Burden

The mean data abstraction times were as follows: MPM0 III model, 11.1 min (95% CI, 8.7 to 13.4 min); SAPS II model, 19.6 min (95% CI, 17.0 to 22.2 min); and APACHE IV model, 37.3 min (95% CI, 28.0 to 46.6 min). These differences were statistically significant at p < 0.001.

Additional Patient and Hospital Characteristics

Hospital rankings after excluding patients for whom DNR orders were instituted within the first 24 h after ICU admission were strongly correlated with the original rankings (Spearman rank correlation coefficient, 0.92). The Dartmouth Atlas variables were available for 10,719 patients (94.9%). The addition of the variable indicating hospice utilization did not significantly change the SMRs; the correlation between SMR rankings using the original APACHE IV model and the model including this variable was 0.99. For each additional specialist visit per decedent during the last 6 months of life, there was a corresponding 1.6% decrease in the odds of death (95% CI, 0.7 to 2.5%). For each percent increase in patients seeing ≥ 10 physicians during the last 6 months of life, there was a 1.8% decrease in the odds of death (95% CI, 0.9 to 2.8%). The SMR rankings between the original model and the models including these variables were strongly correlated, with coefficients of 0.88 and 0.86, respectively.

Discussion

In this project, we found substantial variation in ICU risk-adjusted mortality rates. The variation persisted regardless of the risk-adjustment model used, and even after adjustment for several patient and hospital characteristics that were not included in prior studies. The apparent variation in outcomes underscores both the need for clinicians to have tools for assessing quality and the rationale that JCAHO and others have expressed for measuring and publicly reporting ICU performance.31

Variation in outcomes may represent true differences in performance or merely the inability of the risk-adjustment models to account for unmeasured differences in case mix. There is no “gold standard” against which to judge the available models; however, all models achieved adequate calibration and discrimination. Although the APACHE IV model had superior discrimination to the MPM0 III model, a potential explanation, other than its being a better model, is that the MPM0 III model uses only physiologic data from within 1 h of ICU admission, whereas APACHE IV uses data from the first 24 h after ICU admission. Using data collected after ICU admission may improve mortality predictions but may also adjust for derangements that one would not want to consider in risk adjustment. For example, while hypotension from severe sepsis should be included, hypotension caused by a drug reaction after ICU admission should not. The SAPS III developers have limited the collection time for physiologic variables to the first hour after ICU admission.12,13

All models performed well in the elective surgery, emergency surgery, and medical subgroups. All three models underpredicted the number of deaths in patients with pulmonary diagnoses. A possible explanation is that modern ventilators may adequately correct the metabolic disturbances, lowering the mortality predictions, without changing the patient’s true risk of death. There was wide variation in how the models performed in genitourinary, overdose/poisoning, and metabolic diagnoses. The MPM0 III and SAPS II models overpredicted mortality in these patients, who may have metabolic disturbances that inflate the mortality predictions but whose conditions are treatable and do not carry a high mortality rate. This notion is supported by the mortality rates for these groups ranging from only 3 to 6.5%, compared with the overall ICU mortality rate of 15.6%. The APACHE IV model better predicts mortality in these three groups, which may be the result of including a specific reason for ICU admission in its risk prediction.

End-of-life care may influence assessments of mortality performance. When inpatient mortality is the end point, hospitals that pursue DNR status and offer inpatient palliative care risk looking worse, while hospitals that refer many patients to hospice may look better. However, we found that neither of these factors had a significant impact on assessments of hospital performance.

In terms of access to care, we found that patients in hospitals whose decedent populations had seen more specialists, or ≥ 10 physicians, in the last 6 months of life had a reduced risk of mortality. However, the impact of these factors on hospital rankings was small.

In evaluating the data collection burden, data for the APACHE IV model took abstractors twice as long to collect as data for the SAPS II model, and three times as long as data for the MPM0 III model. Data collection for the MPM0 III model is easiest because it is limited to data from around the time of ICU admission; collecting data for the APACHE IV model is the most time consuming because it requires data collection over the first ICU day plus a detailed reason for ICU admission. These differences may be less important if data collection can be automated through an electronic medical record, but as yet few hospitals have this capability.

Since the performance of all of the models deteriorates over time or when applied to populations distinct from the ones that were the basis for their development, another factor to consider is the ability to reestimate or customize the model to the population being studied.32,33 With its large number of variables and diagnostic categories, it is more difficult to reliably reestimate the coefficients of the APACHE IV model than those of the other models. Even with our large database, we were unable to reestimate all of the acute diagnosis coefficients without modifying the original model.

While our study was not designed to specifically examine the differences in hospital characteristics that were associated with performance, we found that hospitals affiliated with a medical school or an ACGME residency had fewer deaths than expected, while hospitals without such an affiliation had a higher number of deaths than expected. There was no difference in SMRs when comparing hospitals with < 300 beds to hospitals with ≥ 300 beds. Our sample size of 35 hospitals did not afford us the ability to look further at individual hospital characteristics to determine the factors associated with performance. However, the wide variation in ICU performance further illustrates the need for large-scale performance reporting so that this can be done.

There are several limitations to our study. Only 10.2% of the hospitals in California participated, although the participating hospitals were comparable to the spectrum of California hospitals as a whole. Many hospitals reported that they did not have the resources to participate in our study and provide a data collector. Consequently, our study may have been biased toward hospitals with more funding and quality-improvement personnel.

Since we collected an unequal number of patients at each hospital to minimize the burden on smaller hospitals, smaller hospitals have larger CIs for their SMR and less chance of being labeled an outlier. In this study, our intention was not to identify outliers, but merely to demonstrate the range of risk-adjusted mortality rates.

We were unable to include the updated SAPS III model in the comparison because of limitations with data collection. The time needed to abstract the variables for the SAPS III model may be shorter than that for SAPS II, since only data from within 1 h of ICU admission would be needed instead of data from the entire first day after ICU admission. Despite not having the most updated SAPS model, the AUC of our reestimated SAPS II model was comparable to that of the SAPS III model reported by its developers (SAPS II, 0.873; SAPS III, 0.848).13

Finally, we collected data over a 3-year period. Medical advances over this period may have decreased the SMRs of hospitals that submitted patients later in our data collection period, but when we examined the SMRs over time, we saw no temporal trend in hospital performance.

Our study was consistent with past studies15–18 showing that all three models had adequate discrimination, with the APACHE model being superior. As in prior studies, reestimation of the models was needed to achieve adequate calibration when they were applied to populations different from those on which each model had been developed. Unlike other studies, we offer data about additional factors to take into account when choosing a model, such as the interrater reliability of the variables, the data collection burden, and the ability to customize the model to a new population. In addition, we have included the most current versions of the APACHE and MPM models.

In summary, significant variation exists in risk-adjusted mortality rates among ICUs, regardless of the risk-adjustment model used, suggesting that routine performance assessment would be valuable. The selection of an ICU mortality risk-adjustment model by JCAHO, the Centers for Medicare and Medicaid Services, or a large state such as California for performance measurement will be a significant event for providers and patients. The final choice of a model will need to reflect value judgments in addition to empirical findings. We have shown that with unlimited resources, the APACHE IV model offers the best predictive accuracy; however, it may be hard to customize for the population to which it is being applied. The MPM0 III model offers a viable alternative without a substantial loss in accuracy, and it is simpler to collect data for and to customize to another population. In addition, the MPM0 III and SAPS III models use physiologic data only from within 1 h of ICU admission, which may be easier to collect and less likely to be affected by medical care delivered after ICU admission, resulting in a more accurate severity-of-illness assessment.

Abbreviations: ACGME = Accreditation Council for Graduate Medical Education; APACHE = acute physiology and chronic health evaluation; AUC = area under the receiver operating characteristic curve; CALICO = California Intensive Care Outcomes; CI = confidence interval; DNR = do not resuscitate; H-L = Hosmer-Lemeshow; JCAHO = Joint Commission on Accreditation of Healthcare Organizations; MPM0 III = mortality probability model III; SAPS = simplified acute physiology score; SMR = standardized mortality ratio

This research was supported by the California Office of Statewide Health Planning and Development, the Agency for Healthcare Research and Quality (grant R01 HS13919–01), a Robert Wood Johnson Foundation Investigator Award in Health Policy, and the Glaser Pediatric Research Network.

The authors have reported to the ACCP that no significant conflicts of interest exist with any companies/organizations whose products or services may be discussed in this article.

Table 1. Hospital Demographics*
*Values are given as No. (%) or mean ± SD, unless otherwise indicated.
Includes hospitals with an ICU and ≥ 50 hospital beds, and includes CALICO hospitals.

Table 2. Patient Characteristics*
*ED = emergency department; PACU = postanesthesia care unit; ICH = intracranial hemorrhage; CHF = congestive heart failure.

Table 3. Performance of the Models in Validation Sample (n = 4,552)*
*10 degrees of freedom; p values are in parentheses.

Figure 1. Calibration curves. Top, A: MPM0 III. Middle, B: SAPS II. Bottom, C: APACHE IV.

Table 4. Performance of the Models by System and Medical/Surgical*
*Values are given as No. (95% CI). O/E = observed/expected; GU = genitourinary; OD = overdose.

We thank the 35 hospitals that participated in the study and particularly recognize the effort of the data abstractors.

References

1. Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Washington, DC: National Academy Press; 2000.
2. McMillan TR, Hyzy RC. Bringing quality improvement into the intensive care unit. Crit Care Med. 2007;35:S59-S65.
3. Stockwell DC, Slonim AD. Quality and safety in the intensive care unit. J Intensive Care Med. 2006;21:199-210.
4. Joint Commission on Accreditation of Healthcare Organizations. National Hospital Quality Measures: ICU. Available at: www.jointcommission.org/PerformanceMeasurement/MeasureReserveLibrary/spect+manual+-+ICU.htm. Accessed May 7, 2008.
5. California Office of Statewide Health Planning and Development (OSHPD). Healthcare outcomes page: California Intensive Care Outcomes (CALICO). Available at: http://www.oshpd.ca.gov/HID/Products/PatDischargeData/ICUDataCALICO. Accessed May 7, 2008.
6. Garland A. Improving the ICU: part 1. Chest. 2005;127:2151-2164.
7. Lemeshow S, Teres D, Klar J, et al. Mortality probability models (MPM II) based on an international cohort of intensive care unit patients. JAMA. 1993;270:2478-2486.
8. Knaus WA, Wagner DP, Draper EA, et al. The APACHE III prognostic system: risk prediction of hospital mortality for critically ill hospitalized adults. Chest. 1991;100:1619-1636.
9. Le Gall JR, Lemeshow S, Saulnier F. A new simplified acute physiology score (SAPS II) based on a European/North American multicenter study. JAMA. 1993;270:2957-2963.
10. Knaus WA, Draper EA, Wagner DP, et al. APACHE II: a severity of disease classification system. Crit Care Med. 1985;13:818-829.
11. Higgins TL, Teres D, Copes WS, et al. Assessing contemporary intensive care unit outcome: an updated mortality probability admission model (MPM0-III). Crit Care Med. 2007;35:827-835.
12. Metnitz PG, Moreno RP, Almeida E, et al. SAPS 3: from evaluation of the patient to evaluation of the intensive care unit; part 1. Objectives, methods and cohort description. Intensive Care Med. 2005;31:1336-1344.
13. Moreno RP, Metnitz PG, Almeida E, et al. SAPS 3: from evaluation of the patient to evaluation of the intensive care unit; part 2. Development of a prognostic model for hospital mortality at ICU admission. Intensive Care Med. 2005;31:1345-1355.
14. Moreno R, Morais P. Outcome prediction in intensive care: results of a prospective, multicentre, Portuguese study. Intensive Care Med. 1997;23:177-186.
15. Livingston BM, MacKirdy FN, Howie JC, et al. Assessment of the performance of five intensive care scoring models within a large Scottish database. Crit Care Med. 2000;28:1820-1827.
16. Glance LG, Osler TM, Dick A. Rating the quality of intensive care units: is it a function of the intensive care unit scoring system? Crit Care Med. 2002;30:1976-1982.
17. Beck DH, Taylor BL, Millar B, et al. Prediction of outcome from intensive care: a prospective cohort study comparing acute physiology and chronic health evaluation II and III prognostic systems in a United Kingdom intensive care unit. Crit Care Med. 1997;25:9-15.
18. Harrison DA, Brady AR, Parry GJ, et al. Recalibration of risk prediction models in a large multicenter cohort of admissions to adult, general critical care units in the United Kingdom. Crit Care Med. 2006;34:1378-1388.
19. Wachter RM, Luce JM, Hearst N, et al. Decisions about resuscitation: inequities among patients with different diseases but similar prognoses. Ann Intern Med. 1989;111:525-532.
20. Azoulay É, Pochard F, Garrouste-Orgeas M, et al. Decisions to forgo life-sustaining therapy in ICU patients independently predict hospital death. Intensive Care Med. 2003;29:1895-1901.
21. Pritchard RS, Fisher ES, Teno JM, et al. Influence of patient preferences and local health system characteristics on the place of death: SUPPORT Investigators; Study to Understand Prognoses and Preferences for Risks and Outcomes of Treatment. J Am Geriatr Soc. 1998;46:1242-1250.
22. Pronovost PJ, Angus DC, Dorman T, et al. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288:2151-2162.
23. Wennberg JE, Fisher ES, Stukel TA, et al. Use of hospitals, physician visits, and hospice care during last six months of life among cohorts loyal to highly respected hospitals in the United States. BMJ. 2004;328:607.
24. Fuchs VR. Floridian exceptionalism. Health Aff. 2003.
25. Cerner Corporation. APACHE III public domain information. Available at: http://www.apache-web.com/public/pub_main.html. Accessed May 7, 2008.
26. Cerner Corporation. APACHE. Available at: www.cerner.com/public/MillenniumSolution.asp?id=3562.
27. StataCorp. Stata statistical software: release 9. College Station, TX: StataCorp; 2005.
28. Sirio CA, Shepardson LB, Rotondi AJ, et al. Community-wide assessment of intensive care outcomes using a physiologically based prognostic measure: implications for critical care delivery from Cleveland Health Quality Choice. Chest. 1999;115:793-801.
29. Mourouga P, Goldfrad C, Rowan KM. Does it fit? Is it good? Assessment of scoring systems. Curr Opin Crit Care. 2000;6:176-180.
30. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44:837-845.
31. Milstein A, Galvin RS, Delbanco SF, et al. Improving the safety of health care: the Leapfrog initiative. Eff Clin Pract. 2000;3:313-316.
32. Glance LG, Osler TM, Papadakos P. Effect of mortality rate on the performance of the acute physiology and chronic health evaluation II: a simulation study. Crit Care Med. 2000;28:3424-3428.
33. Tilford JM, Roberson PK, Lensing S, et al. Differences in pediatric ICU mortality risk over time. Crit Care Med. 1998;26:1737-1743.
