Editorials

Assessing Quality of Care Using In-Hospital Mortality: Does It Yield Informed Choices?

Edward D. Sivak, MD; Mary A. M. Rogers, PhD
Author and Funding Information

Affiliations: Syracuse, New York. Dr. Sivak is Chief, Division of Pulmonary and Critical Care, State University of New York Health Science Center. Dr. Rogers is Assistant Professor of Epidemiology, Department of Medicine, State University of New York Health Science Center.

Chest. 1999;115(3):613-614. doi:10.1378/chest.115.3.613

The advent of federal initiatives to regulate the cost of health care has spawned efforts to measure the quality of such care. The implementation of Diagnosis-Related Groups, the Omnibus Budget Reconciliation Act, and the use of outcomes research represent a process that has evolved from initial attempts to reduce cost to evaluations of the effectiveness of various health-care procedures and programs. The cooperative effort among physicians, providers, and payers that became the Cleveland Health Quality Choice (CHQC) project has been part of this evolution. Its purpose is to identify health-care institutions that provide quality health care so that insurers and patients can make informed decisions about medical care. Community-wide publication of evaluation results is an integral part of this effort. As such, the study by Sirio and colleagues in this issue of CHEST (see page 793) presents a notable analysis in this evolution.

The CHQC study, as presented in this issue, addressed the quality of ICU care by assessing one main patient outcome, in-house mortality. Although the authors’ assessment requires some knowledge of statistical procedures often used in outcomes research, everyone interested in the quality of ICU care will find something to learn. The CHQC study compared hospital mortality across institutions, using a retrospective cohort design to compare actual with predicted mortality after adjustment for severity of illness. A standardized mortality ratio (SMR) significantly <1 (ie, actual mortality significantly less than predicted) demonstrated superior care; conversely, an SMR significantly >1 demonstrated a need for improvement. Most interestingly, the authors discovered some trends over time. SMRs and the variation in SMRs declined from 1991 to 1995, which suggests a possible improvement in the quality and consistency of care. However, mean hospital length of stay also declined, whereas the number of patients discharged to skilled nursing facilities increased. This decline in mortality was likely due to a shift in patient care from the hospital to nursing home facilities rather than to procedural improvements in the ICU. It is also conceivable that some of the change may be attributed to the Hawthorne effect, that is, behavior that is monitored may enhance performance; with time, this effect may wane.
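For readers less familiar with the statistic, the following is a minimal sketch of how an SMR and an approximate 99% confidence interval might be computed from patient-level data, assuming each patient carries a model-predicted probability of hospital death (as with APACHE III). The function name and the sample numbers are illustrative only and are not taken from the CHQC analysis.

    import math

    def standardized_mortality_ratio(died, predicted_risk, z=2.576):
        """Illustrative SMR: observed deaths over expected deaths, with an
        approximate confidence interval (z = 2.576 for roughly 99% coverage).

        died           -- list of 0/1 hospital-death indicators
        predicted_risk -- list of model-predicted probabilities of death
        """
        observed = sum(died)
        expected = sum(predicted_risk)
        smr = observed / expected
        # Approximate CI treating the observed death count as Poisson-distributed
        se_log = 1.0 / math.sqrt(observed)
        return smr, (smr * math.exp(-z * se_log), smr * math.exp(z * se_log))

    # Hypothetical hospital: 40 observed deaths vs 50 expected from the model
    deaths = [1] * 40 + [0] * 360
    risks = [0.125] * 400  # predicted risks summing to 50 expected deaths
    smr, ci = standardized_mortality_ratio(deaths, risks)
    print(f"SMR = {smr:.2f}, 99% CI {ci[0]:.2f} to {ci[1]:.2f}")

An SMR whose entire interval lies below 1 would correspond to actual mortality significantly less than predicted, as described above.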

Severity adjustment was necessary for a fair comparison of mortality rates across hospitals. Severity of illness is greater and mortality higher in tertiary referral centers as compared with secondary or primary community hospitals. Severity of illness was measured with a validated prognostic indicator, APACHE III. This study reaffirms the use of APACHE III scores in predicting death; the authors found this tool to have excellent discriminatory ability. This provides reassurance to clinicians that physiologic abnormalities account for a large portion of the risk of hospital death associated with critical illness. For the future, it would be helpful to identify factors related to provider care and patient characteristics that may further explain variation in mortality.
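By way of illustration, “discriminatory ability” is commonly summarized as the area under the receiver operating characteristic curve, that is, how well the predicted risks separate patients who died from those who survived. The sketch below uses invented probabilities and outcomes and the open-source scikit-learn package; it is not drawn from the APACHE III validation itself.

    from sklearn.metrics import roc_auc_score

    # Hypothetical predicted probabilities of death and observed outcomes (1 = died)
    predicted = [0.05, 0.20, 0.60, 0.85, 0.10, 0.40, 0.90, 0.30]
    observed = [0, 0, 1, 1, 0, 0, 1, 1]

    auc = roc_auc_score(observed, predicted)
    print(f"Area under the ROC curve: {auc:.2f}")  # 1.0 = perfect discrimination, 0.5 = chance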

It is not surprising that the predicted mortality from the national sample was somewhat different from the actual mortality in the CHQC sample. Any model derived from a specific database is expected to perform better in that database than an externally derived model would. It remains to be demonstrated whether this locally derived model, when used again with future data, will perform as well. Moreover, the in-hospital death rate in the national sample was 16.5%, somewhat higher than the 11.3% seen in the Cleveland hospitals. The predictive capabilities of even a very accurate method can be diminished as the prevalence of the outcome decreases. In other words, if death associated with critical illness declines, the predictive ability of any model, even a national normative model, may be somewhat diminished. Thus, the assertion that the locally derived model was superior to that from the national sample may be a bit premature.
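The point about prevalence can be illustrated with Bayes’ rule: a predictor of fixed sensitivity and specificity yields a lower positive predictive value as the outcome becomes rarer. In the sketch below, the 90% sensitivity and specificity are invented for illustration; only the two death rates (16.5% and 11.3%) come from the article.

    def positive_predictive_value(sensitivity, specificity, prevalence):
        """PPV via Bayes' rule for a predictor of fixed accuracy."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # National vs Cleveland in-hospital death rates cited in the article
    for prevalence in (0.165, 0.113):
        ppv = positive_predictive_value(0.90, 0.90, prevalence)
        print(f"prevalence {prevalence:.1%}: positive predictive value {ppv:.1%}")

With these assumed accuracies, the positive predictive value falls from roughly 64% to 53% as prevalence drops from 16.5% to 11.3%, even though the model itself is unchanged.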

The quality of care across institutions, as approximated by the SMR, was rather consistent, with considerable overlap in the 99% confidence intervals among the various hospitals. Essentially, this means that it is difficult to distinguish the best hospitals from the average hospitals, or the average from the below-average hospitals, on the basis of mortality experience. At four institutions, however, actual mortality differed significantly from predicted mortality; one large teaching hospital showed a significant reduction in mortality, whereas three hospitals had significant increases in mortality in comparison with what would be expected after severity adjustment. Although this may suggest extremes in quality of care at the community level, it may instead reflect referral practices to skilled nursing facilities or to other hospitals. It would be informative to inspect the proportion of patients referred from each hospital. Just how patients and their families can use the information derived from the CHQC project is open to question. If comparisons among institutions are confounded by referral practices, can decisions regarding ICU quality yield informed choices?

This study is a stepping stone in the use of prediction tools to assess quality of care. Mortality is clearly a crude indicator of quality, and it is hoped that, in the future, other patient-derived measures of quality of life will be assessed as well. Vital status is perhaps the simplest and most complete outcome to obtain on all patients. Even then, the methodology can be fine-tuned. As more patients are transferred to nursing facilities, the length of life from ICU admission to death (wherever that death may occur) may be a more appropriate outcome. Thus, methods that use time to death, such as survival analysis and proportional hazards models, may become necessary.
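As one possible direction, a time-to-death analysis could be fit with a proportional hazards model. The sketch below assumes the open-source lifelines package and uses invented patient-level data and column names; it is meant only to indicate the form such an analysis might take.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical data: days from ICU admission to death or censoring,
    # an event flag (1 = died, 0 = alive at last follow-up), and a severity score
    df = pd.DataFrame({
        "days_to_death": [12, 30, 7, 45, 30, 21, 3, 60],
        "died": [1, 0, 1, 0, 1, 1, 1, 0],
        "apache_iii": [70, 40, 95, 35, 80, 65, 110, 30],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="days_to_death", event_col="died")
    cph.print_summary()  # reports the hazard ratio associated with the severity score

Unlike a comparison of crude in-hospital death rates, such a model counts deaths occurring after transfer, provided vital status is followed beyond discharge.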

The greatest lesson to be learned from this study is that players quickly adapt to the rules of the game. Because hospitals know that they are being evaluated on the numbers of patients who die in-house, severely ill patients are being transferred to other facilities. It is significant that CHQC identified these trends. Clearly, an assessment of quality of ICU care across institutions should use other prospective measures. These may include assessments that address the quality of remaining life and not just the rate of death.

