Correspondence

Response

Sangeeta M. Bhorade, MD, FCCP; Chuanhong Liao, MS
Author and Funding Information

From the University of Chicago Medical Center.

Correspondence to: Sangeeta M. Bhorade, MD, FCCP, Associate Professor of Medicine, Pulmonary and Critical Care Medicine, University of Chicago Medical Center, 5841 S Maryland Ave, MC 099, Chicago, IL 60637; e-mail: sbhorade@medicine.bsd.uchicago.edu


Funding/Support: Astellas Pharma US, Inc funded the initial multicenter study from which the data in this manuscript were obtained.

Financial/nonfinancial disclosures: The authors have reported to CHEST that no potential conflicts of interest exist with any companies/organizations whose products or services may be discussed in this article.

Reproduction of this article is prohibited without written permission from the American College of Chest Physicians.


Chest. 2014;145(2):418. doi:10.1378/chest.13-2317
To the Editor:

We thank Dr Mao and colleagues for their interest in our recent article.1 We appreciate that their analysis and comments are in agreement with our analysis and the conclusions of our article. However, we would like to clarify the rationale for our analysis in comparison with their analysis of our data.

  • Dr Mao and colleagues comment that the results of our analysis were “ambiguous,” and they performed a “deeper analysis” of the data. We chose the κ statistic (two categories) or weighted κ (three ordinal categories) as our reliability measure and evaluated each time point separately (as seen in Tables 4, 5, 8, 9, 12, 13).1,2 This analysis indicated that agreement was generally higher at 6 weeks than at later time points. Dr Mao and colleagues pooled data across time points and, therefore, did not account for this time differential (Tables 1, 2).

  • We did not utilize the McNemar-Bowker test because it is primarily used to test marginal homogeneity, not interrater agreement.3,4 Their analyses do, however, bring out systematic differences between site and central pathologists. It should be noted, though, that pooling over time points implicitly assumes that the correlation among longitudinal, repeated measurements is negligible.

  • Dr Mao and colleagues suggest incorporating clinical risk factors of acute rejection into the statistical analysis for determining rejection. Although this is interesting, a scoring system for acute rejection based on clinical parameters was beyond the scope of this article. Future studies should evaluate this further.

  • We disagree with the statement that “repeated blinded readings of one slide by the same pathologist as a reliable policy is a promising approach to decrease the interobserver disagreement.” It has clearly been shown that clinical information enhances the biopsy readings by the pathologist. The purpose of the article is to understand how we can make these readings more reliable from a clinical perspective.

  • Dr Mao and colleagues comment that the lung transplant rejection schema should be based upon the TNM classification for non-small cell lung cancer. Although the TNM classification system for non-small cell lung cancer is an established system, it does not apply to transplant biopsies or the objective of this article. These comments are irrelevant to the current article.
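The distinction drawn above between the two statistics can be made concrete with a short sketch: weighted κ quantifies chance-corrected agreement between two raters over ordered categories, whereas the McNemar-Bowker statistic tests symmetry (marginal homogeneity) of the paired cross-tabulation. This is an illustrative, from-scratch implementation; the function names, the A0/A1/A2 grade labels, and the toy ratings are our own assumptions, not data from the study.

```python
def weighted_kappa(rater_a, rater_b, categories, weight="linear"):
    """Cohen's weighted kappa for two raters over ordered categories.

    weight="linear" uses |i - j| disagreement weights; "quadratic" uses
    (i - j)**2. With two categories and linear weights this reduces to
    the unweighted kappa statistic.
    """
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    # Observed joint proportions of the two raters' gradings.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[index[a]][index[b]] += 1.0 / n
    # Marginal proportions for each rater.
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        return abs(i - j) if weight == "linear" else (i - j) ** 2

    # Kappa = 1 - (weighted observed disagreement / weighted chance disagreement).
    disagree_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    disagree_exp = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - disagree_obs / disagree_exp


def bowker_statistic(table):
    """McNemar-Bowker chi-square statistic for a square k x k table.

    Tests symmetry/marginal homogeneity -- a systematic shift between the
    two raters' marginal distributions -- NOT agreement. Compare the
    statistic to a chi-square distribution with k*(k-1)/2 df.
    """
    k = len(table)
    stat = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            d = table[i][j] + table[j][i]
            if d > 0:
                stat += (table[i][j] - table[j][i]) ** 2 / d
    df = k * (k - 1) // 2
    return stat, df


if __name__ == "__main__":
    grades = ["A0", "A1", "A2"]  # hypothetical ordinal rejection grades
    site = ["A0", "A0", "A1", "A1", "A2", "A2"]
    central = ["A0", "A1", "A1", "A1", "A2", "A2"]
    print(round(weighted_kappa(site, central, grades), 3))

    # Hypothetical 3x3 site-vs-central cross-tabulation of counts.
    stat, df = bowker_statistic([[10, 2, 1], [5, 12, 3], [4, 6, 20]])
    print(round(stat, 3), df)
```

Note that a table can show substantial symmetry (a nonsignificant Bowker statistic) while κ is low, and vice versa, which is why the two statistics answer different questions.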

Acknowledgments

Role of sponsors: Astellas Pharma US, Inc provided funding for the initial multicenter study from which the data in the manuscript were obtained. The sponsor had no input into the study design, data analysis, data collection, or the conduct of the study.

References

1. Bhorade SM, Husain AN, Liao C, et al. Interobserver variability in grading transbronchial lung biopsy specimens after lung transplantation. Chest. 2013;143(6):1717-1724.
2. Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968;70(4):213-220.
3. Fleiss JL. Statistical Methods for Rates and Proportions. 2nd ed. New York, NY: Wiley; 1981.
4. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37(5):360-363.
 
