In the Bhorade et al1 study, interobserver agreement for grade A and grade B readings was presented as an overall concordance rate, as well as rates stratified by treatment arm and clinical symptoms. According to the data in the article's tables,1 the overall concordance rate ranged from 62% to 91%; however, the measure of interobserver agreement the authors provided is ambiguous and warrants further analysis. We therefore performed a κ analysis to reevaluate the concordance between the site pathologists' and central pathologists' interpretations of acute rejection, based on the data presented in the article's tables. Cohen κ coefficients range from 0 to 1: κ ≥ 0.75 represents excellent agreement, κ < 0.40 represents poor agreement, and values of 0.40 to 0.75 represent fair to moderate agreement. The McNemar-Bowker test was used to assess systematic diagnostic differences between the site and central pathologists. After statistical analysis of the data in Tables 2 and 3 of the Bhorade et al1 article, we found that site pathologists were more likely to assign a higher grade of acute rejection for both grade A and grade B readings (P < .001 and P = .002, respectively), and the κ scores indicated poor interobserver agreement for both grade A and grade B readings (κ = 0.276 and 0.195, respectively).
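For readers who wish to reproduce this kind of analysis, the two statistics can be computed directly from a square contingency table of paired gradings. The sketch below uses invented counts (not the data from Tables 2 and 3 of the article) purely to illustrate the calculations: Cohen κ corrects observed agreement for chance agreement, and the Bowker statistic tests whether disagreements are symmetric between the two raters.

```python
# Hypothetical sketch: Cohen's kappa and the Bowker symmetry statistic
# for a square k x k contingency table of paired pathology gradings.
# All counts below are invented for illustration only.

def cohen_kappa(table):
    """Cohen's kappa for a square contingency table (list of lists of counts)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n               # observed agreement
    row = [sum(table[i]) for i in range(k)]                    # rater 1 marginals
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals
    pe = sum(row[i] * col[i] for i in range(k)) / n ** 2       # chance agreement
    return (po - pe) / (1 - pe)

def bowker_statistic(table):
    """Bowker's chi-square statistic for symmetry, with k(k-1)/2 df.
    A large value indicates one rater systematically grades higher."""
    k = len(table)
    stat = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            d = table[i][j] + table[j][i]
            if d:
                stat += (table[i][j] - table[j][i]) ** 2 / d
    return stat

# Invented example: site pathologist (rows) vs central pathologist
# (columns) across three rejection grades.
counts = [[40, 10, 2],
          [8, 20, 6],
          [2, 6, 6]]
print(round(cohen_kappa(counts), 3))       # chance-corrected agreement
print(round(bowker_statistic(counts), 3))  # asymmetry statistic (df = 3 here)
```

The resulting statistic is referred to a chi-square distribution with k(k-1)/2 degrees of freedom to obtain the P value; with only two grade categories, the Bowker test reduces to the familiar McNemar test.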