In the current study, Kline et al7 report a very low prevalence of disease (0.7%; 95% CI, 0.3 to 1.4%) among patients whom physicians deemed low risk by a similar unstructured estimate and whose subsequent Simplify test result was negative. However, one should question whether this “unstructured” estimate is truly unstructured. The Charlotte rule was previously developed by the study authors in the same setting as the current investigation.9 Further, the investigators have also evaluated the Canadian score with many of the same faculty participating in patient enrollment.11 Consequently, the physicians providing unstructured estimates at this institution were likely familiar with both validated decision rules before the study began. The observed physician performance with unstructured estimates may therefore represent, consciously or unconsciously, some combined use of the two rules. Anecdotally, we know that many emergency physicians do not routinely use any validated algorithm when evaluating possible pulmonary embolism because they are unfamiliar with such tools. It remains uncertain whether physicians unfamiliar with the Charlotte and Canadian scoring systems would perform as well as the physicians who enrolled patients in the current study. Until independent studies across various settings confirm the reliability of unstructured estimates of risk, patients will be better served if their physicians rely on the aforementioned Charlotte and Canadian scoring systems to determine pretest probability.