We do, however, disagree with two of the views expressed by Drs Norman and Cook. The first is that “students who heard the simulated mitral regurgitation would diagnose mitral regurgitation the next time they heard any murmur.” If this were the case, we would expect a similar finding for students trained in aortic stenosis. Yet we did not find this; in fact, most students who heard simulated aortic stenosis correctly diagnosed mitral regurgitation on a real patient. We also disagree with their inference that well-designed studies exist demonstrating that simulator training can improve performance on real patients. To support their opinion, they cite two studies, which, by implication, they consider well designed. These studies compared two interventions (ie, phonocardiosimulator vs real patients in the study by Aberg et al,3 and compact disc vs human patient simulator in the study by de Giovanni et al4). Both studies used a parallel-group design without a control group or a preintervention evaluation. In addition, the comparison interventions in both studies were embedded in a curriculum that also included didactic teaching. Both studies found no difference between the interventions in postintervention performance on real patients. Because of the study design, however, it is not possible to establish whether the two interventions were equally effective (ie, whether transfer occurred equally in both groups) or equally ineffective (whether transfer did not occur in either group), nor whether any learning gains were due to the didactic teaching or to the target interventions. Although our study has its own limitations, our design did include a control group that received the same amount of simulator training (but no exposure to a cardiac murmur), and our intervention comprised the simulator learning experience only.
Thus, we were able to demonstrate that transfer of learning did occur, and that this was associated with exposure to the target murmur on a simulator (mitral regurgitation).