Original Research: SIMULATION-BASED TRAINING

Simulation-Based Objective Assessment Discerns Clinical Proficiency in Central Line Placement: A Construct Validation

Yue Dong, MD; Harpreet S. Suri, MBBS; David A. Cook, MD, MHPE; Kianoush B. Kashani, MD; John J. Mullon, MD; Felicity T. Enders, PhD; Orit Rubin, PhD; Amitai Ziv, MD; William F. Dunn, MD, FCCP
Author and Funding Information

From the Mayo Clinic Multidisciplinary Simulation Center (Drs Dong, Suri, Cook, Kashani, Mullon, and Dunn), the College of Medicine (Drs Dong, Suri, Cook, Kashani, Mullon, Ziv, and Dunn), and the Department of Health Sciences Research (Dr Enders), Mayo Clinic; the Office of Education Research (Dr Cook), Mayo Medical School, Rochester, MN; and the Israel Center for Medical Simulation (Drs Rubin and Ziv), Chaim Sheba Medical Center, Tel Hashomer, Israel.

Correspondence to: William F. Dunn, MD, FCCP, Division of Pulmonary and Critical Care Medicine, College of Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN 55905; e-mail: dunn.william@mayo.edu


Funding/Support: Supported by the Mayo Foundation for Medical Education and Research and The Y&S Nazarian Family Foundation.

Reproduction of this article is prohibited without written permission from the American College of Chest Physicians (www.chestpubs.org/site/misc/reprints.xhtml).

For editorial comment see page 1009


© 2010 American College of Chest Physicians


Chest. 2010;137(5):1050-1056. doi:10.1378/chest.09-1451

Background:  Central venous catheterization (CVC) is associated with patient risks known to be inversely related to clinician experience. We developed and evaluated a performance assessment tool for use in a simulation-based central line workshop. We hypothesized that instrument scores would discriminate between less experienced and more experienced clinicians.

Methods:  Participants included trainees enrolled in an institutionally mandated CVC workshop and a convenience sample of faculty attending physicians. The workshop integrated several experiential learning techniques, including practice on cadavers and part-task trainers. A group of clinical and education experts developed a 15-point CVC Proficiency Scale using national and institutional guidelines. After the workshop, participants completed a certification exercise in which they independently performed a CVC in a part-task trainer. Two authors reviewed videotapes of the certification exercise to rate performance using the CVC Proficiency Scale. Participants were grouped by self-reported CVC experience.

Results:  One hundred five participants (92 trainees and 13 attending physicians) were included. Interrater reliability on a subset of 40 videos was 0.71, and Cronbach α was 0.81. The CVC Proficiency Scale composite score varied significantly by experience: mean 85%, median 87% (range, 47%-100%) for low experience (0-1 CVCs in the last 2 years, n = 27); mean 88%, median 87% (range, 60%-100%) for moderate experience (2-49 CVCs, n = 62); and mean 94%, median 93% (range, 73%-100%) for high experience (> 49 CVCs, n = 16) (P = .02, comparing low and high experience).

Conclusions:  Evidence from multiple sources, including appropriate content, high interrater and internal consistency reliability, and confirmation of hypothesized relations to other variables, supports the validity of using scores from this 15-item scale for assessing trainee proficiency following a central line workshop.


There are an estimated 5 million central venous catheters placed in the United States annually.1 Unfortunately, serious and life-threatening complications occur in 5% to 26% of these procedures,2,3 and complication rates are inversely related to the practitioner's level of clinical experience.4,5 Reducing this risk to patients will require educational interventions that improve practitioner skills, and assessments that identify those whose skills need improvement or remediation. Simulation may play a role in addressing these needs.

Simulation-based training has been found to improve physician trainee competence and/or facilitate demonstration of proficiency in advanced cardiac life support,6 endotracheal intubation,7 team training,8-11 bronchoscopy,12 otorhinolaryngologic surgery,13 laparoscopy,14 and carotid angiography.15 Although current simulation training and assessment methods may not fully replicate the performance of the procedure on patients, they do appear to accelerate the learning curve and, perhaps most importantly, provide the opportunity to rehearse and demonstrate proper implementation of the devices, sequence of events, humanistic/behavioral skill sets, and safety procedures required.

Few studies have used simulation to facilitate learning how to insert central venous catheters.16-18 The only study describing a simulation-based assessment for measuring trainees’ central venous catheterization (CVC) competence focused on development of the items and pass score but did not report other evidence to support score validity.19 The absence of validated assessment tools challenges educators seeking to determine those with sufficient proficiency to advance to the next level of practice (ie, performance on patients).

The purpose of this study was to develop and evaluate an instrument, the CVC Proficiency Scale, designed to assess the ability of resident physicians to perform the technical and safety aspects of CVC. In the context of a simulation-based central line workshop, we collected evidence to support the validity of instrument scores, including content evidence, reliability, and discrimination of trainee experience. We hypothesized that physicians with more CVC experience would have higher CVC Proficiency Scale scores.

Study Design

We conducted a prospective observational validity study. The current framework for validity recognizes five sources of validity evidence: content (information regarding the instrument development), internal structure (typically score reliability and factor analysis), relations to other variables (associations between instrument scores and other variables such as training, previous experience, or scores from another instrument [formerly known as predictive and criterion validity]), response process (respondent characteristics or actions that influence scores, such as rater training), and consequences (the use of scores in practice, including determination of passing score cut points).20-22 We prospectively collected content, internal structure, and relations-to-other-variables evidence for CVC Proficiency Scale scores, as detailed below.

Participants

Residents and fellows in the Departments of Anesthesiology, Internal Medicine, Emergency Medicine, and General Surgery (including subspecialty training programs) in the Mayo School of Graduate Medical Education participated in the Central Line Workshop within the experiential training components of their respective programs. Programs were invited to enroll in the workshop based on the presence of their trainees in intensive care environments at Mayo Clinic with clinical responsibilities including CVC. We also invited a convenience sample of attending faculty to participate. All trainees participated in the workshop between May 2008 and June 2009. This study was approved by the Institutional Review Board, and all participants provided consent. One hundred five subjects (67 residents, 25 fellows, and 13 attending physicians from the Anesthesiology, Surgery, Medicine, and Emergency Medicine Departments) consented to participate (Table 1).

Table 1 — Level of Training of Workshop Participants

CVC = central venous catheterization.

Central Line Workshop

The Mayo Clinic Multidisciplinary Simulation Center, with technical and statistical assistance from the Israel Center for Medical Simulation, designed and implemented a simulation-based central line workshop for training clinicians to institutional standards for the placement of central lines in ICU environments. By integrating proven mechanisms of risk reduction in CVC (eg, use of ultrasound23 and maximal barrier precautions24) with best-practice institutional and Institute for Healthcare Improvement recommendations,25 the workshop aimed to accelerate the learning curve of novice performers of CVC. The workshop was conducted in the Mayo Clinic Multidisciplinary Simulation Center and the Procedural Skills Laboratory of the Mayo Clinic Department of Anatomy. This course (Fig 1) used experiential learning and assessment techniques designed to improve and standardize learners' performance to safety-defined proficiency standards set by Mayo experts in ICU education and clinical practice.

Figure 1. Workflow of the Central Line Workshop. Learners completed workshop prerequisites to achieve defined cognitive mastery standards prior to participation in the experiential (simulation) component of the Central Line Workshop. Supported by discrete audio-video documentation, the central venous catheterization (CVC) Proficiency Scale assessment was conducted as part of the Certification Station.

To permit the workshop to focus on skill development, all attendees completed several activities in advance of the workshop to develop baseline cognitive knowledge. These included: (1) conducting an online literature review; (2) reviewing 30 video-based clinical case presentations with radiographic correlates depicting complications of CVC placement; (3) viewing video depictions of key procedural components produced by recognized local experts; and (4) completing an open-book online test of knowledge of CVC risks, indications, and safety precautions, with a minimum passing score of 90%.

After a brief review of learning goals and course logistics, participants rotated through several experiential stations. At station 1, in an anatomy laboratory, learners reviewed procedure-specific gross anatomy using skeletons and prosected embalmed cadavers. Central venous cannulation and universal precaution techniques were reviewed on an unembalmed cadaver prepared specifically for this exercise. The arterial and venous systems were pressurized with colored fluids, so that puncture of the vein (internal jugular [IJ] or subclavian [SC]) or artery (common carotid or SC) allowed the participant to aspirate blue or red fluid, respectively. At stations 2 and 3, in the simulation center, dynamic ultrasound technique was reviewed and hands-on practice was undertaken on a central venous access trainer (Blue Phantom; Advanced Medical Technologies; Kirkland, WA). Participants also practiced transduction, zeroing, troubleshooting, ultrasonography/image acquisition, and gowning/gloving/draping techniques.

After completing the instructional portion of the workshop, participants proceeded to a scenario-based “Certification Station” in which they performed IJ and SC catheterization on a central venous access trainer; performance was graded using the CVC Proficiency Scale (described in detail below). Learners who failed to achieve a passing score were given specific feedback and repeated the station until the predefined proficiency standards were attained. Each Certification Station procedure was individually recorded using three cameras, each with a different view. Camera views were integrated via digital encoding into a four-quadrant display; a fourth time-synchronized dedicated image depicted the ultrasound imaging. Following completion of the Certification Station, faculty reviewed performance with each individual, including the reflective and remediation plan as required. A question-and-answer session concluded the workshop.

Comparison Group Protocol

To permit comparison of CVC Proficiency Scale scores across various levels of experience, a convenience sample of Mayo faculty also participated in the workshop. They performed a CVC in the same Certification Station, with the same recording procedure.

CVC Proficiency Scale

We developed the CVC Proficiency Scale based on the Institute for Healthcare Improvement Central Line Bundle25 and recommendations from a panel of simulation-based education and practice experts at our institution. Group consensus was reached regarding those items deemed important to clinical practice and capable of being trained and tested in the simulation center environment. This resulted in 15 dichotomous CVC Proficiency Scale items (Table 2). Composite scores were calculated as the percentage of items performed correctly. We developed and iteratively revised operational definitions of positive CVC Proficiency Scale parameters. Criteria were accepted upon reviews by the Mayo Clinic Simulation Standards Ad Hoc Committee, the Mayo Clinic Critical Care Education Committee, and the Mayo Clinic Institutional Critical Care Committee.
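Because each of the 15 items is scored dichotomously and the composite is simply the percentage of items performed correctly, the scoring rule can be sketched in a few lines. This is an illustrative sketch, not the study's code; the function name and input layout are our assumptions.

```python
# Illustrative sketch of the CVC Proficiency Scale scoring rule
# (hypothetical names, not the study's code): 15 dichotomous items,
# composite score = percentage of items performed correctly.

def composite_score(item_ratings, n_items=15):
    """Return the composite score (0-100) for one participant.

    item_ratings: sequence of dichotomous ratings, one per checklist
    item (1 = performed correctly, 0 = not performed).
    """
    if len(item_ratings) != n_items:
        raise ValueError("expected a rating for each checklist item")
    if any(r not in (0, 1) for r in item_ratings):
        raise ValueError("each item must be rated 0 or 1")
    return 100.0 * sum(item_ratings) / n_items

# Example: a participant who performs 13 of the 15 steps correctly
print(round(composite_score([1] * 13 + [0] * 2)))  # 87
```

An unweighted percentage like this treats every checklist item as equally important, which matches the scale as described; weighted alternatives are discussed by the authors as future work.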

Table 2 — Item-Level Performance During Certification Station

Each participant was rated as performing/not performing each action correctly. Correct performance of each step contributed one point to the composite score. ID = identification; IJ = internal jugular; SC = subclavian. See Table 1 for expansion of other abbreviations.

a κ not defined for this item because rater codes did not form a 2 × 2 square; raters agreed on 38 of 40 observations (raw agreement, 95%) for this item.

b Assessment stopped early (prior to SC attempt) for two observations; thus, performance was assessed for only 38 trainees, and κ was calculated using this subset of 38 observations for this item.

Other Measurements

We also recorded the total number of IJ or SC venipuncture attempts (as defined by forward/backward motion after needle insertion) and number of skin entries. We measured procedure time from the initial greeting of the “patient” until successful IJ or SC catheterization. Raters had no knowledge of participants’ experience in CVC prior to the workshop. Before the start of the workshop, each participant reported the number of CVCs he or she had performed within the preceding 2 years. We also collected data regarding previous simulation training experience and years of clinical practice, and general demographic information.

Data Analysis

We grouped participants according to self-reported experience in placing CVCs: low (0-1 CVC), moderate (2-49 CVCs), and high (> 49 CVCs). Composite scores, procedure time, and number of venipuncture attempts were compared among groups using the Kruskal-Wallis test, followed by pairwise Wilcoxon rank sum tests when the overall test was significant. A two-sided type I error rate of 5% was used to determine statistical significance. For analysis of secondary outcomes (time and attempts; six analyses), we adjusted the α level using the Bonferroni method (revised α = .008). We determined the internal consistency of composite scores using Cronbach α and evaluated interrater agreement for individual items and for the composite score using κ. Statistical analyses were performed using JMP 7.0 and SAS 9.1 (SAS Institute Inc; Cary, NC).
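As a concrete illustration of the internal-consistency statistic, Cronbach α for dichotomous checklist data can be computed from the item variances and the variance of the total scores. The sketch below uses only the Python standard library; the analyses in the study were performed in JMP/SAS, so the function name and data layout here are our assumptions.

```python
# Hedged sketch of Cronbach's alpha for checklist data (illustrative,
# not the study's SAS/JMP code). `scores` holds one row per
# participant and one column per dichotomous item.
from statistics import pvariance

def cronbach_alpha(scores):
    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # transpose: one tuple per item
    item_var_sum = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Toy example with 4 participants and 3 items:
data = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 0, 0]]
print(cronbach_alpha(data))  # 0.75

# Bonferroni-adjusted threshold for the six secondary analyses:
print(round(0.05 / 6, 3))  # 0.008
```

The Bonferroni line shows where the paper's revised α = .008 comes from: the 5% type I error rate divided across the six secondary comparisons.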

Reliability

Performance was graded independently by one of two investigators (Y. D. or H. S.) using the CVC Proficiency Scale. Forty of the 105 videos were reviewed by both raters. Final composite scores demonstrated high internal consistency (Cronbach α = 0.81). Interrater agreement was substantial26 for composite scores (κ = 0.71) and for nearly all individual items (Table 2).
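For the interrater statistic, unweighted Cohen κ compares the raters' observed agreement with the agreement expected by chance from each rater's marginal rating frequencies. A minimal sketch, with illustrative names (not the study's code); note that κ is undefined when expected agreement equals 1, the degenerate situation akin to the "not a 2 × 2 square" case flagged in the Table 2 footnote:

```python
# Minimal sketch of unweighted Cohen's kappa for two raters scoring
# the same set of videos (illustrative names, not the study's code).
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    # observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement from each rater's marginal frequencies
    m1, m2 = Counter(rater1), Counter(rater2)
    p_exp = sum((m1[c] / n) * (m2[c] / n) for c in set(m1) | set(m2))
    if p_exp == 1.0:
        raise ValueError("kappa undefined: no variation in ratings")
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters agreeing on 4 of 5 pass/fail ratings:
print(round(cohens_kappa([1, 1, 1, 0, 0], [1, 1, 0, 0, 0]), 2))  # 0.62
```

By the Landis and Koch benchmarks cited in the text,26 values between 0.61 and 0.80 (such as the reported κ = 0.71) represent substantial agreement.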

Relations to Other Variables

To explore validity evidence of relations to other variables, we compared CVC Proficiency Scale scores across experience levels. As shown in Figure 2, composite scores varied according to trainee experience, with means, medians (range) of 85%, 87% (47%-100%) for low; 88%, 87% (60%-100%) for moderate; and 94%, 93% (73%-100%) for high experience (Kruskal-Wallis P = .02). Pairwise comparisons among subgroups revealed significant differences between low and high experience (P = .02) and between moderate and high experience (P = .006). Performance data for the individual items are shown in Table 2. Participants with high experience showed performance superior to those with low or moderate experience for nearly all items. These differences were most pronounced for maximal barrier precautions, appropriate hand hygiene, securing the catheter, and SC venipuncture.

Figure 2. Composite score stratifies groups. Box-and-whisker plot of composite score for trainees with low, moderate, and high experience with central line catheterization in the preceding 2 years. The box and whiskers demonstrate the median and 25th and 75th percentiles of the CVC Proficiency Scale composite score. Initial analysis using the Kruskal-Wallis test indicated a significant difference among groups (P = .02). P values indicate the results of follow-up pairwise Wilcoxon rank sum comparisons. CL = central line. See Figure 1 legend for expansion of other abbreviation.

In secondary analysis, we also compared the time and number of attempts required for successful vein cannulation. Times improved progressively for each level of experience (Table 3). These differences were statistically significant for all comparisons among groups. However, there were no statistically significant differences among groups when comparing the total number of IJ or SC venipuncture attempts and number of skin entries.

Table 3 — Comparison of Procedure Time and Number of Procedure Attempts by Procedural Experience

Data are given as median (range). Time was measured from when the physician first greeted the “patient” until successful cannulation. P values represent results of Kruskal-Wallis tests across all experience groups. Pairwise comparisons were performed only for outcomes with significant Kruskal-Wallis tests. See Tables 1 and 2 for expansion of abbreviations.

a Pairwise comparisons: low vs moderate, P = .01; moderate vs high, P = .003; low vs high, P = .0006.

b Pairwise comparisons: low vs moderate, P = .01; moderate vs high, P = .002; low vs high, P < .0001.

Discussion

Our study confirms that simulation-based assessment in a nonclinical, zero-risk environment can discern different levels of skill among clinicians placing CVCs, and that such proficiency stratification via simulation methodologies is feasible. This study provides evidence supporting the use of the CVC Proficiency Scale to assess trainee competence in the context of a central line workshop.

The primary goal of this study was to develop and test a checklist-based tool for assessing the proficiency of clinicians placing CVCs. The composite score discriminated physicians' performance according to clinical skill levels defined by experience. Time to complete IJ catheterization also varied significantly by experience.

The traditional (subjective) assessment of resident performance may be inaccurate in the absence of an objective metric.27 However, the assessment of novice trainees in clinical CVC skills may expose patients to unnecessary risks. Studies show higher complication rates associated with relative lack of experience in CVC placement.1,4,5 Simulation presents a zero-risk environment for both training and assessment.28-30 A recent study detailed the development of the items and pass score for a simulation-based assessment of CVC proficiency.19 The present investigation proposes a new scale, with similarly rigorous item development, that includes items related to the increasingly common practice of ultrasound-guided CVC placement. We also improve on that study by demonstrating high score reliability and the ability to discriminate levels of trainee experience in assessing CVC proficiency. This study adds to the evidence suggesting that simulation-based process improvement mechanisms can enhance patient safety and quality of care.31-34

Our results with the checklist-based assessment tool reveal varying levels of clinician performance, consistent with a traditional performance learning curve. The data accord with studies noting a correlation between the number of procedures performed and a lower incidence of preventable errors complicating other procedures.4,5 Thus, as has been shown for other procedures, age or experience alone is an insufficient parameter for assessing skill proficiency.35,36 This validated measurement tool can facilitate future high-stakes performance assessment in CVC placement. The need for safe assessment of technical skills is universal. When risk depends on the level of training, the traditional paradigm of practicing on the patient imposes peril. Validated systems of simulation-based assessment offer promise and inherent benefits in this context. Figure 3 represents a stylized model of an individual's performance learning curve over time. When applied with adequate scientific rigor, the simulation-enhanced learning environment offers twofold benefits: it provides a risk-free training environment, and it may both accelerate the learning curve and allow patients to be introduced at a safer level of performance.

Figure 3. Skill acquisition curve: impact of zero-risk training. This plot represents skill acquisition vs time for a learner of CVC placement (or other clinical procedural skills). Simulation-enhanced training provides opportunity for skill acquisition. Plot a represents the natural progression of skill for an individual learning through traditional methods (on patients). Plot b (dotted line) represents the acceleration of learning that would be expected through simulation-based training, such as the described Central Line Workshop. Metric assessment (eg, composite score) is depicted on the y-axis. Point A represents initiation of training for a novice learner. Following adequate training, the plateau phase is reached (B). Skill decay occurs unless skills are reinforced by additional experience or realistic simulation (small arrows, *). A primary goal of the composite score is to assist in locating individuals on this conceptual curve while in the zero-risk environment, facilitating each learner's rise above the level of the “safety standard.” Additional bedside training and assessment are subsequently performed (D) to assure both safety and clinical competence. The shaded portion represents the opportunity for morbidity and mortality reduction through accelerated acquisition and decelerated decline of skills (large arrows).

This study has several strengths. We confirmed hypothesized relationships between instrument scores and key variables, and also presented evidence of valid content and acceptable reliability. The study enrolled learners from several specialties and had a sample size sufficient to confirm anticipated associations across different disciplines.

This study also has limitations. We do not know how this instrument would perform in other contexts, such as workshops using different instructional techniques, no workshop, or different institutions. We did not assess trainee performance with real patients and thus cannot comment on how these scores translate to clinical contexts. We reviewed videotaped procedures and do not know how this instrument might perform in a real-time simulation setting. Finally, we acknowledge that other methods to determine the composite score could be proposed. However, we felt that an unweighted sum would provide the best measurement in initial study and plan to explore alternate scoring rubrics in future research.

Although a passing score does not imply the ability to perform the procedure independently, it provides a metric by which educators can ensure a minimum level of proficiency before allowing trainees to perform such procedures on patients under supervision. The education and assessment tool of simulation can facilitate fair, reproducible, objective measures of performance, which, in turn, reflect a necessary first step in promoting patient safety. The impact of simulation assessments, if properly developed and evaluated, on the safe and scientific delivery of health care could be profound. Future research might explore the use of this instrument in other contexts (including live rather than videotaped procedures), by other raters, and using different scoring rubrics. Future work might also examine other components of the competence continuum, including observable clinical performance and outcome metrics at the bedside.

Author contributions: Dr Dong: contributed to the study design, conduct of the study, data interpretation, and writing the manuscript and approved the final version.

Dr Suri: contributed to the study design, conduct of the study, data interpretation, and writing the manuscript and approved the final version.

Dr Cook: contributed to the study design, conduct of the study, data interpretation, and writing the manuscript and approved the final version.

Dr Kashani: contributed to the study design, conduct of the study, data interpretation, and writing the manuscript and approved the final version.

Dr Mullon: contributed to the study design, conduct of the study, data interpretation, and writing the manuscript and approved the final version.

Dr Enders: contributed to the study design, conduct of the study, data interpretation, and writing the manuscript and approved the final version.

Dr Rubin: contributed to the study design, conduct of the study, data interpretation, and writing the manuscript and approved the final version.

Dr Ziv: contributed to the study design, conduct of the study, data interpretation, and writing the manuscript and approved the final version.

Dr Dunn: contributed to the study design, conduct of the study, data interpretation, and writing the manuscript and approved the final version.

Financial/nonfinancial disclosures: The authors have reported to CHEST that no potential conflicts of interest exist with any companies/organizations whose products or services may be discussed in this article.

Other contributions: The authors thank Wojciech Pawlina, MD, Ognjen Gajic, MD, and Haim Berkenstadt, MD, for contributions to the workshop and this manuscript and Rachel Dunn for technical assistance in preparation of this article.

Abbreviations: CVC = central venous catheterization; IJ = internal jugular; SC = subclavian

References

1. McGee DC, Gould MK. Preventing complications of central venous catheterization. N Engl J Med. 2003;348(12):1123-1133.
2. Merrer J, De Jonghe B, Golliot F, et al; French Catheter Study Group in Intensive Care. Complications of femoral and subclavian venous catheterization in critically ill patients: a randomized controlled trial. JAMA. 2001;286(6):700-707.
3. Raad I, Darouiche R, Dupuis J, et al; Texas Medical Center Catheter Study Group. Central venous catheters coated with minocycline and rifampin for the prevention of catheter-related colonization and bloodstream infections. A randomized, double-blind trial. Ann Intern Med. 1997;127(4):267-274.
4. Fares LG II, Block PH, Feldman SD. Improved house staff results with subclavian cannulation. Am Surg. 1986;52(2):108-111.
5. Sznajder JI, Zveibil FR, Bitterman H, Weiner P, Bursztein S. Central vein catheterization. Failure and complication rates by three percutaneous approaches. Arch Intern Med. 1986;146(2):259-261.
6. Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest. 2008;133(1):56-61.
7. Mayo PH, Hackney JE, Mueck JT, Ribaudo V, Schneider RF. Achieving house staff competence in emergency airway management: results of a teaching program using a computerized patient simulator. Crit Care Med. 2004;32(12):2422-2427.
8. Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Anaesthetists' non-technical skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth. 2003;90(5):580-588.
9. Malec JF, Torsher LC, Dunn WF, et al. The Mayo high performance teamwork scale: reliability and validity for evaluating key crew resource management skills. Simul Healthc. 2007;2(1):4-10.
10. Holcomb JB, Dumire RD, Crommett JW, et al. Evaluation of trauma team performance using an advanced human patient simulator for resuscitation training. J Trauma. 2002;52(6):1078-1085.
11. Gaba DM, Howard SK, Flanagan B, Smith BE, Fish KJ, Botney R. Assessment of clinical performance during simulated crises using both technical and behavioral ratings. Anesthesiology. 1998;89(1):8-18.
12. Blum MG, Powers TW, Sundaresan S. Bronchoscopy simulator effectively prepares junior residents to competently perform basic clinical bronchoscopy. Ann Thorac Surg. 2004;78(1):287-291.
13. Edmond CV Jr. Impact of the endoscopic sinus surgical simulator on operating room performance. Laryngoscope. 2002;112(7 Pt 1):1148-1158.
14. Fried GM, Feldman LS, Vassiliou MC, et al. Proving the value of simulation in laparoscopic surgery. Ann Surg. 2004;240(3):518-525.
15. Patel AD, Gallagher AG, Nicholson WJ, Cates CU. Learning curves and reliability measures for virtual reality simulation in the performance assessment of carotid angiography. J Am Coll Cardiol. 2006;47(9):1796-1802.
16. Britt RC, Reed SF, Britt LD. Central line simulation: a new training algorithm. Am Surg. 2007;73(7):680-682.
17. Jensen AR, Sinanan MN. Using simulation-based training to improve clinical outcomes: central venous catheter placement as a model for programmed training. Semin Colon Rectal Surg. 2008;19(2):64-71.
18. Britt RC, Novosel TJ, Britt LD, Sullivan M. The impact of central line simulation before the ICU experience. Am J Surg. 2009;197(4):533-536.
19. Huang GC, Newman LR, Schwartzstein RM, et al. Procedural competence in internal medicine residents: validity of a central venous catheter insertion assessment instrument. Acad Med. 2009;84(8):1127-1134.
20. Messick S. Validity. In: Linn RL, ed. Educational Measurement. 3rd ed. Phoenix, AZ: American Council on Education and Oryx Press; 1988:13.
21. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association; 1999.
22. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166.e7-e16.
23. Hind D, Calvert N, McWilliams R, et al. Ultrasonic locating devices for central venous cannulation: meta-analysis. BMJ. 2003;327(7411):361-364.
24. O'Grady NP, Alexander M, Dellinger EP, et al; Centers for Disease Control and Prevention. Guidelines for the prevention of intravascular catheter-related infections. MMWR Recomm Rep. 2002;51(RR-10):1-29.
25. Institute for Healthcare Improvement. Implement the Central Line Bundle. http://www.ihi.org/IHI/Topics/CriticalCare/IntensiveCare/Changes/ImplementtheCentralLineBundle.htm. Accessed October 10, 2009.
26. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159-174.
27. Cox CE, Carson SS, Ely EW, et al. Effectiveness of medical resident education in mechanical ventilation. Am J Respir Crit Care Med. 2003;167(1):32-38.
28. Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000.
29. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
30. Reid PP, Compton WD, Grossman JH, et al. Building a Better Delivery System: A New Engineering/Health Care Partnership. Washington, DC: National Academy of Engineering; 2005.
31. Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA. 1999;282(9):861-866.
32. Gaba DM. The future vision of simulation in health care. Qual Saf Health Care. 2004;13(suppl 1):i2-i20.
33. Dunn W, Murphy JG. Simulation: about safety, not fantasy. Chest. 2008;133(1):6-9.
34. Weiss KB. Introductory remarks by the president of the American Board of Medical Specialties. Acad Emerg Med. 2008;15(11):982-983.
35. Westerman SJ, Davies DR. Acquisition and application of new technology skills: the influence of age. Occup Med (Lond). 2000;50(7):478-482.
36. Friedman Z, Siddiqui N, Katznelson R, Devito I, Davies S. Experience is not enough: repeated breaches in epidural anesthesia aseptic technique by novice operators despite improved skill. Anesthesiology. 2008;108(5):914-920.
 

Figures

Figure 1. Workflow of the Central Line Workshop. Learners completed workshop prerequisites to achieve defined cognitive mastery standards prior to participation in an experiential (simulation) component of the Central Line Workshop. Supported by discreet audio-video documentation, the Central Venous Catheterization (CVC) Proficiency Scale assessment was conducted as part of the Certification Station.
Figure 2. Composite score stratifies groups. Box-and-whisker plot of composite score for trainees with low, moderate, and high experience with central line catheterization in the preceding 2 years. The boxes and whiskers show the median and the 25th and 75th percentiles of the CVC Proficiency Scale composite score. Initial analysis using the Kruskal-Wallis test indicated a significant difference among groups (P = .02). P values indicate the results of follow-up pairwise Wilcoxon rank sum comparisons. CL = central line. See Figure 1 legend for expansion of other abbreviation.
Figure 3. Skill acquisition curve: impact of zero-risk training. This plot represents skill acquisition vs time for a learner of CVC placement (or other clinical procedural skills). Simulation-enhanced training provides opportunity for skill acquisition. Plot a represents the natural progression of skill for an individual learning through traditional methods (on patients). Plot b (dotted line) represents the acceleration of learning that would be expected through simulation-based training, such as the described Central Line Workshop. Metric assessment (eg, composite score) is depicted on the y-axis. Point A represents initiation of training for a novice learner. Following adequate training, the plateau phase is reached (B). Skill decay occurs unless skills are reinforced by additional experience or realistic simulation (small arrows, *). A primary goal of the composite score described is to locate individuals on this conceptual curve while in the zero-risk environment, facilitating each learner reaching above the level of the "safety standard." Additional bedside training and assessment are subsequently performed (D) to assure both safety and clinical competence. The shaded portion represents the opportunity for morbidity and mortality reduction through accelerated acquisition and decelerated decline of skills (large arrows).
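The analytic approach described in Figure 2 and Table 3, an omnibus Kruskal-Wallis test across the three experience groups followed by pairwise Wilcoxon rank-sum comparisons only when the omnibus test is significant, can be sketched with SciPy. The composite scores below are hypothetical illustration values, not the study's data.

```python
from scipy import stats

# Hypothetical CVC Proficiency Scale composite scores (0-15) for three
# experience groups; illustrative values only, not the study's data.
low      = [8, 9, 10, 9, 11, 10]
moderate = [11, 12, 11, 13, 12]
high     = [13, 14, 13, 15, 14, 14]

# Omnibus nonparametric test across all experience groups.
h, p_kw = stats.kruskal(low, moderate, high)
print(f"Kruskal-Wallis: H = {h:.2f}, P = {p_kw:.4f}")

# Follow-up pairwise Wilcoxon rank-sum comparisons, performed only when
# the omnibus test is significant (as in Figure 2 and Table 3).
if p_kw < 0.05:
    pairs = {"low vs moderate": (low, moderate),
             "moderate vs high": (moderate, high),
             "low vs high": (low, high)}
    for name, (a, b) in pairs.items():
        stat, p = stats.ranksums(a, b)
        print(f"{name}: P = {p:.4f}")
```

Conducting pairwise tests only after a significant omnibus result is one common way to limit the number of comparisons; a formal multiplicity correction (eg, Bonferroni) could also be applied to the pairwise P values.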

Tables

Table 1 —Level of Training of Workshop Participants

CVC = central venous catheterization.

Table 2 —Item-Level Performance During Certification Station

Each participant was rated as performing/not performing each action correctly. Correct performance of each step contributed one point to the composite score. ID = identification; IJ = internal jugular; SC = subclavian. See Table 1 for expansion of other abbreviations.

a κ not defined for this item because rater codes did not form a 2 × 2 square; raters agreed on 38/40 observations (raw agreement, 95%) for this item.

b Assessment stopped early (prior to SC attempt) for two observations; thus, performance was assessed for only 38 trainees and κ was calculated using a subset of 38 observations for this item.
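The footnote above reflects a known limitation of Cohen's κ: when the raters' codes do not span a full 2 × 2 table (eg, one rater marks every trainee as "performed"), κ collapses to 0 or becomes undefined even when raw agreement is high, which is why raw agreement was reported for that item. A minimal sketch with hypothetical ratings:

```python
def raw_agreement(r1, r2):
    # Fraction of observations where the two raters gave the same code.
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    # Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e),
    # where p_e is the expected agreement from the raters' marginals.
    n = len(r1)
    codes = set(r1) | set(r2)
    p_o = raw_agreement(r1, r2)
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n) for c in codes)
    if p_e == 1.0:
        return None  # undefined: eg, both raters used a single code
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings (1 = performed correctly, 0 = not) for 40 trainees.
# Rater 2 codes every trainee as "performed", so the contingency table is
# not a full 2 x 2 square; kappa degenerates to 0 despite 95% agreement.
rater1 = [1] * 38 + [0] * 2
rater2 = [1] * 40
print(raw_agreement(rater1, rater2))   # 0.95
print(cohens_kappa(rater1, rater2))    # 0.0
```

Here 38/40 agreements give 95% raw agreement, yet expected chance agreement equals observed agreement, so κ is 0; with perfectly uniform codes κ is undefined outright.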

Table 3 —Comparison of Procedure Time and Number of Procedure Attempts by Procedural Experience

Data are given as median (range). Time was measured from when the physician first greeted the “patient” until successful cannulation. P values represent results of Kruskal-Wallis tests across all experience groups. Pairwise comparisons were performed only for outcomes with significant Kruskal-Wallis tests. See Tables 1 and 2 for expansion of abbreviations.

a Pairwise comparison: low vs moderate, P = .01; moderate vs high, P = .003; low vs high, P = .0006.

b Pairwise comparison: low vs moderate, P = .01; moderate vs high, P = .002; low vs high, P < .0001.

References

McGee DC, Gould MK. Preventing complications of central venous catheterization. N Engl J Med. 2003;348(12):1123-1133.

Merrer J, De Jonghe B, Golliot F, et al; French Catheter Study Group in Intensive Care. Complications of femoral and subclavian venous catheterization in critically ill patients: a randomized controlled trial. JAMA. 2001;286(6):700-707.

Raad I, Darouiche R, Dupuis J, et al; Texas Medical Center Catheter Study Group. Central venous catheters coated with minocycline and rifampin for the prevention of catheter-related colonization and bloodstream infections: a randomized, double-blind trial. Ann Intern Med. 1997;127(4):267-274.

Fares LG II, Block PH, Feldman SD. Improved house staff results with subclavian cannulation. Am Surg. 1986;52(2):108-111.

Sznajder JI, Zveibil FR, Bitterman H, Weiner P, Bursztein S. Central vein catheterization: failure and complication rates by three percutaneous approaches. Arch Intern Med. 1986;146(2):259-261.

Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest. 2008;133(1):56-61.

Mayo PH, Hackney JE, Mueck JT, Ribaudo V, Schneider RF. Achieving house staff competence in emergency airway management: results of a teaching program using a computerized patient simulator. Crit Care Med. 2004;32(12):2422-2427.

Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Anaesthetists' non-technical skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth. 2003;90(5):580-588.

Malec JF, Torsher LC, Dunn WF, et al. The Mayo High Performance Teamwork Scale: reliability and validity for evaluating key crew resource management skills. Simul Healthc. 2007;2(1):4-10.

Holcomb JB, Dumire RD, Crommett JW, et al. Evaluation of trauma team performance using an advanced human patient simulator for resuscitation training. J Trauma. 2002;52(6):1078-1085.

Gaba DM, Howard SK, Flanagan B, Smith BE, Fish KJ, Botney R. Assessment of clinical performance during simulated crises using both technical and behavioral ratings. Anesthesiology. 1998;89(1):8-18.

Blum MG, Powers TW, Sundaresan S. Bronchoscopy simulator effectively prepares junior residents to competently perform basic clinical bronchoscopy. Ann Thorac Surg. 2004;78(1):287-291.

Edmond CV Jr. Impact of the endoscopic sinus surgical simulator on operating room performance. Laryngoscope. 2002;112(7 Pt 1):1148-1158.

Fried GM, Feldman LS, Vassiliou MC, et al. Proving the value of simulation in laparoscopic surgery. Ann Surg. 2004;240(3):518-525.

Patel AD, Gallagher AG, Nicholson WJ, Cates CU. Learning curves and reliability measures for virtual reality simulation in the performance assessment of carotid angiography. J Am Coll Cardiol. 2006;47(9):1796-1802.

Britt RC, Reed SF, Britt LD. Central line simulation: a new training algorithm. Am Surg. 2007;73(7):680-682.

Jensen AR, Sinanan MN. Using simulation-based training to improve clinical outcomes: central venous catheter placement as a model for programmed training. Semin Colon Rectal Surg. 2008;19(2):64-71.

Britt RC, Novosel TJ, Britt LD, Sullivan M. The impact of central line simulation before the ICU experience. Am J Surg. 2009;197(4):533-536.

Huang GC, Newman LR, Schwartzstein RM, et al. Procedural competence in internal medicine residents: validity of a central venous catheter insertion assessment instrument. Acad Med. 2009;84(8):1127-1134.

Messick S. Validity. In: Linn RL, ed. Educational Measurement. 3rd ed. Phoenix, AZ: American Council on Education and Oryx Press; 1988:13.

American Educational Research Association; American Psychological Association; National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association; 1999.

Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166.e7-166.e16.

Hind D, Calvert N, McWilliams R, et al. Ultrasonic locating devices for central venous cannulation: meta-analysis. BMJ. 2003;327(7411):361-364.

O'Grady NP, Alexander M, Dellinger EP, et al; Centers for Disease Control and Prevention. Guidelines for the prevention of intravascular catheter-related infections. MMWR Recomm Rep. 2002;51(RR-10):1-29.

Institute for Healthcare Improvement. Implement the Central Line Bundle. http://www.ihi.org/IHI/Topics/CriticalCare/IntensiveCare/Changes/ImplementtheCentralLineBundle.htm. Accessed October 10, 2009.

Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159-174.

Cox CE, Carson SS, Ely EW, et al. Effectiveness of medical resident education in mechanical ventilation. Am J Respir Crit Care Med. 2003;167(1):32-38.

Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000.

Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.

Reid PP, Compton WD, Grossman JH, et al. Building a Better Delivery System: A New Engineering/Health Care Partnership. Washington, DC: National Academy of Engineering; 2005.

Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA. 1999;282(9):861-866.

Gaba DM. The future vision of simulation in health care. Qual Saf Health Care. 2004;13(suppl 1):i2-i10.

Dunn W, Murphy JG. Simulation: about safety, not fantasy. Chest. 2008;133(1):6-9.

Weiss KB. Introductory remarks by the president of the American Board of Medical Specialties. Acad Emerg Med. 2008;15(11):982-983.

Westerman SJ, Davies DR. Acquisition and application of new technology skills: the influence of age. Occup Med (Lond). 2000;50(7):478-482.

Friedman Z, Siddiqui N, Katznelson R, Devito I, Davies S. Experience is not enough: repeated breaches in epidural anesthesia aseptic technique by novice operators despite improved skill. Anesthesiology. 2008;108(5):914-920.
 