
The Science of Continuing Medical Education: Terms, Tools, and Gaps: Effectiveness of Continuing Medical Education: American College of Chest Physicians Evidence-Based Educational Guidelines

Dave Davis, MD; Georges Bordage, MD, PhD; COL Lisa K. Moores, MC, USA, FCCP; Nancy Bennett, PhD; Spyridon S. Marinopoulos, MD, MBA; Paul E. Mazmanian, PhD; Todd Dorman, MD; Douglas McCrory, MD
Author and Funding Information

From the Association of American Medical Colleges (Dr. Davis), Washington, DC; the University of Illinois at Chicago (Dr. Bordage), Chicago, IL; the Uniformed Services University of the Health Sciences (COL Moores), Gaithersburg, MD; Harvard Medical School-Massachusetts General Hospital (Dr. Bennett), Brookline, MA; The Johns Hopkins University School of Medicine (Drs. Marinopoulos and Dorman), Baltimore, MD; Virginia Commonwealth University (Dr. Mazmanian), Richmond, VA; and the Duke Center for Clinical Health Resources (Dr. McCrory), Durham, NC.

Correspondence to: Dave Davis, MD, Continuing Health Care Education and Improvement, Association of American Medical Colleges, Washington, DC 20037-1127; e-mail: ddavis@aamc.org


Reproduction of this article is prohibited without written permission from the American College of Chest Physicians (www.chestjournal.org/misc/reprints.shtml).


Chest. 2009;135(3_suppl):8S-16S. doi:10.1378/chest.08-2513

Background:  By its synthesis of a selected portion of the continuing medical education (CME) literature, the evidence-based practice center (EPC) review discovered several major issues in primary study design and in the systematic review process of CME studies. Through this process, the review speaks to the need for clarity in designing, reporting, and synthesizing CME trials and provides an opportunity to advance the research agenda in this field.

Methods:  The evidence-based guideline (EBG) committee reviewed the methods section of the EPC report and these guidelines in detail, commenting on the search and review process and on the nature of the primary literature and the definitions used within it, comparing these to other published standardized measures.

Results:  Although the EBG committee noted much strength in the EPC review, limitations of the primary literature and the review methodology were identified and defined. These strengths and limitations hold implications for further research in this area.

Conclusions:  Noting these limitations and in order to move the field forward, the EBG committee proposes a standard nomenclature of terms in common use in CME; a more rigorous process of searching, distilling, and synthesizing the primary literature in this area; and a common format on which to base the development and description of future trials of CME interventions.

  • 1a. We suggest that the AMA definition of CME and the terms articulated in these guidelines (or their modifications) be consistently employed by CME practitioners and researchers as a basis for the development and study of CME interventions.

  • 1b. We suggest widespread dissemination, elaboration, and clarification of these terms by journal editors, by professional societies, and by the research community.

  • 2. We suggest that increased funding be made available to CME research, enabling use of the most rigorous methods in original studies and systematic reviews. We recommend that such funding be carefully determined by the scope and precision of the research question in each case.

  • 3. We suggest that searches employ an information specialist and extend beyond the traditional medical educational literature to incorporate databases established to encompass CME's role in quality improvement, guideline utilization, managed care, business and organizational development, informatics, and other domains.

  • 4. We suggest that systematic review processes of CME interventions undertake rigorous efforts to ensure high levels of definitional agreement, independent data abstraction by more than one researcher, and assessment of interrater reliability.

  • 5. We suggest that systematic reviews of studies of CME interventions define and employ well-described and commonly agreed-on constructs of what constitutes positive, negative, and mixed outcomes. In this process, careful attention should be paid, where methodologically feasible, to questions of statistical, educational, and clinical significance and of the magnitude of the effect (eg, effect size, coefficient of determination).

  • 6. We suggest that standardized definitions, methods, and reporting structures be developed and used for future research, systematic reviews, and guidelines.

  • 7. We suggest that researchers explicitly consider the inclusion and documentation of teaching and learning principles in the design and implementation of further trials of CME. In addition, we suggest that, whenever possible, trials be designed to study the educational outcomes of such variables.

  • 8a. We suggest that comprehensive models of change, such as those developed in knowledge translation, be employed when studies of the effect of CME are undertaken in order to consider and assess the role of unaccounted and dependent variables.

  • 8b. We suggest that future studies of CME interventions incorporate full descriptions of elements expressed in the Continuing Healthcare Education Study Template.

  • 8c. We suggest that randomized controlled studies be performed with a clear definition of intervention and comparison or control groups, measure their effects at multiple points postintervention, and pay close attention to issues of participation and dropout.

  • 8d. We suggest that researchers consider the value of rigorous observational, ethnographic, and other qualitative study methods and use them either separately or in conjunction with quantitative methods and designs.

  • 9. We suggest that leaders in medical education and related fields foster (1) the identification of high-priority research topics in CME research that would span the broad scope of CME and (2) the conduct of scientifically rigorous studies of the process and effectiveness of CME.

In preparing this systematic review and its subsequent report, the evidence-based guideline (EBG) committee discovered several elements in the continuing medical education (CME) research literature that, if addressed, would advance our understanding of the mechanisms by which CME works and the best means of studying them. The committee's effort was aided by a review of the methods and findings of the evidence-based practice center (EPC) report, in addition to input from the two principal investigators of the report (S.S.M. and T.D.). They used their knowledge of the details of the methods to help the committee prepare a rigorous critique of the report and to provide guidance for future researchers. The process of reflection, self-study, and improvement is the hallmark of any discipline1; it can inform the practice of CME and by doing so, can effect changes in practice performance and possibly health care outcomes.2

This article is divided into two sections. The first section addresses the process and findings of the EPC report, noting its strengths and weaknesses, and resultant limitations to the conclusions of the report. The second section attempts to look forward and to construct a more robust approach on which to build future research in CME.

The implications for the physician-learner and physician-teacher are significant. From a physician-learner perspective, CME has been, and still is, regarded as participation in short courses or conferences in order to fulfill a time-based credit system. In recent years, the American Medical Association (AMA) and certification boards have broadened the types of CME activities in which the physician can and should participate. From a physician-teacher perspective, this will only increase the need for, and the expected quality of, CME activities in the future, placing greater demands for rigor and scientific quality on physician-teachers. Future research in CME is needed, particularly research that applies a rigorous process of retrieving, extracting, and synthesizing data from CME activities and the literature.

As noted elsewhere in this supplement, the heterogeneity of the studies and the lack of a clear understanding and description of the interventions limited interpretation of the data and subsequent recommendations. To increase the rigor of such studies and to permit more appropriate comparisons among them, we considered the methods and findings of the EPC in the following three areas: its use of nomenclature, its systematic review process, and the nature of the primary literature itself.

Taxonomy (the Language of CME)

There are several reasons for confusion in the application of the “language” of CME. First, the widely held perception of CME is generally one of the short course or conference and the credit systems that attend it. However, many definitions, such as that provided in the next paragraph by the AMA, encompass a much broader scope. Second, studies of physician learning and change and the contexts in which they occur have been enriched, although possibly made more complex, by the addition of scholarship from diverse disciplines. For example, educators might use the term outreach visits or distance education to describe education delivered during visits to remote sites, whereas health services researchers might call the same intervention academic detailing. Third, new, parallel fields of study (eg, in health services research) have used terms like implementation tools to describe a wide variety of strategies (eg, reminders at the point of care, audit and feedback, among others) that some may consider CME techniques. These discrepancies lead us to call for a standard set of definitions for developing and comparing interventions and methods. These definitions attempt to describe interventions and techniques in CME; their intent is to be comprehensive, not exhaustive, and they are not meant to describe the learning process undertaken by physicians and others.

A Definition of CME

The AMA3 defines CME as all “educational activities that serve to maintain, develop, or increase the knowledge, skills, and professional performance and relationships a physician uses to provide services for patients, the public, or the profession.” Further, the AMA3 defines the content of CME as “that body of knowledge and skills generally recognized and accepted by the profession as within the basic medical sciences, the discipline of clinical medicine, and the provision of health care to the public.”

However, within this broad framework exists a wide variety of strategies, interventions, and techniques that require clarity of definition for the readers of these guidelines, including CME providers, physicians, policymakers, government regulators, funders, and others. In an attempt to develop a common language for those involved in the research and practice of CME and to set the stage for future research and understanding in this field, we provide here an array of definitions. To accomplish this goal, the EBG committee first reviewed the definitions used by the investigators at Johns Hopkins in preparing the EPC report. Where the committee agreed that these definitions were appropriate, they were maintained; where a definition appeared insufficient or confusing, we developed or used alternate definitions, borrowing heavily from existing texts and other resources.4-9 A listing of final proposed definitions is provided in Appendix 1 of the EPC report.

Instructional Models, Strategies, Methods, and Media: A Classification Framework

The EPC review used two terms that overlapped. The review defined media methods as the means by which information is conveyed (eg, live, Internet, print) and educational techniques or methods as strategies to be used within the context of media (eg, case discussion in a live presentation). The EBG committee searched for alternative classifications and definitions that might clarify these educational constructs. Three sources were useful in distinguishing instructional models, strategies, methods, and media when looking at instructional practices.4-6

Based on these three sources, we recommend the following framework and definitions:

  1. Instructional models represent “the broadest level of instructional practices and present philosophical orientation to instruction.”8 These can be divided into the following four families, based on how students learn: the social interaction family (eg, group investigation, social inquiry), the information-processing family (eg, inductive thinking, concept attainment, scientific inquiry), the personal family (eg, nondirective teaching, self-actualization), and the behavioral systems family (eg, mastery learning).9

  2. Instructional strategies represent “approaches a teacher may take to achieve learning objectives,”8 such as direct instruction, indirect instruction, interactive instruction, experiential learning, or independent study.

  3. Instructional methods (or techniques) represent the ways that teachers use “to create learning environments and to specify the nature of the activity in which the teacher and learner will be involved,” such as lectures (presentations, audioconferences), readings, discussions (seminars, small groups), tutorials, problem-solving exercises, practice sessions, self-instruction (programmed instruction), learning projects, cooperative group learning, games, simulations, laboratories, case studies, role playing, role modeling, demonstrations, audits (reviews), academic detailing, feedback or debriefing sessions, gap analyses (needs assessment as educational intervention), opinion leaders (educational influentials), case-based or problem-based learning, mentoring (preceptorship, traineeship), workshops, train the trainers, or writing-authoring. “While particular methods are often associated with certain strategies, some methods may be found within a variety of strategies.”8

  4. Instructional materials and media represent “the materials that teachers use to teach and students use to learn,”10 such as text (printed or digital), speech (audiotapes or compact discs), images (pictures, cards, videotapes, or compact discs), persons (real or simulated patients), audience-response systems, or computers and the Internet (online and offline, Webcast).

The design of effective instruction is based on the educational needs and goals of the learners that guide the particular mix of instructional models, strategies, methods, and media used. Accordingly, the EBG committee followed the definition of the EPC and construed the term multimethod as any combination of individual forms of instructional methods (eg, case-based simulation with discussion and opinion leaders) and multimedia as the combination of individual forms of instructional media (eg, text, speech, images).

Specific CME Terms

In addition to broad classifications, the EBG committee noted several definitions that deserve clarification. First, some terms in the report were constructed too narrowly (eg, audio referred to the use of audiotapes alone), too broadly (eg, handheld referred to both laminated cards and personal digital assistants, which are quite different media in terms of capacity and use), or tautologically (ie, using the same word [eg, attitude] to define itself). Second, simulations were treated as separate entities in the EPC review; the EBG committee considers this instructional method integral to the CME process, and it is presented as such in the definitions. Finally, the committee adopted a broader understanding of the term outcomes than that described in the EPC report (methodologic issues are detailed later). For example, Kern et al7 distinguish three types of objectives that could be seen here as end points in the CME process: learner, process, and outcome. In this sense, process may refer to practice or performance change, and outcome to clinical, patient end points or measures, the final construct of which also may include families and populations. Other taxonomies of outcome exist as well. Miller, Dixon, and Kirkpatrick each refer to knowledge, competence, and performance as outcomes, whereas Bloom's taxonomy of cognitive, affective, and psychomotor measures of learning also may prove useful in determining the outcomes of educational interventions.

Specific definitions are listed in Appendix 1 of the EPC report, and an attempt to clarify the description of outcomes is made in the Continuing Healthcare Education Study Template (CHEST), which is described later. We used existing definitions whenever possible, acknowledging that there is some overlap among them.

Guideline Panel Suggestions

  • 1a. We suggest that the AMA definition of CME and the terms articulated in these guidelines (or their modifications) be consistently employed by CME practitioners and researchers as a basis for the development and study of CME interventions.

  • 1b. We suggest widespread dissemination, elaboration, and clarification of these terms by journal editors, by professional societies, and by the research community.

The Process of Reviewing and Synthesizing the CME Literature

Research in CME and related areas will require rigorous primary studies (the subject of the last section in this article) and an equally rigorous process of retrieving, extracting, and synthesizing data from this literature. The guideline briefly summarizes the major steps undertaken in this review, touching on some examples of best practice and on possible weaknesses and limitations, and suggests ways in which the literature and its retrieval and synthesis could be improved. The Methodology article describes in detail the methods the EPC employed. In short, the review undertook the following three major steps: identifying the primary and systematic review literature, selecting studies for review, and abstracting data and assessing study quality.

Comment: Strengths and Limits of the EPC Report

In preparing the EPC report, a sizable amount of literature was retrieved and reviewed with a relatively high degree of consistency, standardization, and rigor. The EPC report and process expanded on previous reviews of CME effectiveness and allowed for the formulation of an EBG, incorporating recommendations by the American College of Chest Physicians guideline committee. The EBG committee noted several particular strengths of the report and its process in addition to the standard process of most systematic reviews in this area. The committee remarked on the efforts of the EPC to rate study quality based on the criteria of Jadad et al.8 In addition, the attempt to characterize and capture features of adult learning in CME studies is unique and noteworthy, as were the efforts to document the “dosing” of the CME intervention (its duration, frequency, and intensity). Further, the EPC's attempt to model the way in which CME works is useful, goes beyond the work of most systematic reviews, and contributes substantially to the field. Finally, the EBG committee endorsed the inclusion of a panel of CME experts who advised on the overall process of the EPC's review.

Despite its many strengths, the EPC report contains several limitations that preclude generalizing its findings. In articulating these limitations, we note the EPC's assessment that the wide scope, limited resources, and strict timelines in this review necessitated “hard methodologic choices” regarding the scope of the key questions and extent and nature of the initial search strategy.

The Literature Base:

First, the EBG committee noted that the CME literature, especially as it applies to quality improvement, cost-effectiveness, and other dimensions of its place in health care, occupies terrain not always identified by the search strategies outlined in this methodology, and the EPC could have employed more comprehensive databases already created for this purpose. For example, the University of Toronto's Research and Development Resources Base in CME9 and the Best Evidence Medical Education Collaboration10 could provide substantial contributions in future systematic reviews. Second, restricting the search to the United States and Canada excludes a sizable body of research generated in other countries with very similar CME and training requirements, including the United Kingdom, Australia, New Zealand, and the Netherlands. Third, the formal US CME provider accreditation process regulates the provider but does not address the manner, methods, or instructional techniques of education, the subject of this report. Fourth, nonrandomized trials with comparator groups were included, allowing volunteer bias that may skew results. Fifth, limiting the search to those studies involving 15 or more physicians excludes studies with fewer subjects but adequate statistical power, as in small group or individualized learning; these are important but often-neglected areas of CME. Finally, the EPC did not include specific search terms such as clinical practice guideline, performance practice, and quality improvement.
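
To illustrate the fifth point, a simple power calculation shows that a comparison enrolling fewer than 15 physicians per group can still be adequately powered when the anticipated effect is large. The sketch below is illustrative only and is not part of the EPC report; the effect size, alpha, and power values are assumptions chosen for the example.

```python
# Illustrative sketch only (not from the EPC report): sample size needed per group
# to detect a large assumed effect with conventional error rates.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.2,  # assumed (large) standardized difference between groups
    alpha=0.05,       # two-sided type I error rate
    power=0.80,       # desired statistical power
)
print(f"Physicians required per group: {n_per_group:.1f}")  # roughly 12, ie, fewer than 15
```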

Data Extraction:

As discussed, the process of data retrieval and documentation may have been affected by the lack of definitional agreement in the primary literature, making the data abstraction process difficult. For example, the EPC review assessed data about duration and frequency of exposure to CME activity, but lack of standardization in the reporting of such data precluded meaningful interpretation. Further, given that the review process was sequential, no efforts could be made to measure and achieve high degrees of interrater reliability between data abstracters and reviewers. The sequential review may have introduced a subjective element into the review of the data, with the potential for variability in data abstraction and interpretation compared to an independent, double-review process.
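
The interrater reliability called for here is commonly summarized with Cohen's kappa. The sketch below is illustrative only; the reviewer judgments are hypothetical and are not data from the review.

```python
# Illustrative sketch only (hypothetical data): chance-corrected agreement between
# two independent data abstracters classifying the same set of studies.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["yes", "yes", "no", "mixed", "no", "yes", "yes", "no", "yes", "mixed"]
reviewer_b = ["yes", "no", "no", "mixed", "no", "yes", "yes", "yes", "yes", "mixed"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1.0 indicate strong agreement beyond chance
```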

Study Outcomes:

The committee was concerned, first, about the extent to which the EPC was able to describe and document study outcomes. The differentiation between primary and secondary study objectives and educational or learning objectives lacked clarity. Although most differences may have been minimal, these may have rendered data abstraction less precise and permitted the possibility that achievement of study objectives of low importance (eg, a secondary study objective) could trump the failure to achieve other objectives of greater clinical or educational significance. Second, the EPC's determination of the degree to which studies met (or failed to meet) their objectives lacked clarity. Studies were determined to have met their objectives by a simple answer of “yes” when most study objectives were met and “no” when they were not (see Methods7a). The word mixed was used to describe an intermediate state, and the word unclear was used when the study design did not permit interpretation. In some instances, the primary literature made interpretation difficult. Third, in some cases (19 of 135, or 14%), self-reports, which are performance outcomes susceptible to subjective interpretation by the respondent,11 were counted as measures of performance. In this context, we suggest that, to assure unbiased assessment, reviews of CME assessing physician performance use only objective measures. Finally, direct comparisons among many studies that used similar interventions and statistical measures were possible. Here, the determination of effect size, where feasible, would have contributed to the body of knowledge about CME effectiveness. Taken together, these strengths and limitations of the review process are important to consider in the interpretation of the EPC report and lead to recommendations regarding further research.
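
Where studies report group means and variances, the effect size referred to above can be calculated directly. The sketch below is illustrative only; the scores are invented, and the pooled-standard-deviation form of Cohen's d is only one of several possible effect-size measures.

```python
# Illustrative sketch only (invented data): Cohen's d as a standardized effect size
# computed from post-intervention scores in an intervention and a comparison group.
from statistics import mean, variance

intervention = [78, 82, 85, 90, 74, 88, 81]  # hypothetical post-CME test scores
control = [70, 75, 72, 80, 68, 77, 73]       # hypothetical comparison-group scores

def cohens_d(group1, group2):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / pooled_var ** 0.5

print(f"Cohen's d: {cohens_d(intervention, control):.2f}")
```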

Research Recommendations

  • 2. We suggest that increased funding be made available to CME research, enabling use of the most rigorous methods in original studies and systematic reviews. We recommend that such funding be carefully determined by the scope and precision of the research question in each case.

  • 3. We suggest that searches employ an information specialist and extend beyond the traditional medical educational literature to incorporate databases established to encompass CME's role in quality improvement, guideline utilization, managed care, business and organizational development, informatics, and other domains.

  • 4. We suggest that systematic review processes of CME interventions undertake rigorous efforts to ensure high levels of definitional agreement, independent data abstraction by more than one reviewer, and assessment of interrater reliability.

  • 5. We suggest that systematic reviews of studies of CME interventions define and employ well-described and commonly agreed-on constructs of what constitutes positive, negative, and mixed outcomes. In this process, careful attention should be paid, where methodologically feasible, to questions of statistical, educational, and clinical significance and of the magnitude of the effect (eg, effect size, coefficient of determination).

Quality Issues in the Primary and Systematic Review Literature
Quality of the Primary and Systematic Review Literature:

This EPC review included considerations of study quality, clearer articulations of which may inform future research in this area. Quality assessment of trials was based on the criteria of Jadad et al,8 noting that because participants in a study of an educational intervention cannot be blinded to the intervention, trials were assessed for evidence that the outcomes evaluation was blinded. Further, the EPC review reported on the validity and reliability of the methods used to measure the effects of CME relative to its selected outcomes measures. The quality of each systematic review was assessed using a tool derived from the main elements of the QUOROM statement12 as a basis, with the addition of questions regarding assessment of publication bias.

Discussion:

Analysis of the effect of a discipline requires a clear and consistent methodology applied at the primary research level and a rigorous process of searching, retrieval, distillation, and synthesis of this primary literature. A clear statement of the expectations of scholars and other experts about the nature and description of primary research in this area is essential, much as in the development and reporting of clinical practice guidelines.13 We applaud the EPC's attempt to move the science of CME forward by quantifying exposure to educational activities in terms of duration and frequency and by determining and using strict inclusion criteria for systematic review. The EBG committee acknowledges the limitations of synthesizing the CME literature, given the heterogeneous nature of the primary studies and their differing audiences, methods, and content areas. Further, we concur with the EPC's assessment of the low quality of study designs, at least as they were defined by this process. Factors that lead to these low-quality studies include variable reporting of details; a dearth of models, conceptual frameworks, or critical reviews of past studies on which to base research; and the lack of valid and reliable evaluation tools. Moreover, the EPC review reveals a relatively nonstandardized approach to CME research in general.

Use and Reporting of Learning Principles:

Finally, the EPC attempted to analyze the studies according to their attention to some of the principles of adult learning, elsewhere called adult education. These principles included enabling learners to be active contributors to their learning, relating content to learners' current work or life experiences, tailoring curricula to learners' current or past experiences, allowing learners to identify their own learning goals and direct their education, allowing them to practice what they learned in simulated activities, providing support to self-directed learners, providing feedback from teachers or peers during active learning, allowing learners to reflect on their learning, and allowing them to observe faculty role modeling. In general, however, this analysis was lacking or was not reported.

Guideline Committee Suggestions

  • 6. We suggest that standardized definitions, methods, and reporting structures be developed and used for future research, systematic reviews, and guidelines. (The EBG committee has attempted to outline an early version of such a construct in the second section of this article, the research agenda.)

  • 7. We suggest that researchers explicitly consider the inclusion and documentation of teaching and learning principles in the design and reporting of further trials of CME. In addition, we suggest that, whenever possible, trials be designed to study the educational outcomes of such variables.

Despite its limitations, the EPC review allowed the committee to make the recommendations in this guideline with some confidence. In contrast, there were many other areas in which the small number and heterogeneity of available studies precluded reaching definitive conclusions regarding the influence of particular factors on the effectiveness of CME. In this section, a broad framework is provided on which further research can occur, describing a theoretical model of research based on the EPC report. We recognize that such a model is only one option but believe that this framework may suffice to characterize research elements and to describe their interrelationships. Further, we attempt to describe a template for designing and reporting trials of CME interventions as a basis for discussion.

A Framework on Which To Model Further CME Research

One framework guided the work of the EPC itself. The EBG committee, using this model and others,19-21 attempted to develop a framework for further research. Such a framework, frequently referred to as knowledge translation, would encourage investigators to address broad areas or variables (eg, the learner) as well as issues within each variable (eg, age, career stage, specialty, gender). Further, such a platform would enable the testing of the ways in which the variables interact and advance the CME research agenda.

In addition to considering the learner, the educational intervention, the practice, and the external environment, we consider the Rogers22 construct of innovation (in this case, new knowledge or skills) to be important. The following characteristics must be considered as innovations are adopted: the nature of the evidence behind the information or change; its complexity; whether it can be observed and tested; the advantage of obtaining this information; and the size of the change (ie, ranging from small changes, such as a minor adjustment to a method or clinical strategy, to large changes requiring a major shift in knowledge or competence, such as acquiring a brand-new manual skill).

We propose that a similar framework be employed whenever complex CME studies of effect are undertaken. To enable this process, the committee described the components of each category, outlined in checklist format in Table 1 (CHEST). To frame the research agenda, the guideline committee suggested a number of questions that pertain to each variable. Finally, there are many overarching questions, among them questions about the interplay among the following variables.

Table 1. The CHEST Framework
The CME Activity or Intervention:

Do the attributes and credibility of the educational provider have an effect on learning and change? What are the variable effects of the CME medium, method, and technique used? What are the effects of the setting of the CME activity on learning? What are the effects of the duration, intensity, and frequency of the intervention, or of its adherence to theories and principles of learning?

The Learner:

What are the effects of age, training, specialty, practice setting, and manner of reimbursement on learning and change?

External Variables:

What are the effects of external regulations, incentives, disincentives, public or patient demand, practice setting, and the presence of teams on learning, performance, and health outcomes? What are the effects of physician payment systems, including pay for performance? What are the effects of access to information technology resources at the point of care?

The Size, Nature, and Related Characteristics of the Change:

What are the effects of such elements as cost, ability to be observed and tested, complexity, and level of the innovation or information conveyed by the CME activity?

Variables Within the Framework: The CHEST Model for Undertaking and Reporting CME Research

The EBG committee's review of the EPC report leads to a call for further clarity about the nature and elements of CME research. To this end, the committee developed the CHEST (Table 1). The template attempts to help ensure that researchers attend to and report the details, among others, of the appropriateness and use of study methods; the participants, including their settings, workload, and other practice considerations; intervention media, methods, and techniques; outcome measures, units of analysis, and allocation; follow-up of participants; the quality, blinding, and reliability of evaluation measures; protection against contamination; ethical approval; and targeted clinical behavior and complexity of change. We expect that further deliberation (see recommendation 11–8) will lead to statements of optimal study design, such as those created for other purposes (eg, QUOROM12); the development of guidelines in this area (eg, the Appraisal of Guidelines Research and Evaluation guideline appraisal instrument14); and reporting of the strength of findings (eg, Scottish Intercollegiate Guidelines Network,15 Grading of Recommendations Assessment, Development, and Evaluation16). For the present, the committee proposes the CHEST checklist as a beginning to add rigor to study design and reporting.
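
As one possible way to operationalize such a checklist, the sketch below records whether each CHEST element has been described and flags omissions. It is a hypothetical illustration, not the committee's instrument; the field names are our own paraphrases of the elements listed above.

```python
# Hypothetical illustration (not the committee's instrument): the CHEST reporting
# elements captured as a simple checklist so that undescribed items can be flagged.
from dataclasses import dataclass, fields

@dataclass
class ChestReport:
    study_methods: str = ""             # appropriateness and use of study methods
    participants: str = ""              # settings, workload, other practice considerations
    intervention: str = ""              # media, methods, and techniques used
    outcome_measures: str = ""          # outcome measures, units of analysis, allocation
    follow_up: str = ""                 # follow-up of participants
    evaluation_quality: str = ""        # quality, blinding, and reliability of measures
    contamination_protection: str = ""  # protection against contamination
    ethical_approval: str = ""          # ethical approval obtained
    targeted_behavior: str = ""         # targeted clinical behavior and complexity of change

    def missing_elements(self):
        """Return the names of checklist elements left undescribed."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

report = ChestReport(study_methods="cluster-randomized trial",
                     participants="community-based primary care physicians")
print(report.missing_elements())  # elements that still need to be reported
```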

Research Methods

Designing a proper study requires considering the complexity and interplay of quantitatively measured variables, the consequences of excluding nonquantitative methods, and the use of randomized controlled trial methodologies, while at the same time understanding the limitations of these methods. For example, the EBG committee noted the natural focus on measurable end points, such as prescribing or test ordering, and their limited ability to describe decision making or other thought and learning processes occurring in the minds and practices of physicians.

Although quantitative methods demonstrate usefulness and form a major part of the biomedical research model, they often do not describe the complexity of change and the change process. Here, qualitative methods such as focus groups, interviews, chart-stimulated recall,17 and other methods may generate rich and explanatory data. Further, such study methods may themselves be subject to a form of meta-analysis.18 Finally, we strongly recommend that future research undertake cost-effectiveness and cost-benefit analyses.

Research Recommendations
Methods

  • 8a. We suggest that comprehensive models of change, such as those developed in knowledge translation, be employed when studies of the effect of CME are undertaken in order to consider and assess the role of unaccounted and dependent variables.

  • 8b. We suggest that future studies of CME interventions incorporate full descriptions of elements expressed in the CHEST.

  • 8c. We suggest that randomized controlled studies be performed with a clear definition of intervention and comparison or control groups, measure their effects at multiple points postintervention, and pay close attention to issues of participation and dropout.

  • 8d. We suggest that researchers consider the value of rigorous observational, ethnographic, and other qualitative study methods and use them either separately or in conjunction with quantitative methods and designs.

Content

  • 9. We suggest that leaders in medical education and related fields foster (1) the identification of high-priority research topics in CME research that would span the broad scope of CME and (2) the conduct of scientifically rigorous studies of the process and effectiveness of CME.

High-priority topics include clinical areas where there is a documented gap between best clinical evidence and current practice as well as areas of educational research need. The latter will require the development of strategies for further identifying the variables and prioritizing the gaps in our knowledge about CME.

Abbreviations

AMA = American Medical Association; CHEST = Continuing Healthcare Education Study Template; EBG = evidence-based guideline; EPC = Evidence-based Practice Center; QUOROM = Quality of Reporting of Meta-analyses

Dr. Davis has received grants from the University of Toronto for the redevelopment of a database and $100,000 in grants from the Ministry of Health, Ontario, Canada.

Dr. Bordage has no conflicts of interest to disclose.

COL Moores has no conflicts to disclose.

Dr. Bennett has served as a consultant with Reed Medical Education about CME; amount is under $2,000.

Dr. Marinopoulos received salary support from the Johns Hopkins EPC and the Agency for Healthcare Research and Quality for his role as co-principal investigator of the review of the effectiveness of CME. He also participated in the Genentech Independent Medical Education Advisory Board following the publication of the Agency for Healthcare Research and Quality Evidence Report.

Dr. Mazmanian has no conflicts of interest to disclose.

Dr. Dorman has no relevant financial conflicts to disclose. He previously served as a consultant to an incubator company, Electrocare, Inc., and once owned stock in Visicu, Inc., since bought by Philips.

Dr. McCrory received salary support through his academic institution from both federal contracts and contracts with not-for-profit organizations since this report was completed. He has undertaken two consulting relationships unrelated to this report, including serving on a data-safety monitoring committee for a new device trial and as an expert witness in a lawsuit against a product made by Pfizer.

References

Whitehead AN. Aims of education and other essays. New York, NY: Free Press; 1967.

Fox RD. Using theory and research to shape the practice of continuing professional development. J Contin Educ Health Prof. 2000;20:238-246.

American Medical Association. Glossary of continuing medical education (CME) related organizations, committees, terms and credit programs. Chicago, IL: American Medical Association; 2006.

Saskatchewan Education. Chapter 2: instructional models, strategies, methods and skills. Available at: http://www.sasked.gov.sk.ca/docs/policy/approach/instrapp03.html. Accessed January 9, 2009.

Joyce B, Weil M, Calhoun E. Models of teaching. 7th ed. Boston, MA: Allyn and Bacon; 2004.

Rose DH, Meyer A. Chapter 3: why we need flexible instructional media. Teaching every student in the digital age: universal design for learning. Wakefield, MA: Center for Applied Special Technology; 2002.

Kern DE. Curriculum development for medical education: a six-step approach. Baltimore, MD: Johns Hopkins University Press; 1998.

Marinopoulos SS, Baumann MH. Methods and definitions of terms: effectiveness of continuing medical education: American College of Chest Physicians evidence-based educational guidelines. Chest. 2009;135(3 suppl):17S-28S.

Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996;17:1-12.

Research and Development Resource Base. A searchable literature database for the health professions. Available at: http://128.100.115.20/. Accessed January 9, 2009.

BEME Collaboration. Reviews. Available at: http://www.bemecollaboration.org/beme/pages/index.html. Accessed January 9, 2009.

Davis DA, Mazmanian PE, Fordis M, et al. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296:1094-1102.

Clarke M. The QUOROM statement. Lancet. 2000;355:756-757.

Davis D, Palda V, Drazin Y, et al. Assessing and scaling the knowledge pyramid: the good-guideline guide. CMAJ. 2006;174:337-338.

AGREE Collaboration. Appraisal of guidelines research and evaluation (AGREE) instrument. Available at: http://www.agreecollaboration.org. Accessed January 9, 2009.

Harbour R, Miller J. A new system for grading recommendations in evidence based guidelines. BMJ. 2001;323:334-336.

Guyatt G, Gutterman D, Baumann MH, et al. Grading strength of recommendations and quality of evidence in clinical guidelines: report from an American College of Chest Physicians task force. Chest. 2006;129:174-181.

Goulet F, Jacques A, Gagnon R, et al. Assessment of family physicians' performance using patient charts: interrater reliability and concordance with chart-stimulated recall interview. Eval Health Prof. 2007;30:376-392.

Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000;283:2008-2012.

Kaufman DM. Applying educational theory in practice. BMJ. 2003;326:213-216.

Mann KV. The role of educational theory in continuing medical education: has it helped us? J Contin Educ Health Prof. 2004;24(suppl):S22-S30.

Davis D, Evans M, Jadad A, et al. The case for knowledge translation: shortening the journey from evidence to effect. BMJ. 2003;327:33-35.

Rogers EM. Diffusion of innovations. 5th ed. New York, NY: Free Press; 2003.
 
