In a recent webinar, Derek Dietze, MA, FACEHP, CCMEP, president, and Christopher Viereck, PhD, CCMEP, chief scientific officer, both of Improve CME LLC, outlined five key ways to make measuring the outcomes of continuing medical education activities successful and, if not easy, at least not a hassle that requires a staff biostatistician.
1. Plan ahead—way ahead.
- Have a detailed, descriptive outcomes assessment plan at the earliest stages of activity development or grant request.
- Develop assessment questions in concert with content, not as an afterthought.
- Look at the resources you have available to create assessment questions, implement the plan, and analyze/report the results. If you want to measure changes in performance in practice, or actual patient outcomes, do you know when, where, and how you’ll get that data? “Asking those questions gives you a reality check,” said Dietze.
2. Measure to your educational design.
- Your content, and your outcomes measures, will depend on where you expect changes to occur—knowledge/awareness, confidence, competence, performance, or patient outcomes.
- Use your organization’s mission to help guide your outcomes strategy.
- Be very specific about the level to which you plan to measure outcomes. Many use Moore's levels:
Moore’s Levels
Level 1 Participation
Level 2 Satisfaction
Level 3A Learning: Declarative Knowledge (Knows)
Level 3B Learning: Procedural Knowledge (Knows How)
Level 4 Learning: Competence (Shows How)
Level 5 Performance (Does)
Level 6 Patient Health
Level 7 Community Health
- Determine whether you need objective assessments, including chart pulls and quality assurance/quality improvement and patient safety data, which will require more resources but may be necessary for higher-level outcomes, or whether you can make do with subjective measures, such as self-reported performance change and observations of patient outcomes. Dietze said, “You can always measure beyond your design, but if your content is focused on enhancing knowledge and creating intentions to improve performance, measuring beyond that may not yield optimal results.”
- In your grant requests, specify not just the outcome level you’re aiming for but also what that means to you, said Viereck. For example, to measure Level 5 outcomes, will you use objective chart pulls or subjective self-reported data? Spell it out so there are no misunderstandings that could affect your credibility in the future.
3. Align, align, align.
- First, determine what you want to change in your learners based on your needs assessment/gap analysis work. Then choose objectives that are realistic for your setting, a design and format that will help you fill those gaps, and content that addresses the learning objectives focused on those gaps. From there you can craft outcomes questions that measure how well those gaps have been filled, said Dietze.
- Consider using a chart so you can visualize the connections throughout the alignment process. It also helps to link each objective to a specific gap very early in the process, and to link every outcomes assessment question to a specific learning objective.
4. Keep the plan simple, specific, and realistic.
- Remember that outcomes objectives are not the same as learning objectives, said Dietze. An example of an outcomes objective could be to measure changes in knowledge and competence, then use those results to judge the degree to which the learning objectives were met and to improve future activities.
- Detail the practical methods you plan to use (QA/QI and patient safety data, a commitment-to-change question in an evaluation at the end of the activity, or an audience-response system to ask pre- and post-activity knowledge, confidence, and competence questions). If you have a multi-component initiative, describe how you plan to assess each component. “Going through these details helps you understand the resources you will need,” said Dietze.
- Don’t be afraid to explain your statistical analysis plan. If you’re going to use descriptive statistics, lay out whether they’ll be percentages, numbers of participants, or mean scores. If you want to assess changes on your pre- and post-activity knowledge questions, describe the statistical tests you’re going to use, said Dietze. (A simple sketch of such an analysis appears after this list.)
- Detail when you plan to deliver what you promised. “A lot of people miss the ‘when,’” said Dietze. If you have a two-year online activity, will you produce your first report after two years, or will there be an interim report?
- Plan how you will deliver the results report. Will it be in PowerPoint? Word? How long will it be? Will it include an executive summary that crystallizes the results, something Dietze and Viereck highly recommend? And with whom do you plan to share it? Are there internal audiences, such as a CME committee, faculty, or learners, that could benefit from the outcomes data? Why not share your results with the clinical and CME communities?
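For readers who want to see what the statistical-analysis bullet above can look like in practice, here is a minimal Python sketch. It assumes you have matched pre- and post-activity knowledge scores for the same learners and that the scipy library is available; the scores are made-up placeholders, not data from the webinar.

```python
# A minimal sketch of a statistical analysis plan for pre/post knowledge questions:
# descriptive statistics plus paired tests on matched learner scores.
# The scores below are hypothetical placeholders (percent correct per learner).
from statistics import mean
from scipy.stats import ttest_rel, wilcoxon

pre_scores = [40, 55, 60, 35, 50, 45, 70, 30]    # % correct before the activity
post_scores = [70, 81, 77, 59, 83, 66, 89, 58]   # % correct after the activity

# Descriptive statistics: number of participants and mean scores.
print(f"n = {len(pre_scores)} matched participants")
print(f"Mean pre-activity score:  {mean(pre_scores):.1f}%")
print(f"Mean post-activity score: {mean(post_scores):.1f}%")

# Paired t-test on matched pre/post scores (assumes roughly normal differences).
t_stat, p_value = ttest_rel(post_scores, pre_scores)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Wilcoxon signed-rank test as a nonparametric alternative for small samples.
w_stat, w_p = wilcoxon(post_scores, pre_scores)
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")
```

The same structure extends to confidence and competence questions; the point is simply to name the descriptive measures and the tests in the plan before the activity runs.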
5. Use multiple types of assessment questions.
- Use agreement-scale questions, confidence questions, and case questions; mix it up so you’re not putting all your eggs in one basket.
A specific outcomes plan helps project managers and CME coordinators implement what you have in mind, said Dietze. Ambiguous plans create misunderstanding, and overly ambitious plans may cause you to “over-promise” and “under-deliver.”
“Having a clear plan helps you make sure you have the internal and external resources lined up to do what you’re promising,” Dietze said.
Tips on Measuring Competence Change
• An easy way to objectively measure competence change: Pre- and post-activity case vignette questions.
• Don’t just ask if learners plan to change their practice. Ask them to check off, from a list, the specific changes in practice that your content was designed to motivate them to make.
• Ask learners how often they currently use the evidence-based clinical practice strategies your content addressed, on a scale of 1 (never) to 5 (always). Then ask how often they now plan to use them, based on what they learned at the activity. “This gives you a measure of current frequency of use and planned frequency of use. When the frequency rating increases, it shows a change in competence—they intend to do something good more often,” said Dietze.
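To illustrate the kind of numbers this current-versus-planned question yields, here is a minimal Python sketch that summarizes ratings on the 1-to-5 scale Dietze describes. The ratings are invented placeholders, not learner data from the webinar.

```python
# A minimal sketch of summarizing current vs. planned frequency of use for one
# evidence-based strategy. Each pair is one learner's self-rating on a
# 1 (never) to 5 (always) scale; the numbers are hypothetical placeholders.
from statistics import mean

current_use = [2, 3, 2, 4, 1, 3, 2, 3]  # how often learners say they do it now
planned_use = [4, 4, 3, 4, 3, 4, 4, 4]  # how often they plan to do it after the activity

shift = mean(planned_use) - mean(current_use)
pct_increasing = sum(p > c for p, c in zip(planned_use, current_use)) / len(current_use) * 100

print(f"Mean current frequency:  {mean(current_use):.2f}")
print(f"Mean planned frequency:  {mean(planned_use):.2f}")
print(f"Mean shift on the 1-5 scale: {shift:+.2f}")
print(f"{pct_increasing:.0f}% of learners plan to use the strategy more often")
```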