Organizers of the Critical Skills for the New CME Paradigm workshop, held May 11 to 12 at the Bellagio in Las Vegas, had two ambitious goals: to teach experienced CME professionals what they need to know to stay ahead in this rapidly evolving field; and to model the principles they were teaching so participants could experience adult education theory in action.

“We wanted to concentrate not only on the content, but also the process,” says Suzanne Murray, principal, AXDEV Global, Norfolk, Va., and president and principal, AXDEV Group Inc., Montreal.

It was an intense, packed two days, and not all of the participants were happy about being taken outside their comfort zones. Even though most were experienced CME providers well-versed in the theory of the “new CME paradigm,” many had not actually gone through a program that forced them to live it. But, as one participant said, “Seeing these adult education principles in action made me understand what is possible with CME activities, which are still mostly lectures. I may not be able to integrate everything into every activity, but now I really see that if these principles can work for CME providers, they can work for my physicians, too.”

Wet Baby Principle

George Mejicano, MD, MS, assistant dean and director of CME, associate professor of medicine, University of Wisconsin School of Medicine and Public Health, Madison; and Lorna Cochrane, PhD, vice president and principal, AXDEV Global, Norfolk, Va., led a session on adult learning principles, based on an extensive reading list participants were given beforehand.

“To be successful, a CME event has to be practical, doable at work, based on true needs, and something learners can use immediately when they go back to work,” said Cochrane. “The only people who like change are wet babies. Adult learners have to see how the change will benefit them and their patients.” Unlike children, adults are not blank slates who simply soak up knowledge; we are constantly comparing what we learn to what we already know, so learning is most likely to result when the activity builds on that existing knowledge. “We are now transitioning from being ‘the sage on the stage’ to the ‘guide on the side,’” said Cochrane. To embody this principle, the session leaders sat in director's chairs around the room and roved constantly among the participants rather than pontificating from the podium.

But it wasn't always easy to translate the theory into practice, according to the workshop organizers at the University of Wisconsin School of Medicine and Public Health, Madison; AXDEV Global; and Steve Passin & Associates, Newtown Square, Pa. As Murray says, “When you look at adult learning principles in academia, you're working in a controlled environment, testing theories. We have a tendency to take a theory and say that's the golden rule, but it's not transferable to a real-world environment due to real-world constraints.” The organizers met resistance from the start, even at the faculty level, on matters as basic as the room setup. Attendees who were asked to provide input ahead of time “were skeptical that we could achieve what we wanted to with half-moon tables instead of classroom style,” Murray said. Despite the resistance, one of the organizers' objectives was to show that the theories can work in the real world, even (or perhaps especially) for experienced CME providers, as participants walked through the components of the CME cycle.

Discovering Docs' Real Needs

Accredited providers know that doing a thorough needs assessment is important, of course, but all too often people use the needs assessment just to confirm what they already want to do in a CME activity. As Sean Hayes, PsyD, vice president, performance optimization solutions, AXDEV Group, Norfolk, Va., said, “I recently was on the phone with a senior academic who said, ‘Why do we need a needs assessment? I've been doing this for years, and I know what they need.’” He added, “A lot of CME has been the tail wagging the dog.” The provider community has to move from just documenting the process to satisfy Accreditation Council for Continuing Medical Education (ACCME) requirements to setting priorities that consider all the stakeholders' needs, and making decisions about program planning, resource allocation, and organizational improvement, he said. “Everything from program design and planning to implementation and evaluation depends on the needs assessment.”

So what is a good needs assessment? One that assesses the gap between actual patient care and what patient care should be, as informed by evidence-based medicine guidelines, key opinion leaders, and clinical research, while accounting for what's actually doable in the trenches. Hayes said, “Most physicians have the knowledge they need, but they don't know how to apply it in the seven minutes they have with the patients in their office. You need to use multiple sources to identify the gap, such as key informant interviews, surveys and questionnaires, and focus groups — and sometimes you'll find needs that don't have an educational solution.”
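To make the gap concept concrete, here is a minimal sketch, in Python, of how a provider might quantify a practice gap from chart-audit data. The guideline target and the audit figures are hypothetical, not drawn from the workshop.

    # Hypothetical sketch: quantifying a practice gap from chart-audit data.
    # The guideline target and audit figures below are invented for illustration.

    GUIDELINE_TARGET = 0.90  # e.g., 90% of diabetic patients get an annual HbA1c test

    def practice_gap(patients_meeting_standard: int, patients_audited: int,
                     target: float = GUIDELINE_TARGET) -> float:
        """Return the shortfall between actual performance and the guideline target."""
        actual = patients_meeting_standard / patients_audited
        return max(0.0, target - actual)

    # A chart review finds 130 of 200 audited patients met the standard.
    gap = practice_gap(130, 200)       # actual = 0.65, so gap = 0.25
    print(f"Practice gap: {gap:.0%}")  # -> Practice gap: 25%

A gap confirmed this way across several sources (interviews, surveys, focus groups) is what justifies, or rules out, an educational intervention.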

It's also important to draw the distinction between needs and wants. For example, many doctors believe they need dinner meetings, or want CME on an antibiotic they're already over-prescribing. “Historically, we tended to focus on perceived needs,” said Murray. “Research and evidence show that when a CME activity hits on both perceived and unperceived needs, it is more effective.”

But, said one participant, how do you convince course directors to do more in-depth needs assessments? “You have to collaborate with faculty [on the process], not hand it over to them,” said Steve Passin, president, Steve Passin & Associates, Newtown Square, Pa. Mejicano added, “We're the education specialists, not them. They may know everything about this infectious disease from Malaysia, but they don't know anything about education.” A participant countered that such an attitude makes as much sense as having doctors set up a course for educators: How can CME providers presume to know more about the needs surrounding a topic than the key opinion leaders or course directors? Hayes reiterated, “You need to use multiple sources to identify the gap.” If a focus group, interviews, and a survey all show there's a gap, that should help make the gap clear to the course director as well. And if a needs assessment comes back that's not good enough, send it back, said Mejicano. “We have to go from ‘out of sight, out of mind’ to a ‘front and center’ mind-set.”

Murray also explained how the organizers went about doing the needs assessment for the workshop itself, which entailed interviewing 20 individuals from a broad spectrum of stakeholders. One participant challenged this, questioning the validity of a 20-person sample. Hayes said, “Large random sampling is a long-standing tradition, but smaller groups of representative people also can be valid.” Qualitative needs assessments can work as well as quantitative ones. “I wish we could have done self-assessments on site, then divided people into separate groups where we could have gotten into depth on specific topics, according to need. But in the real world, we had to determine how much we could do to get the greatest return,” Murray said in a post-workshop interview. Instead, the organizers shifted with the participants, going into more depth when the questions became more intense and skimming over areas people said they already understood, so that participants' needs were continually being assessed and met.

Outlining Objectives

The needs assessment discussion flowed into another session on how to develop learning objectives based on what you learn through the needs assessment process. It's important to distinguish between goals, which are the final destination, and objectives, which are the journey learners take to reach those goals. Objectives focus on what participants will learn or do as a result of attending an activity, whereas goals are broad statements of purpose. For example, a goal may be to improve care for patients with diabetes, while the objectives might involve performing the specific treatments outlined in the national guidelines. Objectives are the link between the identified need and the desired result and are written to address physician performance. That makes them the key to writing outcomes questions: Did participants achieve the objectives?

To write a good objective, it's important to give a context for the learning (patients with a specific condition, for example), use action verbs that show what you're trying to accomplish (gains in knowledge, cognitive or physical skills, or changes in action), and specify a measure against which a learner's performance can be judged, such as X percent compliance with a certain standard of practice. For example, a cognitive objective might be: “Correctly list all of the routine vaccinations for a healthy two-year-old child that are currently recommended by the CDC [Centers for Disease Control and Prevention].” An affective objective (that is, one that deals with emotional issues) might be: “For your terminally ill patient, defend your position on physician-assisted suicide in an ethics committee meeting.” In the physical skills domain, the objective could be for the physician to direct a team through the appropriate Advanced Cardiac Life Support protocol during a simulation of ventricular fibrillation until the mannequin shows a normal cardiac rhythm.

Participants were asked to write two learning objectives for a case study. They were given a handy worksheet listing verbs to use for the various types of objectives. When writing a cognitive objective meant to increase a physician's ability to analyze a situation, consider verbs such as differentiate, classify, compare, and appraise. To describe a knowledge increase in a cognitive objective, good verbs include define, list, recognize, and name. When writing cognitive learning objectives, stay away from terms such as know, learn, increase, think critically, expand horizons, grasp the significance of, appreciate, improve, become, grow, approach, and understand; they're too vague and impossible to measure in terms of outcomes.
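As a rough illustration of that worksheet guidance, here is a small sketch that screens a draft objective for vague terms. The verb lists come from the worksheet as described above; the checking routine itself is an invented illustration.

    # Hypothetical sketch: screening a draft learning objective for vague terms.
    # The verb lists are taken from the workshop worksheet described above.

    ANALYSIS_VERBS = {"differentiate", "classify", "compare", "appraise"}
    KNOWLEDGE_VERBS = {"define", "list", "recognize", "name"}
    VAGUE_TERMS = {"know", "learn", "increase", "appreciate", "improve",
                   "become", "grow", "approach", "understand"}

    def vague_terms_in(objective: str) -> list[str]:
        """Return any vague, unmeasurable single-word terms in a draft objective."""
        words = {word.strip(".,;:").lower() for word in objective.split()}
        return sorted(words & VAGUE_TERMS)

    draft = "Understand the routine vaccinations currently recommended by the CDC."
    problems = vague_terms_in(draft)
    if problems:
        print(f"Rewrite needed; vague terms found: {problems}")
        # A measurable rewrite: "Correctly list all of the routine vaccinations
        # for a healthy two-year-old child currently recommended by the CDC."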

Choosing Your Format

Once you have the needs assessment and objectives down, it's time to link them to the most appropriate learning methods. Again, the organizers provided worksheets connecting each need to the learning method most likely to produce the expected result. For example, a need to enhance knowledge could be met by didactic CME. For a change in attitude, confidence, or beliefs, peer discussion and casework would be more likely to lead to the expected result of participants finding new ways of looking at a problem and of measuring success and failure. Workshop participants then were given several needs assessments and objectives and asked to use what they had just learned to choose an appropriate learning method and activity to get the desired result.
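The worksheet logic, matching the type of need to the learning method most likely to produce the expected result, could be summarized as a simple lookup. This sketch paraphrases only the two examples given above; any other pairings would have to come from the actual worksheets.

    # Hypothetical sketch: linking a type of need to a learning method,
    # paraphrasing the two worksheet examples described above.

    FORMAT_FOR_NEED = {
        "enhance knowledge": "didactic CME (e.g., lecture)",
        "change attitude, confidence, or beliefs": "peer discussion and casework",
    }

    def suggest_format(need: str) -> str:
        """Look up a learning method for a stated need, or flag a missing mapping."""
        return FORMAT_FOR_NEED.get(
            need.lower(), "no pairing in this sketch; revisit the needs assessment"
        )

    print(suggest_format("Enhance knowledge"))  # -> didactic CME (e.g., lecture)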

Reflection also is key to successful CME. Cochrane explained the two types of reflection that educational activities can trigger: reflection that happens immediately, as people begin to apply what they learn to the activity going on around them; and reflection that happens later, when learners think back on what they learned and begin to apply it. As Passin noted in a post-workshop interview, “The simple act of getting people to stop and reflect really makes a difference. If you keep on just sitting there, sucking in the knowledge without taking the time to think about what to do with it or what it means to you, it won't go anywhere.” That's why the workshop organizers built reflection time into the sessions and also provided learning contracts that people could use both for immediate reflection and to look back on later to see what they actually put to use, what they didn't, and why.

Exploring Evaluations

Completing the CME cycle is the evaluation, which then can lead to a new round of needs assessment. ACCME Element 2.4 requires providers to evaluate the effectiveness of their activities in meeting identified educational needs, but to be effective, there has to be more to it than providing consistent evaluations and making sure the documentation is in order. “It's really a continuous assessment model,” said Murray. “The needs assessment is an evaluation. Everything you do after that is measured against those needs. And remember that we're focusing on evaluating the impact of the education, not a physician's performance per se.” That means evaluating every piece of the CME pie, from learner needs, to program design and content, to the activities themselves, to the facilitator. “Did I make the right call in choosing that design for those learner needs? Was the content meant to address knowledge, skills, or attitude? Was the activity a lecture or interactive? Was the activity a one-time event, or spread out over time? We often see people skip over this triage and go straight to the outcomes,” said Murray. “But without knowing which aspects worked toward closing those gaps and which didn't, we don't know how well the education actually worked.” Another thing to think about: “If your program was designed for family physicians, but you evaluate at a specialty level, you'll have a problem,” said Murray. As one audience member commented, “It's looking to see if you closed the gap you identified in the needs assessment.”

As for how and when to collect evaluation data, consider that you might have to do a series of measures, starting with the immediate post-activity evaluation and perhaps following up in three months, or six months, or a year, again depending on your needs assessment and objectives. You can collect data in numerous ways, from surveys, interviews, and focus groups, to practice audits, chart reviews, self-assessments, and literature reviews. Also, you can measure outcomes against existing data from hospitals, healthcare systems, medical education companies, and specialty societies. Just make sure the methods you use line up with the type of outcomes you're seeking to measure. For example, if you were looking to change perception or opinion, useful quantitative methods might be participation data and a satisfaction index. For qualitative measures, you can use focus groups and post-activity structured interviews. If you want to measure results in patient care and health status, then infection rates, readmission rates, and morbidity might be the best quantitative measures to use, along with quality-of-life interviews for the qualitative measures.
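A measurement plan along those lines might be laid out as follows. The pairings and intervals follow the examples above; the structure itself is a hypothetical sketch.

    # Hypothetical sketch: an evaluation plan pairing each outcome type with
    # measures and follow-up intervals, following the examples described above.

    EVALUATION_PLAN = {
        "perception or opinion": {
            "quantitative": ["participation data", "satisfaction index"],
            "qualitative": ["focus groups", "post-activity structured interviews"],
        },
        "patient care and health status": {
            "quantitative": ["infection rates", "readmission rates", "morbidity"],
            "qualitative": ["quality-of-life interviews"],
        },
    }

    FOLLOW_UP_MONTHS = [0, 3, 6, 12]  # immediate post-activity, then follow-ups

    for outcome, measures in EVALUATION_PLAN.items():
        for month in FOLLOW_UP_MONTHS:
            print(f"Month {month}, {outcome}: "
                  f"{measures['quantitative']} + {measures['qualitative']}")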

Once you have the outcome data, what do you do with it? Don't just let it sit there, said Hayes. Use it internally to justify costs, share summarized results with stakeholders (including the learners), and submit aggregated results for publication to build additional credibility for your department.

Eliciting Emotion

As one example of an outcome measure, the workshop organizers kept copies of the learning contracts participants filled out, which they intend to use to follow up with participants and see how they are progressing. AXDEV's Murray said, “Just having that reminder might cause them to reflect again.” Two questions showed up frequently on the contracts: How do I get others in my environment to understand the new world of CME, and what do I need to do to stay relevant? Participants also wanted to learn how to make the link between the evidence of need and the best way to address it, be it via e-learning, a workshop, a lecture, or a performance-improvement initiative.

Murray notes that those who scored the workshop lower on the immediate post-conference evaluation often had the most in-depth and active items on their learning contracts. Too many providers think the evaluation accurately reflects what attendees learned, says Murray, but evaluations are more likely to reflect how the participant felt about the activity. “You have to remember that what you're measuring in the evaluation is emotion; in the learning contract, you're forcing them to apply what they learned to what they do. That's where you get the more reliable information about the value of a workshop, because it shows they were intellectually engaged and they were planning to do positive things. Sometimes, you have to create disequilibrium to provoke adult learners into questioning and challenging their beliefs, and that makes people uncomfortable, because no one likes to admit they don't know something. That emotional reaction might show up as a negative on the evaluation, but it also likely would be a positive when you look at the actual outcome of what they did with what they learned. That negative reaction is a healthy part of the cognitive process.”

CME's Role in Maintenance of Certification

While in the past physicians needed to receive board certification only once during their careers, under today's evolving maintenance of certification (MOC) process, doctors certified by the American Board of Medical Specialties' 24 member boards need to demonstrate competence for as long as they are in practice. During his keynote address at the Critical Skills for the New CME Paradigm workshop, Sheldon Horowitz, MD, senior vice president, American Board of Medical Specialties, Evanston, Ill., explained why MOC is necessary and what opportunities it offers for CME providers.

The heart of the MOC process is how physicians actually perform in practice, Horowitz said. “It's not enough that physicians know how to treat diabetes, but that they actually are treating patients with diabetes correctly. The whole MOC [system] is a performance improvement model.” While MOC is designed to improve individual physicians, “We have to look at the systems we're in and improve them as well,” he said.

Under the four-part MOC system, physicians must maintain their professional standing, participate in lifelong learning and self-assessment, demonstrate cognitive expertise, and undergo continuous assessment of their performance via Web-based education/improvement programs, databases, practice profiles, and surgical logs. While the boards will create the criteria and monitor physicians' progress, most of the continuous assessment activities will be developed and maintained by the specialty societies, as long as the activities meet the boards' criteria, he said.

To fulfill the lifelong learning and self-assessment requirements, physicians will need to obtain a certain number of CME credits, an average of 25 to 30 credits per year, and half of those must be specialty-specific, he said. Those CME activities can be offered by specialty societies or by other parties that meet the board's criteria. The boards also “intend to move from traditional CME to more interactive activities,” said Horowitz.
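In concrete terms, the credit requirement Horowitz described might be checked like this. The 25-to-30-credit annual average and the half-specialty-specific rule come from his talk; the function and its exact logic are assumptions, not board policy.

    # Hypothetical sketch of the lifelong-learning credit check Horowitz described:
    # roughly 25 to 30 CME credits per year on average, half of them specialty-specific.

    def meets_cme_requirement(total_credits: float, specialty_credits: float,
                              years: int, annual_target: float = 25.0) -> bool:
        """Check the average annual credit count and the specialty-specific share."""
        enough_overall = total_credits / years >= annual_target
        enough_specialty = specialty_credits >= total_credits / 2
        return enough_overall and enough_specialty

    # Example: 160 credits over five years, 90 of them specialty-specific.
    print(meets_cme_requirement(160, 90, 5))  # 32 credits/year on average -> True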

One participant asked whether there was a role for nonspecialty society CME providers in the MOC process. “Look at the boards' Web sites to see what they're doing, then see what you can do to help build on what they are doing,” Horowitz suggested. “We need to have fewer silos. We need to work with real-world stuff.”

Docs were initially very unhappy with the move toward the MOC model. “They were mewling and puking when we first started this,” said Horowitz. But now that good educational activities have been developed by organizations including the American Academy of Family Physicians, the American Board of Internal Medicine, the American Academy of Pediatrics, and the American Board of Family Medicine, to name just a few, physicians are starting to give positive feedback.

The next steps, he said, would be for accreditors, certifying boards, and state medical boards to promote CME that leads to improved outcomes, and work together to reduce redundancies in the CME, MOC, relicensure, pay-for-performance, and quality improvement systems. “It's a more complicated and intense form of CME than we're used to, and it will entail linking with other organizations in the system to make it work effectively.”

CME's Next Steps: PI and POC

“We want to get away from education for education's sake,” said speaker George Mejicano, MD, MS, assistant dean and director of CME, associate professor of medicine, University of Wisconsin School of Medicine and Public Health, Madison, at the Critical Skills for the New CME Paradigm workshop. “Education is one of the things people do not want to get their money's worth for. They just want to know what they need to know to pass the test. Performance improvement will change that.”

During a panel on the sea changes in CME, Mejicano and the other panelists updated participants on performance improvement (PI) and point-of-care (POC) CME and their relevance to health outcomes. PI and POC CME are now part of the American Medical Association's PRA credit system, which is, as panelist Steve Passin, president, Steve Passin & Associates, Newtown Square, Pa., said, “the first initiative not based on seat-time credit. There are five credits associated with each stage, and a bonus of five credits if a physician completes all three stages.”

The three stages of PI are: 1) identify a gap in physicians' performance compared to national standards; 2) implement interventions designed to close that gap; and 3) re-assess to determine whether the interventions did in fact close the gap.
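Putting Passin's figures together, a physician who completes all three stages earns 20 credits: five per stage plus the five-credit bonus. A minimal sketch of that credit math, assuming credit is claimed per completed stage:

    # Sketch of the PI credit math Passin described: five credits per completed
    # stage, plus a five-credit bonus for completing all three stages.

    CREDITS_PER_STAGE = 5
    COMPLETION_BONUS = 5
    TOTAL_STAGES = 3

    def pi_credits(stages_completed: int) -> int:
        """Return the credits earned for a PI activity."""
        credits = min(stages_completed, TOTAL_STAGES) * CREDITS_PER_STAGE
        if stages_completed >= TOTAL_STAGES:
            credits += COMPLETION_BONUS
        return credits

    print(pi_credits(2))  # 10 credits for two completed stages
    print(pi_credits(3))  # 20 credits for the full three-stage cycle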

The panelists provided a checklist for initiating a PI activity that included questions such as: Is there a national standard you can use to establish the best practices against which to measure individual physicians? Are you trying to change attitude, skills, knowledge, or systems? Because PI is still being developed, the CME provider has control over which guidelines or standards to use in cases where more than one exists, said Mejicano. As keynote speaker Sheldon Horowitz, MD, senior vice president, American Board of Medical Specialties, Evanston, Ill., said, “Don't let perfection get in the way of progress.” Passin added that, because this type of initiative “takes a lot more energy than a one-time activity, pick issues that are important to focus on rather than trying to develop PI activities for everything.” Also, to keep it doable, you don't have to do everything yourself; Mejicano suggested picking partners who have resources you don't, so that you can expand into additional areas.

Point-of-Care Pointers

Point-of-care learning, another piece of the new CME, is not separate from practice the way traditional CME is, but embedded in it. The goal, the panelists said, is to “train physicians to look for information and recognize when they need to know more.” But after the panelists walked participants through some examples of Internet-based CME, which allows physicians to earn credit for describing a clinical question, reviewing clinical sources, and evaluating how they applied what they learned to their practices, some doubts remained. How do you know which databases are acceptable to use as source materials? If a physician's initial question isn't focused correctly, or if they just skim the abstract instead of reading the whole article, couldn't it actually increase medical errors? Mejicano answered, “Is it perfect? Absolutely not. Is it a step in the right direction? Absolutely yes.”
