Question: Our continuing medical education department wants to do a better job of evaluating how educationally effective our activities are. We use participant evaluation data to make this determination—do you know of other ways to do this? How do you evaluate an activity’s effectiveness?

This is something most CME professionals I speak with have wrestled with, because we all want to ensure that the education we provide is really getting across to our learners.
There are two major categories of evaluation data: quantitative and qualitative. Essentially, quantitative evaluations consist of rating scales, comparisons of scores before and after the educational intervention, and test results. The quantitative technique is by far the one most commonly used by CME providers. There are two reasons for its popularity: it's black and white (numbers don't lie), and it's easier to collect and analyze than other measures.

While qualitative measures are somewhat more difficult to obtain, they do a good job of capturing the educational benefit as participants perceived it, and in fact they were once much more widely used than they are now. Today we still commonly see one remnant qualitative question: “Did the education you received through this activity have an impact on the way you practice, and if so, how?”

The biggest advantage of using qualitative measures is that they give a sense of how participants actually viewed the activity. On the other hand, because it's human nature to tell others what they want to hear, participants may overstate the educational benefit they received.

Mix It Up
Our program uses a mix of both quantitative and qualitative measures to get a good sense of the educational benefits we're providing. We use statistical software in our performance-improvement CME activities to give us insight into what participants knew and how they practiced before our educational interventions. Statisticians can score open-ended answers against defined parameters, in addition to working with simple quantifiable answers. For example, yes-no questions can be scored as Yes=1 and No=2, allowing us to remove some of the subjectivity and bias from our data and get the clearest possible look at the educational benefit. The software also provides p-values and other statistical metrics.
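For providers who handle their own data, here is a minimal sketch, written in Python with the SciPy library, of the kind of scoring described above: coding yes-no answers numerically and comparing pre- and post-activity scores to obtain a p-value. The question wording, coding scheme, and sample responses are hypothetical illustrations, not our program's actual instruments.

```python
# A minimal sketch (illustrative data only): code yes/no answers numerically
# and compare pre/post rating-scale scores with a paired t-test.
from scipy import stats

# Hypothetical pre/post responses to a yes-no practice question,
# coded as described in the column: Yes = 1, No = 2.
coding = {"Yes": 1, "No": 2}
pre_answers  = ["No", "No", "Yes", "No", "Yes", "No", "No", "Yes"]
post_answers = ["Yes", "Yes", "Yes", "No", "Yes", "Yes", "No", "Yes"]
pre_coded  = [coding[a] for a in pre_answers]
post_coded = [coding[a] for a in post_answers]

# Hypothetical 1-5 self-rated confidence scores from the same participants,
# collected before and after the activity.
pre_scores  = [2, 3, 2, 1, 3, 2, 2, 3]
post_scores = [4, 4, 3, 2, 4, 3, 3, 5]

# Paired t-test: did confidence change significantly after the intervention?
t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)
print(f"Mean pre: {sum(pre_scores)/len(pre_scores):.2f}, "
      f"mean post: {sum(post_scores)/len(post_scores):.2f}, p = {p_value:.3f}")

# Simple proportion summary for the coded yes/no item.
pct_yes_pre  = pre_coded.count(1)  / len(pre_coded)  * 100
pct_yes_post = post_coded.count(1) / len(post_coded) * 100
print(f"'Yes' responses: {pct_yes_pre:.0f}% before vs. {pct_yes_post:.0f}% after")
```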

Not every CME program needs to use statistical software, however—the same kinds of evaluation can be done with more traditional scoring and statistical techniques. While this does take more time and effort, in the end it is definitely worth it. Outsourcing your statistical needs, which often can be done inexpensively, is also a viable option these days.

Upping the Return Rate

Many programs send out post-activity surveys at three and six months to see whether any of the learning “pearls” were incorporated into practice. One of the biggest problems with doing this is a low return rate for completed surveys. This is where the age-old art of nonelectronic communication can come into play: many program coordinators have had success simply picking up the phone and asking participants the same questions they were asked in the electronic survey. The Accreditation Council for CME may view this as a positive because it shows increased interaction between teacher and learner, and you will collect the evaluation data you need. If you focus only on the numbers, you end up ignoring a tremendous amount of data that may provide key information.

One bit of advice: Don't limit the types of data you collect because they are difficult to collate. Many outcomes reports these days have both quantitative and qualitative results sections. By taking some time to mine the data you've collected, you may find useful tidbits that can be used to evaluate not only your activity but your program as a whole.

Rick Kennison, DPM, MBA, CCMEP, has been president and general manager of the PeerPoint Medical Education Institute since 2006. He also is a vocal advocate for improving the CME industry as a whole, and he has presented at the Alliance for CME and the ACCME. E-mail him at rick.kennison@peerpt.com.
