For CME providers, the question has moved beyond “Should we do educational outcomes measurement?” We know that we should — and even that we must. Initially, you may be satisfied with self-reported data; increasing numbers of CME providers are now conducting pre- and post-activity tests, as well as follow-up surveys at three-month intervals. With further experimentation, you may want to move to a higher level of outcomes measurement.

Three years ago, we at the Veritas Institute for Medical Education, Hasbrouck Heights, N.J., used company resources to implement our first educational outcomes measurement (EOM) project. Since then, we have refined our approach and increased both the complexity and the scientific rigor of our EOM. Today, many of the educational grants we receive include a line-item budget specifically for outcomes measurement.

Since previous articles in Medical Meetings have focused on the basics of outcomes measurement, we are concentrating on more advanced measurement. Here are some of the lessons we have learned.

Refine Your Research Questions

First, review the needs assessment and learning objectives for the CME activity you have selected. Based on this information, brainstorm and write down research questions. Pick the research question you think is most realistic and refine it. When creating survey content, you may be tempted to squeeze too much into each question. Make your survey questions clear and concise so respondents are not confused by an ambiguous or multi-layered query.

Here are examples developed by Jeff Frimpter of the Veritas Institute.

Example of a Bad Survey Question

Please rate your level of agreement with the following statement: Since participating in this activity, I am better able to screen and treat patients with severe cases of urticaria and it is easier for me to have effective communication with them about their condition.

Strongly Agree 5 4 3 2 1 Strongly Disagree

The above question asks participants to rate their own improvement — and few people, if any, are going to admit, in a professional setting, that they are not better able to perform key tasks in their clinical practice. Also, there are actually three separate questions within the provided statement — be sure to ask participants only one thing at a time. For these reasons, the results from this question would be completely useless.

Furthermore, social science survey standards recommend that Likert scales read from left to right, beginning with disagreement and the lowest numeric rating. We use even-numbered Likert scales (such as 1 to 6), which offer no midpoint, to push participants who might otherwise ride the middle of the road to commit to one side or the other.

Example of a Good Survey Question

Please rate your level of agreement with the following statement: Patients with severe cases of urticaria should be considered for management with steroids.

Strongly Disagree 1 2 3 4 5 6 Strongly Agree

This survey question is a concise, direct assertion about patient care in a specific scenario. There is no confusion, and multiple items are not packed into the statement. A finely crafted survey question can provide a wealth of information. From a technical standpoint, the even-numbered scale leaves no room for a neutral response, and it runs from disagreement to agreement, in line with accepted research practice.

Consider Control Groups

In addition to surveying participants, you may want to add control or comparison groups. For example, if you conduct an immediate pre- and post-activity survey at a live satellite symposium for a large number of specialists, data collected from a pre-activity control-group survey of nonparticipant specialists will provide a basis for comparison and research validation.

An ideal control group consists of individuals who have not participated in the activity being measured but are matched to the test group on key characteristics such as specialty, practice type, age, and zip code. Most often, control groups are matched on specialty and zip code, because the other information can be more difficult and costly to obtain.

Another tool you can try is a comparison group, which can be useful for multi-component activities. Members of this group participate in only some of the components, in contrast to those who participate in all of them. Although the comparison group is not a control per se, its data may allow you to control for an individual component of the program. For example, suppose 100 CME participants complete both a CME CD-ROM and a CME newsletter. Another 100 participants are divided so that 50 are given only the newsletter and 50 are given only the CD-ROM; this is your comparison group, controlling for each enduring material individually. An additional 100 people who complete the surveys but have no knowledge of the CME CD-ROM or newsletter would be the control group.

Selecting a control group and deciding on an appropriate “N” can be complicated and should be discussed with a consulting statistician. CME providers can purchase a list of randomly selected individuals; such a list is commonly drawn to match characteristics of the activity’s target audience (e.g., specialty).
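To give a sense of what the statistician will weigh, a commonly used textbook approximation for the number of respondents needed in each group, when the goal is to detect a difference between two group means, is:

\[
n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\Delta^{2}}
\]

Here σ is the expected spread (standard deviation) of the survey score, Δ is the smallest between-group difference you would consider meaningful, and the z terms reflect the chosen significance level and statistical power. With a 5 percent significance level and 80 percent power, this works out to roughly 16σ²/Δ² respondents per group, before allowing for nonresponse. This is only a rough planning sketch; a statistician can adjust it for your actual design and expected response rates.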

Alternatively, a list generated internally from previous activities can be used. However, keep in mind that this control group could be biased: it may overrepresent individuals who tend to respond to surveys, or it may be skewed toward the kinds of CME content the provider has sponsored, and thus may not represent the group of physicians at large. Therefore, it is optimal to supplement the internal list with one randomly generated by a commercial source and to evaluate the survey responses from the two groups for differences. Overall, if you want to make assertions about a particular group of healthcare professionals, you must find ways to ensure that your control group truly represents them.

Make the Most of Your Results

Outcomes data and results, whether positive or negative, are most useful when they become part of a feedback loop for planning future CME activities. In effect, they become the needs assessment for your next activity. Thus, your planning committee members, course directors, and faculty will find the information very helpful.

In addition, commercial supporters and your educational partners will be interested in the results. In our experience, the most effective tool for communicating the results is a PowerPoint presentation. Commercial supporters especially appreciate receiving a copy of the slides so they can include them in their own internal reports.

Finally, sending outcomes results to participants can also reinforce learning and encourage them to reflect on their own awareness, perceptions, and behavior related to the CME activity topic.




Derek T. Dietze, MA, is executive director of CME and Harold I. Magazine, PhD, is president of Veritas Institute for Medical Education Inc., a CME provider in Hasbrouck Heights, N.J. For an outcomes measurement checklist, visit www.Veritasime.com.