MANY PROVIDERS think they don't have the time, money, knowledge, or skills to do the kind of outcomes measurement that shows how effective an activity was in changing participants' behavior and improving patients' health. Testing participants using standardized patients, in which actors portray patients with a specific condition for doctors to diagnose, is difficult unless you're near a medical school that can provide trained patient surrogates. And chart audits, another gold standard, can be time-consuming and costly, and can have potential Health Insurance Portability and Accountability Act ramifications.

A faster, cheaper, less invasive option is using case vignettes, said Lawrence Sherman, executive vice president, business development, Jobson Education Group, New York City; and Linda Casebeer, PhD, associate director, division of CME, University of Alabama School of Medicine — Birmingham, and associate director of Outcomes Inc. The two led a session on CME outcomes at the Alliance for CME's Annual Conference, held January 26 to 29 in San Francisco.

Research has found that physicians don't just pay lip service when answering questions about how they would treat patients — their responses to cases are good indications of what they'll actually do in a clinical setting, Sherman asserted. Case vignettes not only can keep CME participants awake and learning during the activity — they also can help providers measure the extent to which the education results in changes in behavior, especially when compared to results for a control group that didn't participate in the activity.

Using a Control Group

As an example, Casebeer described the outcomes measurement process she used at a pharma-supported, two-hour satellite symposium at a large medical conference. The goal was to improve the attending infectious disease specialists' management strategy for febrile neutropenia, which occurs when a patient has a fever and a significant reduction in the white blood cells needed to fight infections. After considering standardized patients and chart audits, the researchers decided to base their outcomes measurement on case vignettes. “We also asked them about the level of confidence they had in their answers,” she said. In addition, Casebeer sent the cases to a similar population of people who did not attend the session and compared the two sets of results. Both sets of cases were sent 30 days after the activity.

The results showed that participants were more likely than nonparticipants to treat according to the practice guidelines. They also were significantly more likely to choose the right empirical therapy for febrile neutropenia and were more likely to feel confident in managing this condition in children.
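For providers who want to quantify that kind of difference themselves, here is a minimal sketch in Python of how the two sets of vignette responses might be compared. The counts are purely hypothetical, and a chi-square test on guideline-concordant versus non-concordant answers is just one common choice, not necessarily the analysis Casebeer's team ran.

```python
# Hypothetical comparison of case-vignette responses from symposium
# participants versus a non-participant control group. The counts are
# made up for illustration.
from scipy.stats import chi2_contingency

# Rows: participants, controls. Columns: guideline-concordant, not concordant.
responses = [
    [34, 16],   # 50 participants, 34 chose therapy per the guidelines
    [21, 29],   # 50 controls, 21 chose therapy per the guidelines
]

chi2, p_value, dof, expected = chi2_contingency(responses)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Participants' responses differ significantly from controls'.")
else:
    print("No significant difference detected between the groups.")
```

With very small or lopsided samples, Fisher's exact test (scipy.stats.fisher_exact) is a safer alternative to the chi-square test.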

OK, so she got good results. But what if you do this type of research and the data shows that participants aren't behaving any differently than the control population? Sherman said, “This gives you a way to see what didn't work, then fix it for the next activity.”

How to Get Started

  • Where do you find a list of potential controls? Casebeer suggested sending the survey to the list you originally used to market the activity, minus those who signed up. But it is essential to make sure the demographics of the two groups are generalizable to the overall population of your targeted specialty.

  • How do you get meaningful response rates? Have someone do a power calculation, which computes the sample size required to detect a difference between your participants and controls (see the sample-size sketch after this list). Sending out the tests once should yield a 5 percent return rate; if you send them twice, you can expect to get about 10 percent back, Casebeer said.

  • How do you compare the knowledge base of the participants before the activity to that of the control group? Casebeer suggested doing both a pre- and post-test to get a better idea of where participants are in terms of knowledge and behavior, and whether any changes resulted from the activity.

  • Can you speed up the process? Sure, said Sherman. “Use an audience-response system to get the data right at the meeting.” That way you can measure learning at the beginning and the end of the activity; all that remains is to send the case vignettes to the control group.

  • Who's going to write the cases? Casebeer said, “We recruit faculty who are willing to write cases.” An attendee said her organization gets the course director to do it. The important thing, she said, was to make sure that, no matter who develops the case, it represents the best available evidence at the time. Also, Casebeer suggested posing the basic problem in different ways in the cases to keep the learning fresh.

  • How do you follow up: via mail, fax, or electronically? “We ask Alabama physicians what their preferred contact methods are,” said Casebeer. “So we send it to them the way they want.”

  • What if your staff doesn't know about statistics and research design? Casebeer suggested contacting local colleges and universities, where grad students often will do this part of the work for not too much money. But you can do it on your own and generally track the trends without much difficulty.
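To make the power-calculation point above concrete, here is a minimal sketch of a per-group sample-size estimate for comparing two proportions, using the standard normal-approximation formula. The adherence rates, significance level, power, and 5 percent return rate below are illustrative assumptions, not figures from the session.

```python
# Rough per-group sample size needed to detect a difference in
# guideline adherence between participants and controls.
from math import ceil
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n to detect p1 vs. p2 with a two-sided z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-sided test
    z_beta = norm.ppf(power)            # value corresponding to desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return ceil(n)

# Assume 65 percent adherence among participants vs. 45 percent among controls.
n_per_group = sample_size_two_proportions(p1=0.65, p2=0.45)
print(f"Completed surveys needed per group: {n_per_group}")

# With roughly a 5 percent return on a single mailing, work backward to
# estimate how many surveys to send per group.
print(f"Surveys to mail per group (5% return): {ceil(n_per_group / 0.05)}")
```

Working backward from the response rates Casebeer cited, a second mailing that doubles the return to about 10 percent roughly halves the number of surveys you need to send to collect the same number of completed cases.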



Have you been successful in measuring the outcomes of your CME activities? Do you have some tips to share? If so, please contact Sue Pelletier, (978) 448-0377, spelletier@primediabusiness.com.