Some attendees hailed a session called “Practical Strategies for Better Outcomes,” held at the Alliance for Continuing Medical Education Annual Conference in San Francisco in January, as the best session on outcomes they had ever attended. Presenters Carol Havens, MD, director of clinical education with Kaiser Permanente in Oakland, Calif., and Philip Bellman, MPH, a training and development consultant also with Kaiser Permanente of Northern California, took session participants through what needs to be done, soup to nuts, to achieve better outcomes for CME activities. They used real-life examples from activities Kaiser has measured. They even included a post-program assessment example that participants could take home and use as a basis for their own activities. Here are the highlights.

Move the Big Dots

The presenters explained why good outcomes begin with good needs assessment — if you don't know where you're going, how do you know when you've arrived? If you don't know where you started, how will you know if you've gone anywhere? You also need to link objectives to outcomes, use multiple interventions, measure outcomes in multiple ways over time, and use outcomes to identify future needs.

Outcomes measurement leads to better education, demonstrates the value of the CME office, and helps determine future education. And, they added, positive outcomes can lead to exemplary accreditation status with the Accreditation Council for CME.

Conducting outcomes measurements, and thus developing more effective education, can also help CME “move the big dots,” Havens said, including improving healthcare quality and patient safety, decreasing mortality rates, and creating positive change in a host of other high-profile population health areas.

Level 1 Limits

Citing examples from their own experience, Havens and Bellman explained the differences between the five levels of outcomes measurement. (See the sidebar, “The Five Levels of Outcomes,” below.) When they used a Level 1 outcomes measurement to assess the effect of a monthly one-hour videoconference, more than 90 percent of participants rated the program's quality as “good” or “excellent,” and a similar number said the activity was useful. Good results, right? But when they measured Level 2 outcomes (changes in knowledge, attitudes, or skills that reflect an intent to change) for the same program, only 48 percent to 74 percent said they either would change or were considering changing. Fifteen percent to 31 percent said they already use the behavior covered in the activity in their practice, and 3 percent to 23 percent said it didn't apply to what they do. (Percentages vary within categories because the researchers used several measures, including pre- and post-tests, skill observation, and commitment-to-change questions.) They also asked those who said they were going to change to list two things they intended to do differently as a result of the program.

For Level 1, “You'll usually get a 3.5 on a 4-point scale, and a 4.5 on a 5-point scale,” said Bellman. “I can pretty much predict exactly the results you'll get before the activity even takes place.” So, that's not very useful. While Level 1 “usefulness” ratings are somewhat helpful, they aren't predictive of what people will actually put into practice, they said. A Level 2 question that measures intent to change, though, is both useful and predictive, since intent to change correlates highly with actual change, Bellman said. The Level 2 results also showed much greater variation in responses, they noted.

Getting Real Results

Level 3 (self-reported behavior change) involves a follow-up assessment of implemented practice change. This level of measurement captures both the intended and unintended consequences of CME and can document the impact of CME on practice behavior, though it does tend to be subjective since it's self-reported. “We ask physicians if they changed something, and what they changed,” said Bellman. This can be done via Web or mail surveys, or by phone interview. Level 3 questions also can be asked at subsequent activities.

For one CME activity on managing obesity, 54 percent of participants had said they intended to make a change in their practice (Level 2). After one month, 45 percent said they measured body mass indexes “more frequently” (the Level 3 measure).

Level 4 kicks it up a notch by objectively tracking change in practice, using measures including quality, utilization, and patient satisfaction data; guidelines from sources including the Health Plan Employer Data and Information Set, the Joint Commission on Accreditation of Healthcare Organizations, and the National Committee for Quality Assurance; screening and diagnostic rates; and community public health data. While Level 4 measures can help assess needs and chart post-activity progress, they may not capture the breadth or complexity of new behaviors, and it can be hard to pick out individual data from that of a large practice group, they said. However, Level 4 can measure things like increased chlamydia screening and appropriate prescribing of asthma medications.

When they tracked outcomes from an in-person and Web-based CME intervention to improve patient-provider communication and increase the use of electronic patient care records, the presenters found that 86 percent of participants said they had changed their practice as a result (Level 3). In the Level 4 measures, they found that electronic charting had increased 92 percent, electronic prescribing had gone up 55 percent, and patient use of MD Web pages went up 100 percent.

The time frame for measuring change also is important, Havens and Bellman noted. Levels 1 and 2 are immediate measures, Level 3 outcomes assess change one to three months post-activity, and Levels 4 and 5 measure impact six to 12 months later, they said.

Going for the Gold Standard

Level 5 outcomes objectively measure change in treatment results or population health status. On the downside, these outcomes can be hard to measure, and they can be obscured by comorbidity or other factors, the presenters said.

Using sources like morbidity and mortality rates, incidence of secondary complications, and hospitalization and re-hospitalization rates, Level 5 measures can track things like a decreased risk of cardiac death, increased survival of HIV patients, and a decrease in smoking rates. You can use Level 5 outcomes to summarize change for key stakeholders, examine the intended and unintended effects of the activities, and use the data to assess the need for future activities. “Don't just document [change] and put [the results] in the CME files,” they warned. “Find out who the outcome data is important to, and let them know what the data is.”

They measured Level 5 outcomes from a CME program to improve hypertension control, which involved multiple interventions, including a regional videoconference. Right after the videoconference, 64 percent of participants said they intended to make some changes (Level 2); after six months, the objective data showed that hypertension control rates had increased 23 percent.

In another case that used live CME to improve the appropriateness of haloperidol orders for delirium and agitation, they found that 85 percent said they intended to change (Level 2); a six-month chart and pharmacy data review showed an 81 percent increase in compliance. In a third case, an in-person CME activity aimed at increasing the early diagnosis of domestic violence and improving intervention and prevention showed that 80 percent intended to increase their use of domestic-violence screening tools (Level 2), and domestic-violence diagnoses had increased 5.4 percent three months post-program.

The presenters recognized that doing high-level measurements for all programs just isn't practical, or even possible, for most providers. “We'd like all our programs to be Level 3 or above, but we do the best we can,” they said. To make outcomes measurement feasible, prioritize your resources and focus the measures on high-impact programs that address organizational needs. They even suggested that CME providers' long-standing strategy of proving their worth by holding more and more activities isn't working to their benefit. Do fewer programs, and make sure the ones you do are high-impact, they advised. “If you only do 10 [CME activities] a year, but they have a big impact on patient health, you'll prove your worth.”

Survey Stumbling Blocks

And there are some barriers to leap, they acknowledged. A big one is that docs don't tend to respond very well to surveys, so the presenters had a few suggestions for increasing response rates. While Kaiser does both electronic and paper surveys, they advised against relying on electronic surveys, because response rates for them are especially low. Don't survey every program, either, just the ones that matter, so your docs don't get survey fatigue. Also, when participants come to the next activity and are a captive audience, ask what they did with the previous activity's information. Another tip: Have your physicians send the request for information, because other docs are more likely to respond to someone they know. Keep the survey short, too. “It shouldn't take more than two to five minutes,” Havens said, and test it on colleagues first to make sure the wording is clear. Publicize the data as well, so physicians know that what they're doing matters, which creates an incentive to respond.

One interesting tip was the “lumpy envelope” trick: Put a piece of candy in the envelope along with the survey; doctors will open it just to see what the lump is. Once they've opened it, they're at least somewhat engaged, and you have a better shot at getting a response.

One audience member pointed out that it's hard to link outcomes to a specific educational intervention when so many other factors may have contributed to the behavior change. The presenters acknowledged that this is a problem, and that it's hard to isolate the impact of each intervention. But, since what you really care about is improved patient outcomes, it doesn't matter all that much. “If you can identify that an educational intervention is needed, you provide that intervention, and change happens, then you can rightfully say you were at least part of the solution,” said Havens.

The Five Levels of Outcomes

LEVEL 1 outcomes, or the “smile sheet,” rate the CME activity's quality, usefulness, objectives, presentation, faculty, and even the coffee.

LEVEL 2 measures a change in participants' knowledge, skills, or attitudes that reflects an intention to change.

LEVEL 3 is a self-reported change in clinician behavior or practice.

LEVEL 4 is an objectively measured change in clinician behavior or practice.

LEVEL 5 is an objectively measured change in patient health status.