Does it make you crazy when you hear people say that continuing medical education doesn't work? You know that all the blood, sweat, and tears you shed designing educational interventions make a difference in what your docs do in their offices — they have to! But now there's increasing pressure from everyone, from commercial supporters and regulatory agencies to the media and the public, to prove that CME actually does change physician behavior and improve patient care. Like it or not, if you're not doing outcomes measurement now, chances are you will be soon.

But, you cry, I don't have the time, the staffing, the funding, or the expertise to measure the outcomes of my CME programs. That doesn't mean you can't do it, though. You just need to be selective, creative, and passionate about measuring the results of your work — and know where to go to get help.

START WITH BABY STEPS

While most CME providers would love to be able to measure the impact of their programs all the way up to the level of improving population health (see the “CME Outcome Levels” box), that's probably not a realistic expectation, says Donald Moore Jr., PhD, director, Division of CME at Vanderbilt University School of Medicine in Nashville, Tenn. Even getting to the level of proving patient health improvement as a result of CME may be beyond the realm of possibility for most providers. As Nancy Davis, PhD, director, CME, with the American Academy of Family Physicians in Leawood, Kan., points out, “Even if physicians do their best to learn and do the best practices, there are lots of variables on the patient side that are out of the physician's control.” What is realistic, she and Moore agree, is to measure performance changes and continuous quality improvement in practice.

Begin with something you're already doing — the post-meeting evaluation form — but change the way you do it. Most providers now evaluate outcomes at the satisfaction level (Level 2), says Moore. In all too many cases it comes down to what some call the “happiness index”: Was the food good? Was the room comfortable? How about that Starbucks coffee? While important from a meeting planning point of view, those types of questions don't allow a provider to measure how useful the content was in terms of changing physician behavior. “Most of us are suffering from what I call the ‘enrollment economy.’ We're competing with each other to attract enough registrants to make our numbers,” Moore adds. “That diverts a lot of our attention toward meeting planning, and leaves us little time to look at the important educational outcomes issues.”

“An evaluation form should tell us two distinct things doctors learned from this educational intervention, or tell us one thing that they will change in their practice when they leave the hall,” says K.M. Tan, MD, assistant physician-in-chief with Kaiser Permanente Medical Center, Richmond, Calif. “You want to know if there's been a change in knowledge, and if they plan to make changes in their practices. I've always had a distaste for evaluation forms where people do nothing more than run a line down the highest score. You learn nothing from it.”

Davis says that open-ended questions — e.g., what did you learn from this activity that you plan to implement in your practice? — can be more meaningful than a list of possible answers to check off. Adds Mark Schaffer, EdM, vice president of CME, professional postgraduate services, Thomson Healthcare, Secaucus, N.J., “If you're still asking ‘Were all the objectives of this program met?’ stop. That's not telling you anything. Instead, ask objective by objective. And make sure that at least some of the objectives are measurable.”

To get attendees to fill out the form, you can use a stick (if they don't turn them in, they don't get CME credit) or a carrot, says Elizabeth Sykes, CMP. Sykes, who is director of meeting services with the American Association of Neurological Surgeons, Rolling Meadows, Ill., led a course on evaluations at the Professional Convention Management Association meeting in January. Some incentives she's found to work well to increase the rate of return include drawings for free registration, airline tickets, and hotel stays for the next meeting or for other programs. Her rate of return increased by 30 percent the first year she offered incentives, and it went up another 10 percent the next year. She also says that participants are more willing to fill out the form if you keep it short — five to seven questions.

FOLLOWING UP

But don't stop there. Tell attendees that you'll be following up with a survey in three or six months. Schaffer says he asks people to address the envelopes themselves. “It saves my staff a bunch of time, and it serves as a self-selection tool, because those who don't want to participate won't fill out the envelope.” Give attendees the option of faxing or mailing their answers, make sure to provide postage-paid envelopes, and keep the form to one page. Follow-up questions should be along the lines of: “75 percent of you said you would change your behavior in X ways. Did you? If not, why not?”

While it can be expensive to survey a large number of participants, you can do a random selection and get a feel for what they're implementing in their practices, says AAFP's Davis. “Since follow-up of all participants is resource-intensive, we're looking at using a small sample to do more in-depth types of evaluations following CME activities,” she says. “With a small sample, one could even do sophisticated studies such as chart audits and patient surveys that might give very valuable feedback, but with less effort and expense than doing a cursory evaluation of everyone.”

To get people to participate, ask them if they'd be willing to be part of a study. “My experience has been that that's incentive enough,” says Schaffer. “If they agree to participate, they will, especially if you share the results with them.”

Davis says that while self-reported data may not be tremendously accurate, “at least it gets attendees thinking about the topic again. It keeps it on the front burner.” And when you find out why they didn't implement the change they said they would, it gives you information on what other kinds of interventions may be needed. Schaffer says, “If a number of respondents say they need more information, you could send a new journal article and tell them you'll be following up again in another couple of months.” That's one way to ease into multiple interventions — which studies show are more effective in changing physician behavior than one-time activities — and to show attendees that you really are interested in what they're doing. You also can kick it up a notch by sending case studies, which, according to numerous studies, are as reliable a measure of physician behavior change as chart audits, though they don't reflect what a physician actually does in practice as well as standardized-patient assessments do. (See the “Measurement Tools” box.)

But whatever method you use, start small. “Don't make the mistake I did when I started doing this work,” says Schaffer, who started out with grand ambitions of having all his teams doing outcomes for all their activities — and having it all be funded by grants. “Don't try to change everything. You'll just get frustrated. Start with one activity. If I were a small CME provider or medical school, I'd be satisfied if I could do follow-up on one activity the first year, maybe two the second year,” he says. Then let people know what you've done and the kinds of information you have, and watch it grow.

“Even if you're disappointed at the way they initially come out, self-report measurements are a good start,” Moore adds. “It has an infectious quality — a light will go on for your course directors and the physicians you're working with that this is what CME is all about.”

DON'T FLY SOLO — FIND A CO-PILOT

“Remember, you don't have to do it all yourself,” says Robert Kristofco, director, Division of CME, University of Alabama School of Medicine in Birmingham. He counts himself lucky to be working with Linda Casebeer, PhD, University of Alabama School of Medicine's associate director, division of CME, and an expert in outcomes measurement. “I knew where I wanted to go, but now we're doing research in ways I never could have foreseen,” he says. If you don't have an expert on staff, look at the other arms of your organization for potential partners.

“Partner with anybody who's doing healthcare outcomes in your state, your county, or your own institution,” suggests Harry Gallis, MD, vice president for regional education with Carolinas HealthCare System, and director, Charlotte Area Health Education Centers, Charlotte, N.C. He works with his hospital's performance improvement committee to pinpoint patient care indicators. The CME department at Kaiser Permanente Medical Center in Richmond, Calif., also partners with its quality improvement and data-tracking arms to measure improvements.

Sharp Healthcare in San Diego, Calif., expands its reach with a software tool called MedAI, for Medical Artificial Intelligence. According to Howard Robin, MD, Sharp's medical director of CME, Sharp is one of 250 healthcare systems participating in a pooled database that includes acuity-adjusted data on physicians' use of laboratory, X-ray, pharmacy, and other areas. “We can compare different doctors at one hospital, compare our five hospitals with each other, and compare our individual hospitals and health systems to the other 249 health systems that participate in the MedAI database,” he says. “We can look at individual physicians, design educational interventions, and track results.”

If you don't have access to a data-tracking department internally, find out who in your environment might have the data you need, determine whether their goals are consistent with yours, and come together to plan a CME activity that includes outcomes measurement. Moore suggests developing partnerships with your state's quality foundation. Kristofco says his office currently is working with the Alabama Quality Assurance Foundation, that state's peer review organization, to develop interventions that improve practitioners' outpatient care performance for diabetes. Since the AQAF has access to Medicare data, “we don't have to spend a lot of money doing chart audits and other measures,” says Kristofco.

And you can go further afield. For example, Kristofco's group partnered with Aetna Insurance Co. on a program to measure the outcomes of a chlamydia intervention, using the insurance company's data to provide feedback to the physician population. “Aetna's interested because it's to their benefit to improve screening rates and reduce misdiagnosis levels,” he says. “It's a matter of knowing whom to go to, and the politics involved in getting something done.”

While pharma companies have data-tracking resources they can make available to providers, and even statisticians who could help you decipher the data, they probably aren't the best resource for help in outcomes measurement because of their vested interest in tracking their own products. However, Moore has worked with some companies that were willing to share their own data about prescribing behaviors. “They were interested in looking at changes in prescribing habits across the board, not so much just their own product. A large proportion of reps I've dealt with have been aboveboard when we get to this level. Don't let your guard down, though. Be careful about whom you deal with and how you deal with them.”

BUT WHO'S GOING TO PAY FOR IT?

Your first impulse may be to turn to commercial supporters for outcomes research funding, but you may find yourself walking a fine ethical line if you do, says Moore. “Commercial supporters also are interested in behavior change — they want physicians to begin to prescribe their drug instead of their competitors'. So immediately you get into that ultimate CME ethical issue: If the goal of a CME activity is to change behavior, does that mean changing behavior to increase prescriptions of the supporter's product, or to do what's best for the patient? Of course it's the latter, but it gets a little tricky.”

Adds Davis, “Frankly, the pharmaceutical companies don't seem to be interested in helping us measure the kind of outcomes we're looking for. They're interested in measuring prescription numbers, which they already do on their own.” Says Schaffer, “The problem is that the money resides in their marketing departments, and outcomes studies don't seem to translate to what they think marketing research is. Their idea of tracking still is: How many people did we get? Providers may be trying to measure beyond just seats in seats, but they meet a lot of resistance from supporters, especially their local reps, who think the more people they get the message to, the better the impact, even though we know that smaller, more interactive groups lead to better outcomes.”

Kristofco, who also is involved in a commercial CME outcomes measurement enterprise, has worked with several pharma companies that are interested in comparing the effectiveness of different kinds of educational interventions. But, while they're interested in this kind of output, “they're still a little reluctant to support it financially,” he says, because of today's critical environment. “Some in industry are concerned that it will look like pharma's using outcomes measurement to show how they can influence education.” The logical solution is for CME providers to build outcomes measurement into the program's overall budget, and place a priority on it so it's not the first thing to cut when money gets tight.

BEYOND PHARMA FUNDING

But there's also a broad spectrum of support options beyond pharma companies, depending on your interests and your ability to develop proposals that appeal to potential funders, says Kristofco. Look for other entities that may have a stake in measuring improvements in your topic areas. While it may take some time and effort to ferret them out, there are lots of small foundations that may be interested in your efforts.

Says Schaffer, “If you're doing a program on Alzheimer's, it would be in the interest of an Alzheimer's foundation to see if what you're doing is having some effect. If you went to them with a good, structured proposal, I think you'd have a good shot at getting some funding.”

The federal government also is interested in learning how to improve aspects of the healthcare system, including the dissemination and adoption of guidelines, and quality improvement. The National Institutes of Health, the Agency for Healthcare Research and Quality, and the National Heart, Lung, and Blood Institute are among the many government agencies that have provided grants for healthcare outcomes measurement studies. Kristofco says that these days there's also a fair amount of money from government sources available for measuring the impact of bioterrorism education.

GET ON BOARD — THE TRAIN'S LEAVING

The time to get started is now. The push toward outcomes measurement, as well as practice- and evidence-based medicine, won't coexist happily with the old “seats in seats” model of CME. A few examples: The American Medical Association recently took the word “hours” out of its credit statement, and the AAFP is considering doing the same. “We may not have figured out how it's going to work yet, but we're working toward a whole new metric for CME,” says Davis. “Doing point-of-care learning, individual-focused activities, and quality improvement projects in practice could add up to thousands of hours, so the old seat-time measurement isn't going to work anymore.”

“We're just beginning to see the movement to integrate CME, quality improvement, and evidence-based medicine,” Davis continues. While some managed care organizations already are measuring quality improvements in practice, they're not yet linking those measurements to CME. “If we can take the best of what the managed care community is doing in quality improvement and link that into educational interventions using evidence-based clinical medicine, that's going to be the key to really improving outcomes,” she says. Adds Kristofco, “Outcomes is a way to begin to cross the chasm between where some of us are and where we would like to be.”

CME Outcome Levels

Level 1: Participation — how many attended?
Level 2: Satisfaction — did they like it?
Level 3: Performance — did behavior change?
Level 4: Patient health — did it improve?
Level 5: Population health — did it improve?

Source: Don Moore, adapted from Kirkpatrick, 1998; Walsh, 1984; and Dixon, 1977

Measurement Tools

Here's a look at just a few of the many ways to measure CME outcomes:

SELF-REPORT THROUGH EVALUATIONS: Asking attendees what they learned and how they planned to (or how they did) use what they learned through immediate, post-meeting evaluation forms and follow-up mailings, e-mailings, faxes, and telephone interviews

Pros: Easy to implement, relatively inexpensive

Cons: Not very reliable, can be difficult to get a significant number of responses

CASE STUDIES: Presenting attendees with a case study related to a specific practice area, both as a pre-test and as a post-meeting evaluation. Can be done via telephone, fax, e-mail, or mail

Pros: While still a form of self-report, studies have found it to be reliable in terms of predicting physician behavior; can be as cost-effective as evaluations

Cons: Need to have expertise to design an effective case study; can be difficult to get responses

CHARTS/PATIENT CARE RECORDS: Measuring baseline performance and post-meeting behavioral improvement by looking at attendees' patient care records

Pros: Highly effective form of evaluation, especially when the records are available in database form

Cons: Privacy issues can be an impediment; can be difficult to obtain outside of hospitals and large healthcare systems

STANDARDIZED PATIENTS: Objective structured clinical exams in which physicians visit stations and examine standardized patients presenting with a particular disease. Docs have to come up with the right answer before they can move on to the next station.

Pros: Highly effective form of evaluation; allows CME provider to observe physician interacting with actors posing as patients

Cons: Requires a lot of time and resources to develop and implement

What Your Peers Are Doing

Looking for examples of what some of today's leaders are doing in terms of outcomes measurement? The Alliance for Continuing Medical Education publishes a compendium of exemplary practices, based on data from the Accreditation Council for Continuing Medical Education. For more information, contact the Alliance at (205) 824-1355, e-mail acme@acme-assn.org, or visit www.acme-assn.org.

Call for Best Practices

If you build it, they'll probably come — but will they learn anything worth measuring? “If you're running a traditional CME program that's kind of generic, lecture-based, and altogether nondescript, your odds of changing behavior are slim, based on what the research has to say,” says Robert Kristofco, director, Division of CME, University of Alabama School of Medicine in Birmingham.

That's why it's important to design a program on the front end for back-end impact. “That's the part of outcomes no one talks about. Anyone can measure them, but the literature tells you how it's going to come out if you use traditional methods,” he adds. And who wants to measure that? As Mark Schaffer, EdM, vice president of CME, professional postgraduate services, Thomson Healthcare, Secaucus, N.J., says, “Would it be scary for providers if outcomes measurements showed that their CME is not having any effect? Absolutely.”

In the June issue, we'll explore ways to design an educational intervention that will improve your chances of getting positive, measurable results. If you have examples of successful outcomes initiatives, contact Executive Editor Sue Pelletier at (978) 448-0377, or e-mail her at spelletier@primediabusiness.com.