The push is on to determine if the copious dollars expended to fund CME programs nationally actually translate to better patient care. The buzz phrase in the CME world these days is “outcomes measurement,” i.e., the role continuing medical education plays in improving patient outcomes — if any.

Some naysayers argue that it's impossible to derive any direct outcomes measurement from CME programs. They insist that there are too many variables along the way to accurately predict whether an educational program has affected a physician's practice, which in turn affects patient outcomes. And they aren't wrong — at least in terms of the way CME activities are currently conducted. That isn't to say that outcomes measurement of CME activities is impossible. It is possible. However, continuing medical education providers will need to change the way they think about CME in some fundamental — and probably initially uncomfortable — ways.

If you want real outcomes measurement for your programs, read on. Outlined below are 10 steps to successful, real-world application of outcomes measurement. These principles are applicable to hospital CME programs, but could be employed by large-scale organizations if they act as “thought leaders” and set larger (state or national) goals, and if they work with professional performance improvement organizations.

  1. Decide what you're measuring.

    If you can't articulate what you need to improve, how can you measure it? Talk to your Performance Improvement (PI) team to see what has been quantitatively shown to need improvement in your facility. Keep abreast of the current literature using the resources of your medical librarian. Target areas of medicine where there exists both solid research and professional consensus. Use current standards and national guidelines to direct your efforts.

  2. Set goals and objectives.

    Good PI people will be able to help you set a statistically measurable goal and judge how long it will take to meet it, such as: “In the next 12 months, we will increase by 5 percent the number of physicians who give antibiotics appropriately after knee surgery based on the XYZ national guidelines.”

    Some potential objectives for our example might be:

    • Review evidence-based medicine for antibiotic prophylaxis in knee surgery.

    • Outline specific recommended practices for administering antibiotics after knee surgery.

    Given these two objectives, you can flesh out each one. Under the evidence-based medicine review, specify the studies you want discussed and the outcomes of each.

    Under specific recommended practices, you can detail how you want your physicians to actually practice medicine at your facility. The preceding statement is bound to raise some eyebrows at the very least, and may provoke consternation and even outrage among CME educators and physicians alike. The concept of dictating practice is a departure from most current CME programs, in which a topic is discussed and the participants are left to themselves to form an opinion about how to interpret and integrate the information provided.

    There's certainly room in medicine for opinion, but when it's known and widely accepted that there's an evidence-based treatment regimen available for a specific and defined problem, physicians should bow to the evidence. They should leave opinion and conjecture out of the equation. Anything less is unscientific, and anything unscientific may be construed as questionable practice, if not outright quackery.

  3. Identify an acceptable performance level.

    Some doctors will already meet or exceed the standard being set. Some may lag far behind. Performance Improvement software can help you identify your “star performers,” that is, those who excel, as well as your underperformers. More on star performers in No. 8.

    Use projections to determine what each underperforming individual's target score will need to be for the facility to meet its overall goal. For example, if your current rate on some measure is 70 percent and you're aiming for 75 percent, you'll need to decide whether to raise all underperforming doctors' scores by 5 percentage points, or to set a minimal level of performance and expect everyone to come up to it, for example, “All physicians will administer antibiotics correctly 75 percent of the time.” Or you may want to employ a sliding scale of improvement, such as “Those scoring at 40 percent will need to improve by at least 20 points in the first quarter, but those scoring at 60 percent will need to improve by only 10 points.”

    Realize that if you set the bar too high, your physicians may not be able to meet your goals. Therefore, make several calculations and projections to determine which method is most realistic and attainable.
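    To make the arithmetic concrete, here is a minimal sketch of the three target-setting schemes just described, written in Python. The physician names, compliance rates, and sliding-scale cutoffs are hypothetical, invented purely for illustration; your PI department's actual data and thresholds would take their place.

```python
# Hypothetical per-physician compliance rates, in percent, on one measure.
rates = {"Dr. A": 40, "Dr. B": 60, "Dr. C": 72, "Dr. D": 88}
facility_goal = 75  # facility-wide target, in percent

def uniform_raise(rate, points=5):
    """Scheme 1: raise every underperformer by a fixed number of points."""
    return min(rate + points, 100)

def common_floor(rate, floor=75):
    """Scheme 2: expect everyone to come up to the same minimum level."""
    return max(rate, floor)

def sliding_scale(rate):
    """Scheme 3: the further behind a physician is, the larger the required gain."""
    if rate < 50:
        return rate + 20  # e.g., those at 40 percent must improve by 20 points
    if rate < 70:
        return rate + 10  # those at 60 percent must improve by only 10 points
    return rate           # those already near the goal hold steady

for name, rate in sorted(rates.items()):
    if rate >= facility_goal:
        print(f"{name}: {rate}% -- already meets the goal, exempt")
        continue
    print(f"{name}: {rate}% -> uniform {uniform_raise(rate)}%, "
          f"floor {common_floor(rate)}%, sliding {sliding_scale(rate)}%")
```

    Running each scheme against real numbers before you commit to one is exactly the kind of “several calculations and projections” advised above: you can see at a glance which scheme produces targets your physicians can realistically reach.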

  4. Groom a speaker to address your goals.

    This isn't the time for a speaker who breezes in with a “canned” presentation and pays mere lip service to the salient objectives you want covered. Everyone's experienced “The Expert,” who shows up with a slide show bearing a different program title from the one the two of you agreed upon in advance.

    He then proceeds to “do his own thing” and fails to cover even one of your objectives. Every indication shows that he's given the exact same speech a hundred times before: he “hat tipped” to your program just to land the speaking engagement, then changed his presentation not one iota from its original form.

    You must review the speaker's presentation ahead of time to ensure that it thoroughly covers all the objectives you've identified. Also, make sure all of the presentation can be accomplished in the time allotted. One rule of thumb — a speaker can usually comfortably cover 40 to 50 slides in a one-hour presentation.

    A “best case” scenario would be to view the presentation in its entirety before the lecture. This could be done in person, or through videotaping. I advocate for videotaping. For more on this, see item No. 8.

  5. Mandate underperformers' attendance.

    Talk of requiring physicians to attend educational sessions brings groans of protest from many quarters. Doctors have enough regulations and requirements already — why add to their burden? I disagree. Improving medical care isn't a burden. It's an ethical and professional responsibility.

    Your CME department has at its fingertips — compliments of your PI department's Performance Improvement software — the ability to identify individual physicians who don't meet your institution's goals. Target those physicians. Require underperformers to attend CME sessions aimed at improving scores on specific measures. Exempt those physicians who already meet or exceed your standards.

    Underperforming physicians should welcome opportunities to better their practice of medicine, especially in areas where they've been objectively shown to need improvement. If approached in a meaningful way, they'll embrace methods aimed at improving their patient outcomes. After all, it's what everyone strives for when they practice medicine.

  6. Eliminate “butt-in-the-seat” credit.

    Some will argue that professionals needn't be subject to testing methods to determine competence after an educational activity. However, if you don't employ a post-test, how do you know if your students have absorbed what's been taught?

    The concept of outcomes measurement is all about number crunching, and if you don't have any numbers to crunch from your students, how can you make any quantifiable statement about the effectiveness of your educational program?

    Some think that program evaluations serve this purpose. However, program evaluations are a subjective means of program assessment. Post-test scores are an objective means. Subjective measures target what the learner perceives that he or she has gotten from the educational experience, which may be a far cry from what he or she can actually implement in practice. Objective measures require students to demonstrate their knowledge.

    A post-test score gives tangible evidence of competence — or lack thereof. It raises the bar and establishes a set of performance expectations. In short, people pay more attention when they know they're going to be evaluated.

    Just because someone is sitting in a room full of physicians while an activity is going on does not guarantee that that person is paying attention. The vast majority of physicians are conscientious people who strive to get as much as they can from CME programs. But everyone has an off day now and again, and sometimes material is just plain hard to grasp. To save face in a room full of peers, professionals often don a mask of confidence but may secretly feel bewildered and uncertain about what they've just seen and heard.

    Unfortunately, there are inattentive attendees in every group. They can grab a quick cup of coffee and kibitz with colleagues in a far corner of the room, take in a smattering of information, and then successfully fill out a program evaluation to receive an hour of accredited educational credit. A handful of physicians are just there for the chitchat, the free food, and the Category 1 credit. Program content retention is optional.

    The bottom line: lose “butt-in-the-seat” credit. It has no place in a quantifiable environment.

    For those who think what I'm suggesting is excessively draconian, consider that PI professionals would likely go a step further: they'd advocate for both a pre-test and a post-test, insisting that no reliable measure of educational improvement can be determined without both components, a pre-test as a starting point and a post-test as an end point for measuring a specific learning behavior over a set period of time.
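    As a rough illustration of what that pre-/post-test bookkeeping might look like, the sketch below computes each attendee's gain and the group average. All scores and the passing threshold are hypothetical; this is a sketch of the measurement idea, not a prescribed implementation.

```python
# Hypothetical pre- and post-test scores, in percent correct.
pre_scores  = {"Dr. A": 55, "Dr. B": 70, "Dr. C": 80}
post_scores = {"Dr. A": 85, "Dr. B": 75, "Dr. C": 95}
passing = 80  # hypothetical passing threshold on the post-test

# The pre-test sets the baseline, the post-test sets the end point,
# and the difference is the measured educational gain.
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
mean_gain = sum(gains.values()) / len(gains)

for name, gain in sorted(gains.items()):
    status = "passed" if post_scores[name] >= passing else "needs re-education"
    print(f"{name}: {pre_scores[name]}% -> {post_scores[name]}% "
          f"(gain {gain:+d} points, {status})")
print(f"Mean gain across attendees: {mean_gain:+.1f} points")
```

    The per-attendee gains are the “numbers to crunch” that let you make a quantifiable statement about the program's effectiveness, and anyone below the passing threshold feeds directly into the re-education step that follows.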

  7. Re-educate physicians who fail.

    If you apply a post-test to an activity, almost inevitably some people will fail. The goal of all good educational programs is to ensure that the greatest number of people master the content presented; therefore, your aim is to do all you can to see that your people get the assistance they need to pass.

    The best educators know that their students don't fail them; they fail their students by neglecting to convey information effectively. One idea is to videotape the presentation. The beauty of video is that it can be stopped, rewound, and reviewed as many times as necessary to cover a point. Digital video can also be uploaded to the Internet.

    Videotaping serves two important functions. It allows educators to reach physicians who, for legitimate reasons, can't attend a sit-down CME lecture. Also, it can be used as re-education for anyone who fails a post-test.

    Most often, a review of educational material is all that's needed to boost post-test scores, and a full review of the presented material can be accomplished by watching the videotape.

    More on those who still fail to pass despite video review in No. 8.

  8. Use mentoring.

    There are legitimate reasons why physicians fail post-tests and don't meet performance goals. Medicine is complex and sometimes confusing. Either test results or individuals' scores on your performance goals will help you identify the physician who is floundering and needs help. Watching a repeat of a video may not suffice for these individuals.

    In such cases, whenever possible, pair underperformers with identified “star performers.” Ideally, let them round together so the struggling physician can see firsthand how to effectively approach the challenges of the standard. But if rounding isn't possible, at least have them communicate through meetings or even telephone calls or e-mails. Mentoring is a positive way of addressing continued failure to grasp material. It's interactive, flexible, and nonpunitive. It's about doctors consulting with doctors, a time-honored medical tradition.

  9. Investigate failure to meet performance goals.

    If a physician passes a post-test but his performance scores still fall below acceptable levels during the defined measurement period after the educational activity, an investigation is called for.

    The goal of the investigation is to discover the reason for the continued underperformance, whether it's related to practice, behavior, or unforeseen circumstances.
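    To show how such physicians might be flagged in the first place, here is a minimal sketch under assumed data: it cross-references post-test results with performance scores and singles out those who passed the test yet still fall below the acceptable level. Names, scores, and the threshold are all hypothetical.

```python
# Hypothetical post-test outcomes and measured performance rates (percent).
passed_post_test = {"Dr. A": True, "Dr. B": True, "Dr. C": False}
performance = {"Dr. A": 82, "Dr. B": 64, "Dr. C": 58}
acceptable = 75  # the acceptable performance level set in step 3

# Physicians who demonstrably know the material but still underperform
# are the ones who warrant investigation (steps 9 and 10); those who
# failed the post-test belong in re-education or mentoring instead.
investigate = [name for name, passed in passed_post_test.items()
               if passed and performance[name] < acceptable]
re_educate = [name for name, passed in passed_post_test.items() if not passed]

print("Investigate:", investigate)  # ['Dr. B']
print("Re-educate:", re_educate)    # ['Dr. C']
```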

  10. Address the cause of the failure.

    Use outside consultants to conduct an investigation. Outsiders are removed from internal politics, don't personally know the players involved, and are likely to be more objective and fair-minded. Use their skills to determine if the fault lies with:

    • the MD's practice of medicine, that is, how she actually performs in the field vs. what she knows “on paper,”

    • behavioral resistance to change, or

    • other factor(s) unrelated to practice and beyond the scope of the physician to rectify.



Putting ideas into practice can be difficult. It's possible to pass a written exam and either not retain or be unable to apply the information. Address practice issues by pairing strong physician performers with those in need of improvement during rounds, as described above.

What about those who successfully pass educational testing but still fail to come up to your established standard? Post-test scores can help get at the real cause of underperformance when it's behavioral in nature. If a physician has shown that he understands the educational material by passing a post-test, and the problem isn't a failure to retain or apply what he learned, the reason for not meeting performance goals may have shifted away from the realm of medical education to that of psychology.

Talk to your physicians. Some may stubbornly cling to old practice patterns despite all objective evidence to the contrary. They've heard every word your speaker said and understand her fully, but they simply don't agree with her.

Allow physicians who object to your standards to bring forward for consideration any evidence they may have in support of their alternative viewpoints. If their evidence doesn't persuade you to alter your standards, address behavioral issues with appropriate intervention, such as counseling, mediation, or disciplinary action.

In some instances, a physician's failure to meet a standard can stem from many causes that are completely unrelated to her practice of medicine. Sometimes, bad things do happen to good people. For example, a physician may have the misfortune of encountering a higher-than-average number of very ill patients, which can cause performance scores to appear lower than expected. Or someone on the allied health team may fall short, and his or her performance negatively impacts the physician's scores.

Again, outside consultants can help identify such instances. In cases where a physician has been determined to be without fault, risk management departments can employ root-cause analysis to determine the real cause of the failure and formulate possible systematic solutions to the problem.




Special thanks to Erin Donovan, director, quality and risk, Performance Improvement Department, Lowell General Hospital, Lowell, Mass.

Donna L. Beales, MLIS, is librarian/CME coordinator at Lowell General Hospital, Lowell, Mass. The opinions expressed in this article are those of the author, and do not necessarily reflect those of her employer.

WANTED: Best Practices

If you've developed successful initiatives for outcomes measurement, needs assessment, implementing the updated Standards for Commercial Support, or other aspects of CME, we'd like to hear about them. Please contact Editor Tamar Hosansky at (978) 466-6358, or send e-mail to thosansky@primediabusiness.com.