THE RECENT FOCUS on outcomes research and evaluation has created a dilemma among CME providers and others involved in the process. On the one hand, most would agree that the use of electronic and traditional techniques to determine whether CME activities have changed healthcare practitioners' diagnostic, management, or treatment behaviors — with the goal of improving patient care — is central to the learning experience. The benefits would seem to be well worth the effort. Information gained in the process can be used to improve future events; to identify instructional needs and deficiencies; to strengthen content and performance; to acknowledge peer-to-peer and regional differences; to suggest the need for advice, counsel, and behavior change; to promote use of and adherence to evidence-based medicine and practice guidelines; and to evaluate the effectiveness of invested funding.
On the other hand, some in the pharmaceutical industry seem reluctant to support outcomes evaluations at the required levels. During early program discussions, outcomes typically are recognized as required components of CME planning, implementation, and improvement, and it is agreed they are needed to justify present and future program funding. Outcomes data are also used to validate program independence and balance, as well as company compliance with current guidelines.
As CME planning progresses, however, outcomes evaluations often lose support among funders, particularly when they recognize the added costs. The immediate result is the use of substandard or inadequate evaluation instruments. In some cases, supporters prefer to use more traditional outcomes questionnaires that accomplish little more than determining whether the meeting room was comfortable.
Make Outcomes Cost-Effective
If cost is a determining factor, providers, supporters, and vendors who offer measurement systems and analyses should collaborate early in the event-planning process to assure that measurement tools are limited to specific “need to know” elements of the CME program.
Computer-assisted outcomes methodology presents almost endless possibilities, and the temptation is to believe that more is better. Mega-analysis is not necessarily a good thing. At the outset, determinations should be made on the type and quantity of outcomes analyses best suited to the activity's learning objectives and behavior change goals.
It also is prudent, early in the process of crafting the appropriate model for outcomes measurement, to agree on the specific uses to be made of the data obtained. Often, data collection is too comprehensive and sophisticated and exceeds measurement requirements; while the provider may make use of much of it, supporters often do not. The idea is to use outcomes data to improve the CME learning experience and, ultimately, patient care. Data housed in unopened computer files or dusty file cabinets serve no purpose. Rather, outcomes data should be a rich source of information for all involved in a particular CME enterprise.
Collaboration among all parties will produce a cost-acceptable measurement: one that yields data matched to the learning objectives, data that will be used by provider and supporter alike to improve the CME experience, uncover needs, and satisfy learner expectations. All will benefit when outcomes measurements are more closely aligned with real needs and expectations at an acceptable cost.
Robert F. Orsetti is assistant vice president, continuing education, University of Medicine & Dentistry of New Jersey in Newark. Orsetti, a 24-year CME veteran, is a member of the AMA's National Task Force on CME Provider/Industry Collaboration. Contact him at (973) 972-8377 or send e-mail to email@example.com. For more of his columns, visit mm.meetingsnet.com.