EACH PHASE of the ongoing effort to perfect CME and its rules of governance creates subject areas that stir debate. Early on, we were captivated by a new lexicon of terms such as independence, objectivity, balance, bias, and scientific rigor. More recently, disclosure and conflict of interest were the themes du jour. Currently, return on investment (ROI), return on education (ROE), and outcome verification have taken center stage.

Reasonable Expectations

Two expectations are logical and clear. First, it is perfectly reasonable for those who fund CME to expect some measure of return on the money spent, whether in the form of increased product sales; more appropriate diagnosis, treatment, and disease management; or other outcomes.

Second, we have an obligation to attendees, faculty, and the public to demonstrate, if not conclusively prove, that CME imparts learning and produces behavior change that benefits patients. However, while it is proper for CME providers to conduct evaluations that measure attendees' learning, application of knowledge, and improved patient care, tracking participants' increased prescribing after a CME event is an activity best left to the supporting company.

Beyond Seats in Seats

It is no longer acceptable in CME to measure success by confirming that the meeting room chairs and temperature are comfortable, or by rating speakers on sliding scales of performance quality. Instead, the effectiveness of CME activities should be measured by pre- and post-event evaluation of knowledge levels tied directly to the specifics of the content presented. Providers need to ask: Has the CME experience contributed to the knowledge and abilities of the listener, and, if so, is the new information directly applicable to patient care and practice behavior?

Use the Data

There is, however, a further obligation. The evaluation process is not complete if the endpoint is merely data collection. Data that are collected, tallied, and then left to reside in a computer or a file cabinet serve no useful purpose. Data representative of CME experiences, whether positive or negative, must be put to use. Positive outcomes that improve patient management or otherwise contribute to better medical practice can be applied in a variety of ways. For example, positive outcomes can be

  • models for reinforcing future learning experiences and building new programs;

  • repurposed in other delivery formats;

  • sources for product research, enhancement, and marketing;

  • applied to faculty development; and

  • adapted for patient instruction and patient counseling.

Negative outcomes should signal the need for program improvement. Use these results as catalysts for modifications in needs assessment, topic and content development, faculty selection, and, perhaps, general activity planning and organization.

Outcome findings, and their analysis and application, will benefit physician learners, improve medical practice, satisfy regulators, strengthen the CME product, validate funding decisions, and help secure future support.

Robert F. Orsetti is assistant vice president, continuing education, University of Medicine & Dentistry of New Jersey in Newark. Orsetti, a 24-year CME veteran, brings a broad perspective to his column, having served in a variety of settings, including pharma companies and education firms. He is a member of the AMA's National Task Force on CME Provider/Industry Collaboration. To contact him, call (973) 972-8377 or send email to orsettrf@umdnj.edu. For more of his columns, visit meetingsnet.com.
