EVEN WHEN PHYSICIANS perceive commercial bias in a certified CME activity, the majority of them still consider the content valid and credible. That is one of the startling results from a survey of CME participants, presented at the Alliance for CME annual conference in January.
Conducted by Jeanne Cornish, RPh, senior vice president and CME director for intellyst Medical Communications, Aurora, Colo., and James Leist, interim director, Alliance for CME Center for Learning & Change, Birmingham, Ala., the survey of 212 healthcare professionals, mostly physicians, sought to ascertain how CME participants define commercial bias, and how that bias affects their learning. The respondents were almost equally divided between men and women; most of them have practiced medicine for 15 or more years, and the majority are internal medicine or family practitioners.
Asked to rate their perception of bias in different kinds of CME activities, fewer than 10 percent of participants responded that they found “excessive” bias in enduring materials and journal-based articles. Despite the small percentage, the finding is notable because respondents reported no excessive bias in any of the other categories, including live and online activities.
Overall, most participants reported perceiving some sort of bias in activities, regardless of format: For every activity type, at least half of participants found “moderate” or “some” bias, meaning no format was judged bias-free by more than half of respondents. Live series (as opposed to live courses) fared best: About 50 percent of participants found no bias in those activities.
Biased Content Still Credible
While the CME community expends enormous energy to keep activities balanced, participants don't seem nearly as concerned. A whopping 75 percent of survey respondents said that they still considered the content of biased activities credible and valid. What isn't clear from the study, Cornish acknowledges, is why they think the content is still valid.
Survey participants were also asked how they viewed the presence of industry reps at CME activities. The majority of respondents saw sales reps' participation as positive or as having no effect — even if the sales people were breaking the rules by, for example, passing out information on their product. Fewer than 20 percent viewed sales reps' presence as negative. “That was one of the surprises of this [survey],” says Cornish. Evidently, not all participants are aware of the rules.
Raising Red Flags
If sales reps aren't a problem, what factors do CME participants believe cause bias? First place went to “generalized endorsement or negative statement without factual supporting references,” with 75 percent of participants identifying that factor as an indicator of bias. In second place, 69 percent of survey respondents said focusing on one agent, device, or procedure when others exist was a red flag. But such a focus doesn't have to be a sign of bias, counters Cornish. Given limited time for a presentation, for example, a presenter might decide not to concentrate on older agents that aren't used as much and aren't as beneficial for patients. Or, a presenter might focus on one agent because “that agent is unique and so much better, as shown by the evidence, it would be a disservice not to treat patients [with it].” To alleviate attendee concerns, faculty who are highlighting one product should explain why they're doing so, she says.
While many CME providers are concerned about presenters using the grantor's drug's trade name and the generic names of all other products discussed, that issue is less of a concern for survey respondents: Fewer than half (49 percent) identified “inconsistent use of brand names” as an indicator of commercial bias. That's because many physicians, especially primary care practitioners, know the trade names of drugs, but not the generic names, says Cornish.
Other problem areas the respondents identified include misleading titles, learning objectives, and overview and summary statements; failure to review all the pros and cons of a therapy and its competitors; relationships between the grant provider and the majority of faculty on a program; references to inappropriate studies; and a dearth of scientific references to support the message.
It's important for CME providers to educate participants about commercial bias, Cornish says. Start by providing them with a definition of commercial bias. On evaluation forms, rather than asking whether participants felt the activity was biased, ask more specific questions, such as: Did the activity materials accurately communicate the purpose and content? If you did perceive commercial bias, what factors do you believe contributed to it? Also, have participants evaluate each faculty member separately.
One of the most important preventive measures CME providers can take, says Cornish, is to choose qualified faculty to review content. “If you don't have [content reviewers] who are aware of generally accepted practice in the topic area, then they may require changes that are inappropriate,” she says. If, for example, it's inappropriate to recommend older agents as standard therapy, or if certain products have a high side-effect profile, presenters should not be required to spend an equal amount of time on those treatments.
Make sure faculty who are reviewing content have the depth of experience to give that overview, she says. “Do they sit on national committees, where they look at different guidelines? Do they have financial relationships with more than one pharmaceutical or medical device company, so they do keep that neutrality there? Are they receptive to looking at needs assessments and to making material really applicable to your target audience?”
When recruiting faculty, spell out what your expectations are. “We explain how we define the characteristics of a well-balanced, non-biased accurate activity. We'll say, ‘Please make sure that you are referencing studies that are from peer-reviewed journals. We don't want to see references to only one study that was funded by the grantor.’”
Data for the Dogs
What do you do when faculty submit biased presentations? It helps to select a faculty chair whom you can trust — because the chair is the one who will have to talk to those faculty members “who may not be in it for the right reasons,” says Cornish. Doctors tend to listen to an expert in their own field.
It's also important for CME providers to evaluate the material, she says, and prepare themselves with the data if they need to ask faculty to make changes. “Don't try to get somebody defensive; back up your recommendations with facts,” she advises.
She learned that lesson through her own experience. “I reviewed [presentation material] once that said one agent was clearly better than the other. I looked at the human data and there really wasn't a difference — [the faculty] were quoting a study done in dogs. So, [if the CME activity was] for veterinarians, it would have been great,” she quips. “Unfortunately, that wasn't the target audience.”
For the complete survey results, visit www.intellyst.com.