The very fact that a continuing medical education activity is commercially supported has to induce a greater perception of bias in the content, right? That’s something that has been taken as a given in everything from the Senate Finance Committee’s inquiry into commercial support and continuing medical education, to last year’s Macy Foundation’s analysis and conclusions. The funny thing is that, despite its widespread acceptance, until recently there have been no scientific studies proving—or disproving—that pharma or device makers’ support of CME inherently introduces bias into the activity.
That’s why Steven Kawczak, MA, associate director, Center for Continuing Education (CCE), Cleveland Clinic, and his colleagues at the clinic—William Carey, MD, director, CCE; Rocio Lopez, MPH, MS, member, qualitative health sciences, Cleveland Clinic; and Donna Jackman, associate director, CCE—decided to analyze the CCE’s database of evaluations collected from 346 CME activities of all types for the year 2007. The results, which were published in the January issue of Academic Medicine, showed that, contrary to what has been assumed, there was no correlation between the perception of bias and the commercial support status of an activity.
We recently caught up with Kawczak to learn more.
MM: What prompted you and your colleagues to look for evidence that commercial support does or does not introduce bias into CME activities?
Kawczak: First and foremost, it was how heated the public debates about commercial support and CME had gotten in the past few years. The Senate Finance Committee’s and the Macy Foundation’s analyses and conclusions make very strong assertions and recommendations about changing the whole paradigm of CME [because of inherent bias caused by commercial support of CME]. Yet there weren’t any studies being done specifically on whether the presence of commercial support creates bias.
We saw this as a glaring gap that we wanted to narrow with actual data.
MM: The underlying assumption in political, medical, and regulatory circles seems to be that commercial support leads to biased CME. Did you have any preconceived notions beforehand on what the data might show, either way? If so, were you right?
Kawczak: Our hypothesis was twofold. First we wanted to discover whether there was a higher percentage of bias within activities that were commercially supported than those without commercial support. We then took it one step further and looked at whether activities that were produced from a single funding source would be more susceptible to commercial bias than those with multiple sources of funding or those without any commercial support.
We thought we would find the highest rates of bias in activities with a single funding source and the lowest rates of bias in activities with no commercial support, and that multiple-supported activities would fall somewhere in between. We also thought we would find a higher rate of bias in activities that were commercially supported than in those that didn’t have any commercial support.
While we are proud of the fact that we’re very strict about complying with the Accreditation Council for CME’s Standards for Commercial Support, we thought if we tested the data, we would probably find these hypotheses to be true. In fact, we found the opposite—that commercial support did not result in a perception that the activity was biased.
MM: What did you find most interesting about the results of your analysis?
Kawczak: While our results didn’t produce statistically significant differences between the levels of commercial support, I think the real numbers do point to some interesting findings. The most surprising was that activities that have absolutely no funding associated with them—activities for which the Cleveland Clinic as a CME provider absorbs the cost within its operations—had a higher level of perceived bias than the other two categories. It was surprising because 1) it indicates that perceived bias is not associated with industry support; and 2) it points to bias in content being caused by something other than the funding source of the activity.
Our data ranked single-funded activities—which we thought would be most at risk of bias—as being most free of bias. Activities that were multifunded fell in between.
MM: Do you apply the same anti-bias measures for all activities, regardless of their financial support sources?
Kawczak: While we do follow the Accreditation Council for CME’s Standards for Commercial Support for all activities, we have a higher degree of scrutiny for activities that are single-funded. Our guidelines for single-funded activities include institutional approval of the project concept, an independent review panel to ensure content integrity, and close monitoring of activity implementation. Included on our review panel is a scientist/specialist in the field who has no affiliation with the activity and no ties with industry that would cause a conflict of interest. This person reviews all the content (proposed slides, draft text, etc.) before the activity production/implementation in order to screen for any degree of bias, including subtle bias only a content expert could pick up on. Our CME office also pays close attention to design and color choices for the educational materials to ensure that nonverbal commercial bias is avoided. Due to this process, we produce CME activities with a very low level of bias.
MM: Did you also examine the data to see if the perception of bias varied among the types of activities, for example comparing singly funded live CME to live CME with multiple supporters or no external support?
Kawczak: We did look at it in terms of variation in activity type and funding, and did not find any differences across activity types.
MM: How much of an undertaking was it to produce your analysis?
Kawczak: We’re a big provider and have a large infrastructure in place for data collection, so we were well positioned to research our CME activities. We looked at our database of administratively closed activities in 2007, which came to just under 350 activities with a little over 95,000 participants. Over 70 percent of them completed the evaluation process. When we saw that, we realized that we had a large sample, a great response rate, and an array of activity types (live courses, online CME, enduring materials, and journal CME), which offered a good data set for analysis. Further, we were able to stratify these activities by funding source: no funding at all, multiple supporters, or a single source of commercial support.
MM: What aspects of the results do you think are most important for those involved in regulating the commercial supporter/CME provider relationship to know about?
Kawczak: Policymakers need to pay attention to this sort of data because, if they’re looking to evaluate issues of bias, it shows that bias is not coming from the pharmaceutical industry’s provision of educational grants. What that funding is doing is allowing us to produce great education. Instead, policymakers should review what good CME providers do (the ones that follow the rules) and emphasize their best practices as examples to follow. Also, we all have to realize we’ll never be able to reduce bias to zero; everyone, learners included, has some level of bias in their views. The results of our study show that the effect of industry support on participants’ perception of bias within CME activities is minimal. Further, CME providers that have suitable oversight to ensure compliance with the ACCME’s Standards for Commercial Support can be successful in implementing commercially unbiased education, regardless of funding source. This quite conclusively shows that the prohibition of commercial support is not needed.
MM: Do you believe it would be beneficial for providers in other settings to do similar studies?
Kawczak: Yes, for sure. We need more scholarly studies so we can see what other providers are doing, both to get a view into their systems and to see how they manage bias in the CME they produce. There have been two other studies published recently: one, also published in the January issue of Academic Medicine, was done by the University of California, San Francisco, in collaboration with the Veterans Administration of San Francisco. Medscape did one a few months earlier. They were structured a little differently from our study, but they were trying to answer the same question: Does the presence of commercial support introduce bias into CME? They also found no real correlation.
There’s also an opportunity to see whether different learner types get different results, for example by comparing activities covering more medicine-driven topics with those covering surgery. It would also be good to look at specialty or type of content. The more granular, the better. If a provider is already adhering to the ACCME’s Standards for Commercial Support, chances are it’s already gathering a lot of usable data from its activities. It should be straightforward to structure a study.
If more providers conduct similar studies, we can better see the big picture of CME and find out the rates of bias across the spectrum of providers. Discussion can continue about what the presence of commercial support may or may not do. So far, the data shows it has not had a negative impact on learners.