Most would agree that the trend toward evidence-based medicine, and toward evidence-based continuing medical education, is a good one: Doctors should base their treatment decisions on the best available data rather than on anecdotal reports or their own personal experience. The prevailing wisdom also holds that EBM is less likely to be influenced by the drug industry because it is grounded in scientific fact. Spurred on by the updated Accreditation Council for Continuing Medical Education Standards for Commercial Support, now in effect, CME providers have been working diligently to make their CME more evidence-based and bias-free through stronger peer-review and content-validation processes. Sounds good, so far.

The hitch is that now, with the recent flood of news reports about how pharmaceutical companies have been suppressing clinical research and manipulating data to present their drugs in the best possible light, you have to wonder if what you're doing to ensure bias-free CME is enough, or if you're just candy-coating a bitter pill.

Consider these examples: U.S. congressional leaders and regulatory agencies are investigating allegations that Merck cherry-picked the best results from its Vioxx studies for publication. Pfizer is accused of publishing only the first six months of a 12-month Celebrex study because those results showed its drug was better than ibuprofen at reducing GI bleeds. After the full year, the results showed it was worse than ibuprofen (see page 40 for more examples). This news shakes the public's confidence in medical research and education — and it's also not the type of thing a peer-review panel is likely to catch.

Critically reviewed clinical data obtained through a systematic search process is only one of three components of EBM (the other two are how to apply the evidence to individual patients based on the clinical context, and the individual patient's concerns, beliefs, and values), but it is the component most often taught in CME.

IS THIS MY PROBLEM?

Last time anyone checked, CME providers hadn't yet been required to grow an antenna that can detect clinical data manipulation, or to somehow be able to intuit that there's another study with conflicting data that never saw the light of day. How much do you need to worry about this type of hanky-panky at the clinical trial level?

After all, as Nancy Davis, PhD, director, division of CME with the American Academy of Family Physicians, Leawood, Kan., points out, “the reality is that we don't even have evidence in a lot of areas, and sometimes that evidence is flawed, and sometimes new evidence comes along that supersedes the old evidence — hormone replacement therapy is a great example of that. So we keep researching, keep studying, and keep trying to find the truth.”

However, perhaps CME providers do need to work on growing that antenna, or at least on developing a better understanding of the levels of evidence most likely to be reliable now that the push toward EB CME is gaining steam. Which leads one to ask: How can you ensure your evidence isn't tainted?

EVIDENTIARY QUALITY CONTROL

“We put the imprimatur of evidence-based CME only on the activities that are based on the highest levels of evidence,” says Davis, whose organization pioneered the trend toward offering credit for EB CME activities. David Slawson, MD, B. Lewis Barnett, Jr., Professor of Family Medicine, University of Virginia Health System, Charlottesville, adds that EB CME can drive clinicians to use better resources by granting more credit for activities that are based on high-level, systematically reviewed materials.

These materials are available from sources like the Cochrane Database of Systematic Reviews and the Agency for Healthcare Research and Quality Clinical Guidelines and Evidence Reports, says Davis. The data are reviewed by people who are trained to survey the literature, identify the relevant information, critically read it, assign a level-of-evidence rating to it, then add it to the database. The levels of evidence assigned to the data range from anecdotal reports, expert opinion, and individual research studies at the lower end to randomized controlled trials and systematically reviewed evidence at the top.

Slawson is a proponent of using evidence that is rated under the Strength of Recommendation Taxonomy, or SORT, which was developed by the editors of various medical journals. SORT relies on three axes: validity, relevance, and cost. Everything gets plotted on those axes, and CME providers could attach higher levels of credit to activities that use material with higher SORT ratings. While this type of system won't eliminate the potential for biased information slipping through, it should reduce the probability of it happening, says Slawson.

Roy Poses, MD, clinical associate professor, Brown University School of Medicine, Providence, R.I., agrees that good EBM critical review can detect most biases, in terms of unintentional problems with the research design, data collection, or analysis that can affect the results. “But it only offers limited protection against intentional manipulation.”

Davis also admits that systematic review processes aren't perfect. “An excellent study that was published last week will not be in the base of evidence — it takes time for studies to go through the process. Another problem is that it's not peer-reviewed often enough. So, for example, the old recommendations for hormone replacement therapy are still in some of those sources because they haven't been reviewed recently enough to take them out.” Even though it's not perfect, she says, “it's the best system we have right now.” (For a list of some of AAFP's approved sources, and an overview of EBM and evidence levels, see page 43.)

Another safeguard CME providers should continue to employ is activity-level peer review, says Destry Sulkes, MD, managing director of MedsiteCME, New York. “The peer-review process is essential. It's not perfect, but it does play an important role in fair balance and conflict resolution.”

Even long-accepted guidelines, a staple of CME takeaways and follow-up reminders, need to be put through the validation mill regularly, say Brown University's Poses and Russ Maulitz, professor of family, community, and preventive medicine with Drexel University in Philadelphia, in a joint e-mail. “Many published guidelines have not been thoroughly based on the precepts of evidence-based medicine, so even if they have been accepted, they bear re-examination. Other guidelines have been written by people with obvious conflicts of interest and should be re-examined on that basis. Even guidelines constructed using the most rigorous evidence-based medicine processes, and written by people with no major conflicts of interest, need to be periodically updated and re-assessed in light of new evidence.”

Jeffrey Lenow, MD, JD, associate professor of medicine with Thomas Jefferson University, Philadelphia, and chief medical officer with Cardinal Health, cites sinus and allergy consortium guidelines that were published a year ago that have “turned our understanding and approach to treating acute and chronic sinusitis upside down. The discipline of EBM will help to continue to challenge these once ironclad assertions — a healthy and necessary process.”

DOCS AS THE JUDGES

But how far should CME providers have to go to ensure the integrity of the data? Wouldn't it make more sense to teach docs how to critically read the literature themselves? As AAFP's Davis says, “One thing [the consumer media coverage of pharma bias in clinical trials and the literature] will do — and maybe this isn't a bad thing — is that it will pressure physicians into being more concerned about what's in the literature and what the research really did say, as opposed to what the morning talk shows say it says. In EBM, physicians need to have a better understanding of how to critically appraise the literature, and learn how to discern good research from not-so-good research in terms of methodology, not just who's supporting the research. While CME can do some of that distillation for them, they're always going to be learning on their own as well, and they have to know how to critically appraise what they read.”

But is it realistic to expect that they will?

According to Slawson, the answer is no. Back in the 1980s, he and his colleagues trained in EBM, including how to teach doctors to critically read the literature. “We realized after doing it for a while that it wasn't practical in the real world. Busy doctors don't have time to read the literature — there's just too much of it being published.”

Kay Dickersin, PhD, director of The Center for Clinical Trials and Evidence-based Healthcare at Brown University, Providence, R.I., and director of the U.S. Cochrane Center, one of 12 centers worldwide participating in The Cochrane Collaboration, thinks it might help to teach docs how to differentiate good evidence from bad evidence at the undergraduate and residency level, rather than wait until they're time-starved practicing physicians. The Catch-22 is that most of those who teach undergrads and residents are getting their training from CME, which doesn't address this issue very often, or is ineffective when it does, as Slawson points out. Poses and Maulitz think the answer is to teach EBM principles from undergraduate school through CME for experienced researchers and physicians, including those who review articles for and edit journals.

SYSTEMS UPGRADE

Still, there's only so much CME providers can do, since the roots of the clinical trial data manipulation and suppression problem are deep in soils far distant from CME. And progress is being made on digging up unethical pharma-physician connections on some fronts. “Right now we're seeing a significant amount of governmental regulatory activity — e.g., the multiple whistle-blower suits against the pharma industry and the settlements in the multimillions of dollars to resolve them — as well as industry adoption of guidelines to ensure appropriate industry behavior (i.e., the PhRMA and OIG guidelines),” says Lenow.

The new ACCME Standards, “which very much tighten the nature of accountability and conflicts resolution,” also are a positive sign, he says. But strong-arm tactics can only go so far in effecting change. “In reality, a major shift in philosophy needs to occur.”

Some organizations and individuals are taking ethical reform to heart. The Cleveland Clinic announced earlier this year that it is revamping its clinical research guidelines to reduce conflicts of interest. Harvard Medical School also addresses the issue by not allowing its researchers to receive significant compensation for their consulting work, or to hold equity in private companies if they are studying something related to those companies. Davis predicts we'll be seeing more ACCME Standards-like disclosure and conflict-of-interest requirements. “More researchers are having to disclose and resolve their conflicts of interest before they can conduct their research. Journals are asking authors to disclose and resolve conflicts before publishing articles, and we're asking them to do it again before they teach it. We're seeing disclosure and resolution of conflicts all the way up the research chain.”

However, Poses notes, “The National Institutes of Health has also just put in stringent conflict-of-interest rules, but some of its researchers have been rebelling, and it's not clear whether the new rules will stick.”

DILIGENT WATCHDOGS

Poses and Maulitz believe that the clinical trial system and the organizations that participate in it enable bad behavior, and that the solution also must be a systemic one (see sidebar on page 40). One thing they'd like to see is “a universal agreement to retreat from claiming that nearly anything anybody does is ‘evidence-based.’ This has become a mantra, and a fairly meaningless one at that” because everything anyone does is based on some sort of evidence. While a rigorous and skeptical review of specific articles may catch some of the data manipulation, along with unintended bias and imperfect research designs, “We clearly need systematic investigation into the threats to the integrity of the clinical-evidence database,” they say. Particularly ripe for scrutiny are those in charge of the organizations that pay for and conduct clinical trials, i.e., pharma and device manufacturers and organizations that conduct clinical research.

Toward that end, Poses, Maulitz, and Wally Smith, MD, associate professor and chair, Division of Quality Health Care, Virginia Commonwealth University, Richmond, have set up a new not-for-profit organization, the Foundation for Integrity and Responsibility in Medicine (firmfound.org) to encourage healthcare professionals and researchers to be “diligent watchdogs protecting the integrity of clinical research, and establish protections for them in this role” since federal whistleblower regulations kick in only when violations relate to Medicaid and Medicare, which often may not apply to clinical trial issues. They and others also regularly discuss these issues on the Health Care Renewal blog, hcrenewal.blogspot.com.

Dickersin also is skeptical that organizations — especially on the sponsoring side — will voluntarily stop the misbehavior. She believes the best safeguard is to register clinical trials at their inception, “so we would know how many were done [and] how many were unpublished, and we could get the results either through the registry or by contacting the authors and investigators.” She adds that although the FDA Modernization Act of 1997 mandated that all trials for serious and life-threatening diseases, including those done by industry, should be registered, “Two studies have been done that have found that industry is only about 50 percent compliant, at best.”

The American Medical Association, based in Chicago, also is a big believer in the capability of registries to preserve the scientific integrity, validity, and reliability of pharmaceutical research studies. According to Robert Mills, senior public information officer with the AMA, “The AMA has outlined the key criteria needed to make such a registry effective. In testimony to a House committee, the AMA said a centralized clinical-trials registry, which includes a mandated mechanism to ensure trial registration, is necessary to truly benefit physicians, scientists, and patients.”

While the International Federation of Pharmaceutical Manufacturers and Associations, along with three other industry associations covering Europe, the United States, and Japan, said last January that they will make a free, detailed registry of current and completed drug trials available on the Internet, “The AMA believes the voluntary nature of this program is not enough. The drug industry will still be allowed to play hide-and-seek with clinical trials under a volunteer program,” says Mills.

“We need laws [requiring all medical studies to be registered], and unfortunately the U.S. Congress has been succumbing to pressure from industry,” Dickersin says. Rep. Edward Markey (D-MA) introduced H.R. 5252, the Fair Access to Clinical Trials Act, and a companion measure, S. 2933, was introduced recently by Sen. Christopher Dodd (D-CT), but both bills limit required registration to nonexploratory studies. They also allow nondrug, nondevice studies — such as research on getting people to change their behavior, surgery, radiation therapy, and screening tests — to fly under the registry's radar, according to Dickersin. “Ask your representative to create some legislation with some teeth in it,” she adds.

“You and I can't fix the politics,” says Slawson. “The best way to go is to rely on probability theory. A randomized controlled trial with concealed allocation (so there is no way of knowing which treatment group a patient will be assigned to before the final decision is made to enroll or not enroll them, meaning the researchers can't choose who goes into each group) and blinded outcome assessment, even if it's funded by pharma, is still better than a case series that does neither of those things,” he says.

“It doesn't mean you have the truth — you're just farther out on the validity axis,” he continues. “If you have multiple trials from multiple places, some funded by pharma, some not, that's more likely to be a closer approximation of the truth. It's about understanding the hierarchy of truth, and accepting that you'll never have the absolute truth, especially as long as the potential for bias is there.”

HOW MUCH IS HYPE?

WHILE EVERYONE FROM CNN TO CBS has been jumping with apparent glee on discoveries of pharmaceutical industry malfeasance in clinical research trials, one has to wonder how much is media hype and how much should be of genuine concern.

While the general press — which likes to fan any sensationalistic flame it can when it comes to the pharmaceutical industry — has been all over these stories, scholarly journals also are starting to weigh in. For example, Adriane Fugh-Berman, MD, shook things up recently in an April Journal of General Internal Medicine article in which she outlined her experience with a medical education and communication company that tried to get her to put her name to a pharma-sponsored paper denigrating the products of the sponsor's competitor. The June issue of the Medical Journal of Australia published a survey of specialists, surgeons, and anesthesiologists that found that more than a quarter of those engaged in research said the first draft of a research report had been written by pharmaceutical company or contract research organization personnel; more than 10 percent said they had failed to publish key findings; and some even reported editing results to make the drug look better than the study found, or concealing relevant findings.

Roy Poses, MD, clinical associate professor, Brown University School of Medicine, Providence, R.I.; and Russ Maulitz, professor of family, community, and preventive medicine with Drexel University in Philadelphia, say that while industry meddling in clinical trials is nothing new, the blame traditionally has been laid on the one or two bad-apple researchers who fudged the results. Now, they say, some research sponsors are trying to intimidate researchers into suppressing their results, and researchers' employers are not only failing to fight this pressure but appear to be collaborating with it.

They say these concerns were corroborated by a recent study in the May issue of the New England Journal of Medicine, which found that many medical school and academic medical center research administrators admitted acquiescing to provisions in contracts with outside research sponsors that could be used to pressure individual researchers into suppressing or altering research results, and that the majority were willing to sign contractual confidentiality provisions to keep these tactics hidden.

Even the journals themselves aren't free of conflicts of interest, says David Slawson, MD, B. Lewis Barnett, Jr., Professor of Family Medicine, University of Virginia Health System, Charlottesville. He notes that an editorial in the May issue of PLoS Medicine by longtime British Medical Journal editor Richard Smith said that, because journals depend on industry advertising and on selling pharma reprints of articles about trials, their revenue stream creates an inherent bias.

Maulitz, quoting the cartoon character Pogo, concludes, “We have met the enemy, and he is us.”

THE SKINNY ON EVIDENCE-BASED MEDICINE

ACCORDING TO THE National Institutes of Health, evidence-based medicine is “the use of current best evidence from scientific and medical research to make decisions about the care of individual patients. It involves formulating questions relevant to the care of particular patients, searching the scientific and medical literature, identifying and evaluating relevant research results, and applying the findings to patients.”

There are numerous levels of evidence, developed by the Centre for Evidence-Based Medicine, Oxford, U.K., in the areas of therapy/prevention/etiology/harm; diagnosis; and prognosis. For the therapy/prevention/etiology/harm area, the highest level of evidence is “systematic reviews (with homogeneity) of randomized controlled trials,” followed by systematic reviews of randomized controlled trials that aren't as generalizable. The lowest level of evidence is “expert opinion without explicit critical appraisal, or based on physiology, or bench research.” (For the full listing of the levels of evidence, visit www.cebm.net/levels_of_evidence.asp.)

To have an activity accredited as EB CME under the American Academy of Family Physicians system, for example, the practice recommendations included in the activity must be supported by evidence that has been systematically reviewed by an EBM source AAFP has determined will provide reliable quality control. Among the approved EBM sources the Academy recognizes are the Agency for Healthcare Research and Quality Clinical Guidelines and Evidence Reports; Bandolier; the Canadian Task Force on Preventive Health Care; the Cochrane Database of Systematic Reviews; the Database of Abstracts of Reviews of Effects; Effective Health Care; the Institute for Clinical Systems Improvement; the National Guideline Clearinghouse; and the U.S. Preventive Services Task Force.