Van Harrison, PhD, with the University of Michigan, kicked off this morning's plenary with a very fast look at a very complicated subject: using a systems-based framework in CME planning to enhance the translation of new knowledge. I'm not even going to try to filter through the 10 pages of notes I took; check the Alliance's Web site in a week to 10 days for a download of his slides. Basically, the gist is that CME doesn't exist in a vacuum, and there are lots of other systems in play (administrative, societal, financial, etc.) that can make or break physicians' ability to translate what they learn into practice.
The panelists did a good job of showing some examples of how this works for them. For example, Bernard Marlow, MD, College of Family Physicians of Canada, spoke about a survey they did of family physicians, most of whom work in several different settings, each of which has its own barriers and challenges. The College decided, after viewing the results, to make some changes to accommodate these realities. One is to abandon "one-size-fits-all" CME, since it doesn't address the specific needs of different individual settings. Another is to inject more interactivity into sessions, so physicians can get a more real-life feel for what they're learning. And another is to think of clinical practice as "one big classroom," and find ways to give credit for point-of-care learning.
He talked about one CME activity they did: a day-long, intensive program for people with lung disease about using a certain test, which I won't even try to spell at the moment. After the course, the College asked the participants to sign a commitment to change. But six months later, only 40 percent were using the test. The problems they had in implementation had nothing to do with what they learned: They were system problems, such as the procedure not being covered under provincial insurance. It was a big wake-up call about how important things other than CME can be to getting docs to actually use what they learned.
Eric Peterson, EdM, Academy for Healthcare Education, said it's important for CME providers to use their own personal networks of physicians to find out what hinders those physicians from putting what they learn into practice, then package the data in a way that will help pull the academicians back into the reality of day-to-day practice.
Marcella Hollinger, MEd, Illinois State Medical Society, said her organization is starting to see some of its community hospitals melding their CME and Quality Improvement committees, so they are essentially functioning as one committee. That way, the CME committee can look at QI data, determine that a particular procedure is being overused, design an intervention, and follow up with further review of QI data to see if things have improved.
"It's hard to get that cooperation," she said. "We in CME don't always look at the individuals; we're more group-focused, and QI often looks at CME as completely separate from what it does."
One thing her organization has done is to use its relationship with a medical malpractice group in Illinois to spur some changes. The insurance group has a risk management process by which it conducts individual practice audits. It then gives the practice recommendations for improvement, and offers a 10 percent discount to those who pass the audit.
Her organization hooks into that system by partnering with the insurance group in three phases: Phase 1 is asking practicing physicians what they will do to implement change; Phase 2 is asking what they learned by undergoing the audit process; and Phase 3 is explaining the changes that were implemented and how they were implemented. Physicians can earn up to 20 Category 1 credits for participating in all three phases. The program is just starting for 2006, so she doesn't have any hard data yet.
Drat, I'm out of time. I have to run to the next session, but I'll finish this up (the next two panelists were great) and write about the next one during lunch.