Continuing medical education professionals from around the world gathered via the #CMEChat hashtag on Twitter on December 7 to talk about what can be done to improve how new information flows from annual conferences to local practices. They used the cover story of Medical Meetings’ November/December issue as a jumping-off point. As one person pointed out, we need to find a way to curate the data presented at annual meetings more effectively.
They began by looking at what might be the best models for disseminating new medical information beyond major medical meetings, with “best” defined as being able to spread the word quickly to the widest possible audience while maintaining the integrity of the data. For the sake of discussion, the group decided not to complicate the process by saying the data would have to be available for CME credit; as one person said, “Credits have never been proven to motivate learning. They have been proven to complicate education.” Another chimed in to say that adding accreditation to the mix would just slow the dissemination process.
Most appeared to agree that medical societies and associations should develop clearinghouse models that enable rapid and open access, and that authors should take responsibility for ensuring that questions about the data they present are answered, at least during a specified period of time. Chat moderator and MM columnist Brian S. McGowan, PhD, suggested that local institutions could use technology to pull new medical information into their networks, and that these institutions could then stream feedback into the clearinghouse system to refine the model over time. One person suggested using bloggers as disseminators. While one participant was concerned about the possibility of bias that could come from filtering the information through a blogger, another pointed out that there could be value in having that personal point of view.
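For the technically minded, here is a minimal sketch of the pull-and-feedback loop McGowan describes: a clearinghouse that societies publish into, that local institutions pull from, and that accepts feedback flowing back in. The Clearinghouse and Finding names, fields, and methods are hypothetical illustrations, not a description of any existing system.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One piece of new medical information held by the clearinghouse."""
    finding_id: str
    title: str
    source_meeting: str
    feedback: list = field(default_factory=list)

class Clearinghouse:
    """Hypothetical central store; societies publish, local institutions pull."""
    def __init__(self):
        self._findings = {}

    def publish(self, finding: Finding) -> None:
        self._findings[finding.finding_id] = finding

    def pull_new(self) -> list:
        # Local institutions pull everything; filtering for local relevance
        # happens on their side, where the learners are.
        return list(self._findings.values())

    def stream_feedback(self, finding_id: str, note: str) -> None:
        # Feedback streams back into the clearinghouse to refine it over time.
        self._findings[finding_id].feedback.append(note)

# A society publishes a finding; a local practice pulls it and responds.
ch = Clearinghouse()
ch.publish(Finding("f1", "New anticoagulation data", "Annual Meeting 2011"))
for finding in ch.pull_new():
    ch.stream_feedback(finding.finding_id, "Dosing unclear for renal patients")
```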
It’s vital, said one, that the data go where the learners are, instead of trying to make learners come to the content. Learners also should be able to access the system whenever it is convenient for them.
Pros and Cons of Slide Decks

The moderator then asked about the pros and cons involved in having medical associations create and archive core slide decks that would become available when the data is released. “Archiving is critical,” said one person, as is organization and searchability so it’s easy to pull up X talk at Y meeting about Z topic. Otherwise it can be too challenging to find information that is applicable to the specific needs of someone’s practice.
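As a rough illustration of the organization and searchability the chatters had in mind, here is a hedged sketch of an archive that can be queried by talk, meeting, or topic. The SlideDeck fields, the search function, and the example entries are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SlideDeck:
    talk: str     # X: the talk
    meeting: str  # Y: the meeting where it was presented
    topic: str    # Z: the clinical topic
    url: str

def search(archive, talk=None, meeting=None, topic=None):
    """Pull up X talk at Y meeting about Z topic; any field may be omitted."""
    return [d for d in archive
            if (talk is None or talk.lower() in d.talk.lower())
            and (meeting is None or meeting.lower() in d.meeting.lower())
            and (topic is None or topic.lower() in d.topic.lower())]

archive = [
    SlideDeck("Plenary: screening intervals", "2011 Annual Meeting",
              "mammography", "http://example.org/deck1"),
    SlideDeck("Late-breaker: statin trial", "2011 Spring Session",
              "cardiology", "http://example.org/deck2"),
]
print(search(archive, topic="mammography"))  # finds the first deck
```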
Having a data clearinghouse as the core model not only would ensure data fidelity, but it also “would save tens of millions of dollars,” said one participant. However, said another, while a clearinghouse model works in an ideal world, there already are “10 million ‘clearinghouses’ for primary care docs.” Another countered that, while there are many channels, there currently is no true clearinghouse for new medical content. One possibility would be for each medical society to have its own clearinghouse. It would be up to the medical society to control the quality from the start, so that the data stream doesn’t become polluted with too many insignificant data points. Of course, there still can be disagreement among associations, such as the difference between the American College of Physicians and the American College of Radiology on when to recommend mammograms.
Of course, one practice’s insignificant data points are another’s very significant ones, depending on that practice’s needs. This led the moderator to ask the chatters to guesstimate the balance of core content to local content in an average CME program. They seemed to agree that for national meetings, the core-to-local ratio would be high, “at least 90-to-10 if not higher.” But that’s as it should be, said one person: “Local content = context. Core content shouldn’t be different anywhere.” At a minimum, said another, the intro and background sections could be shared. While one would think that there would be more contextual information coming from regional meetings, that’s not always the case, said one person.
Authors as Data Shepherds

One point that kept recurring was that it would be key to any data dissemination model to have faculty easily accessible to answer questions. “In many cases, Q&A has the most valuable information for the learner,” said one person. And, said another, it’s when “participants wake up.” Data shepherding currently is done through things like letters to the editor in a journal. The problem is that it can take five months to get an answer, and it’s behind a paywall to boot, which further limits access. A better way would be to have faculty or an editor on retainer to prompt timely responses to questions, say up to six months post-release. Learners could ask their questions online (possibly even through social media), then their answered questions could be included in the archive, along with the data. One of the CME chat participants mentioned he has been doing this on his blog by posting questions he has posed to the Accreditation Council for CME, along with their answers. (Here are a few examples: Needs assessment collaboration and independence; incentivizing surveys; accreditation language; defining commercial interest). But, he added, it would be better if the ACCME did the archiving. “Without curation, the system fails to help.”
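For readers who like to see the mechanics, here is one hedged sketch of archiving answered questions alongside the data, with a cutoff roughly matching the six-month shepherding window suggested in the chat. The class names, fields, and the 182-day constant are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List, Optional

SHEPHERD_WINDOW = timedelta(days=182)  # roughly the six months the chat proposed

@dataclass
class QA:
    question: str
    answer: Optional[str] = None  # filled in by the author or editor on retainer

@dataclass
class ArchivedRelease:
    """A data release archived together with its answered questions."""
    title: str
    released: date
    qa: List[QA] = field(default_factory=list)

    def open_for_questions(self, today: date) -> bool:
        return today - self.released <= SHEPHERD_WINDOW

    def ask(self, question: str, today: date) -> None:
        if not self.open_for_questions(today):
            raise ValueError("the shepherding window has closed")
        self.qa.append(QA(question))

release = ArchivedRelease("Late-breaking trial results", date(2011, 12, 7))
release.ask("How were dropouts handled?", today=date(2012, 2, 1))  # inside the window
```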
But who is to do the curation? Should/could the vetting of core slides be done via crowdsourcing? The crowdsourcing could be done through Slideshare, blogs, tweets, Prezi, podcasts, simulations, etc. But then someone would have to take it all in and organize it. One person thought it should be up to each “house of medicine” to control and refine the new medical knowledge as it is vetted; “We can then use this data to educate.” Another participant thought the CME community could use an “uber curator” (which, along with being a good idea, someone thought would make a great name for a rock band). What we need, said one person, is to put a mashup of ACCME, PubMed, SoMe sites, and IBM’s Watson into a blender. Once we work through the barriers of cost, determining what the core content would be, gaining faculty support, etc., what would we end up with? A newly re-engineered data stream.
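Finally, as a toy illustration of the “take it all in and organize it” step, here is a hedged sketch that tallies crowd votes on slides arriving from different channels and keeps those that clear a vetting threshold. The vote data, channel names, and threshold are hypothetical, and a real curator would weigh clinical merit, not just popularity.

```python
from collections import defaultdict

# Hypothetical votes arriving from many channels: Slideshare comments,
# blog posts, tweets, podcast mentions, and so on.
votes = [
    ("slide-42", "slideshare", +1),
    ("slide-42", "twitter", +1),
    ("slide-17", "blog", -1),
    ("slide-42", "podcast", +1),
]

def curate(votes, threshold=2):
    """Tally crowd votes per slide and keep those that clear the threshold."""
    score = defaultdict(int)
    for slide_id, _channel, vote in votes:
        score[slide_id] += vote
    return sorted((s for s, v in score.items() if v >= threshold),
                  key=lambda s: -score[s])

print(curate(votes))  # ['slide-42']
```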