Performance-improvement activities, which take education from the hotel ballroom to the physician's office, give providers an exciting new way to award continuing medical education credit. The question is: How do you do it? A session at the Alliance for Continuing Medical Education annual conference, held in January in Phoenix, showcased how three organizations are putting performance-improvement CME into practice. (For an explanation of the PI process and how to award CME credits for PI activities, see sidebar, page 20.)
Case 1: Surviving Sepsis
The Surviving Sepsis Campaign at the Seton Family of Hospitals, Austin, Texas, aimed to reduce mortality, to ensure that physicians provided evidence-based treatment to sepsis patients, and to enable doctors to determine whether their practice fit the goals set by Seton and the Joint Commission on Accreditation of Healthcare Organizations. Casey Harrison, CME manager, Seton, explained that the Society for Critical Care Medicine had already identified the need to increase physicians' awareness of septic shock and to train doctors to use evidence-based medicine to control the high mortality rate associated with sepsis. Because mortality rates can reach 40 percent to 60 percent if the condition is not identified early and treated aggressively, and because the Seton Medical Center receives many of its patients from other facilities, after the window for early identification and resuscitation has passed, the topic seemed like a good fit for the organization.
Among the program's objectives were to enable physicians to analyze their management of sepsis at the beginning of the program, to understand why early recognition and prompt management of severe sepsis and septic shock are critical, and to recognize the importance of monitoring physiologic indicators of tissue perfusion (blood flow). Physicians were also expected to analyze how they manage sepsis after they completed the Surviving Sepsis Campaign Performance Improvement Activity module and to determine what changes in practice or protocol they needed to make to improve patient care.
Evidence-based performance measures used for the activity included “bundle compliance tables”: tools to help physicians assess their level of patient care. A set of best-practice care elements is called a bundle, and to be compliant, doctors must affirm either that they followed every element of the bundle or that the bundle was contraindicated. For example: The four-hour bundle for severe sepsis includes making the diagnosis within two hours of triage, administering antibiotics within one hour of diagnosis, and measuring serum lactate. If doctors complete all three elements, they are compliant with the bundle. If they miss any, they are noncompliant.
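The all-or-nothing rule above is straightforward to express in code. As a minimal sketch only (the element names and record format are hypothetical illustrations, not Seton's actual tool):

```python
# Sketch of the all-or-nothing bundle compliance rule described above.
# Element names and the record format are illustrative, not Seton's tool.

FOUR_HOUR_BUNDLE = [
    "diagnosis_within_2h_of_triage",
    "antibiotics_within_1h_of_diagnosis",
    "serum_lactate_measured",
]

def is_compliant(patient_record, bundle=FOUR_HOUR_BUNDLE):
    """Compliant only if every element was followed,
    or the bundle as a whole was contraindicated."""
    if patient_record.get("bundle_contraindicated"):
        return True
    return all(patient_record.get(element) for element in bundle)

record = {
    "diagnosis_within_2h_of_triage": True,
    "antibiotics_within_1h_of_diagnosis": True,
    "serum_lactate_measured": False,  # one missed element
}
print(is_compliant(record))  # missing any single element means noncompliant
```

The point of the all-or-nothing design is that partial credit is not tracked: a physician who completes two of three elements is scored the same as one who completes none.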
As explained in the sidebar, PI activities consist of three stages. For Stage A, learning from current practice performance assessment, physicians viewed a PowerPoint presentation called Introduction to Sepsis — Definitions, Facts, etc.; identified up to five patients with severe sepsis or septic shock; completed the bundle compliance table for each patient; and completed a questionnaire on attitudes and practice related to treating sepsis and septic shock.
For Stage B, the docs had to complete the educational module, which included viewing another PowerPoint presentation, reading two articles, reviewing the organization's sepsis protocols, and taking an exam. They also had to write a statement about what they planned to do differently due to the self-assessment from Stage A and what they learned in Stage B.
For Stage C, physicians had to identify up to five new patients with the disorder, complete the bundle compliance table for each of these patients, and write a summary of any practice, process, or outcome changes resulting from the activity.
Physicians now use the bundle compliance tables routinely. Mortality has been reduced by 16 percent, and compliance with the bundle tables for both severe sepsis and septic shock at four and 24 hours has risen from 90 percent to 100 percent. Interestingly, because the tables are now part of everyday practice, physicians had little interest in completing the formal PI activity, said Harrison.
Case 2: Managing Hypertension
Physicians at The Prairie Clinic, which serves a small rural community near Madison, Wis., identified a need for a hypertension-management activity by examining patient records in their electronic medical records system. They found that approximately one-quarter of the patients had high blood pressure, and realized that they needed to improve their handling of hypertension. “The clinic identified the problem and came to us, and we mutually decided to move forward,” said panelist George Mejicano, MD, assistant dean for CME, University of Wisconsin School of Medicine and Public Health, Madison.
“This wasn't education for education's sake,” Mejicano added. Since this was a research project as well as a CME project, his team followed 17 different measures in the areas of hypertension assessment, treatment, patient outcomes, and the nursing process, which were identified by both the clinic and the university. The university's objective was to help the clinic staff prepare and implement a PI hypertension project, and the clinic physicians' objective was to quickly identify, assess, and treat hypertension and so reduce morbidity and mortality. In addition to the 17 measures, the activity drew on published evidence-based sources, such as the National Health and Nutrition Examination Survey and other published studies.
Because there were goals set for both the organization and the individual physicians, Stage A included taking baseline measures for the clinic and individual docs, providing lists of hypertensive patients to physicians, and establishing PI goals for the clinic. The physicians attended a one-hour orientation session, went over the lists of hypertensive patients, completed a current-practice questionnaire, reviewed baseline data, and established personal PI goals.
For Stage B, the docs attended half-day educational sessions that included joint patient consultations and case discussions with an invited expert, “basically, academic detailing,” said Mejicano. They also went to a one-hour lecture, and reviewed interim PI data and adjusted their goals as needed.
For Stage C, they reviewed year-end PI data, prepared a summary about the activity and their participation in it, and went to an end-of-the-year project meeting. They used learning logs to document progress throughout the three stages.
Physicians became more aggressive in treating hypertensive patients, and clinic nurses were trained or retrained in hypertension care. The team also identified other problems, such as miscalibrated or malfunctioning sphygmomanometers (instruments that measure blood pressure), and so recalibrated equipment and purchased new devices. They also changed the blood pressure measuring procedure, developed a new protocol, and adjusted the electronic medical record to indicate a blood-pressure goal for patients.
In keeping with the new Accreditation Council for CME accreditation criteria, the activity identified barriers outside of the physicians' control, such as patients' failure to adhere to recommended medications or lifestyle changes. It also dealt with environmental issues, such as physicians not having enough time with patients or a follow-up tracking system, and doctors' own barriers, such as their lack of knowledge about when it is appropriate to deviate from guidelines, and by how much.
Case 3: Diabetes Care
The American College of Physicians focused on closing the gap in diabetes care through a practice-based, team-oriented, quality-improvement activity. The education was based on the Chronic Care Model, developed by the MacColl Institute for Healthcare, which identifies the essential elements of a healthcare system that encourages high-quality chronic disease care; and the Plan-Do-Study-Act Cycle, a model for quality improvement. The activity consisted of three training sessions, two conference calls, and a lot of checking up in between, said Cara Egan Reynolds, MHS, grants manager, clinical programs and quality of care department, with the Philadelphia-based ACP. Her team identified the need through the Institute of Medicine's 2001 report, Crossing the Quality Chasm: A New Health System for the 21st Century; a review of the evidence-based literature; government statistics on diabetes care; and quality measures from the Centers for Medicare & Medicaid Services, the AQA Alliance (formerly known as the Ambulatory Care Quality Alliance), and the American Board of Internal Medicine.
The objectives were to apply the fundamentals of the Chronic Care Model and systems change, meaning that participants were expected to examine and redefine the roles of their staff as well as the relationship of their practices to other groups, such as ophthalmologists, podiatrists, or community programs that help patients with diet and nutrition. They also had to use the latest evidence-based standards for measuring practice; build team skills (the study teams included docs, nurses, administrative staff, and other allied healthcare professionals); implement quality-improvement activities tailored to the practice; and improve overall care for diabetes patients. Required evidence-based performance measures included hemoglobin A1C (blood sugar) testing and levels, lipid testing and levels, and blood pressure measurements; there also were optional measures such as foot and retinal exams.
For Stage A, the 53 practice teams, each of which cared for 300 to 500 diabetic patients, had to review the latest standards of care, do a pre- and post-intervention patient satisfaction survey for 100 randomly selected patients, abstract baseline data from 25 randomly selected charts, and prepare a preliminary plan for practice change. To make sure they knew they were undertaking a nine-month process rather than a traditional CME activity, participants were asked to sign a letter of agreement, Reynolds said.
For Stage B, the teams learned how to make change actually happen, along with standards of care, practice measurement, and teamwork. They then implemented their practice change plans, targeted outcomes, and adjusted and remeasured those targeted outcomes.
For Stage C, they abstracted data from another 25 randomly selected patient charts, presented progress on their quality-improvement plans, analyzed successes and barriers, and learned self-management and how to get patients to play an active role. They also received a final report from the American College of Physicians on their overall practice performance.
Participants found it eye-opening to work as a team, learn improvement strategies, and review charts, said Reynolds. They also showed a strong desire to change their practice once they were given the tools to do so. “They were proud to be able to recognize problems in their own practices and those of others on their teams,” she said.
While the three presenters were pleased with the results of their forays into PI CME, they found the process does present challenges. For Reynolds, resources were key. “Because we were inviting 150 people to come to Philadelphia three times over nine months, it was cost- and labor-intensive.” To make it less so for future endeavors, ACP is exploring ways to use regional chapters and Web-based initiatives. For Seton, where participants didn't complete the activity, the obstacle “was the mind-set of physicians,” said Harrison. “Because it was 20 hours of credit, they thought it would take 20 hours to do, and that was too big of a commitment.”
For providers who are interested in creating PI initiatives, Reynolds suggested “keeping physicians and their teams from trying to reach too far, too fast. We did see some burnout from those who went back and tried to fix everything at once. Encourage them to start slow and simple and build from there.” Mejicano's advice is to dive in. “We have to immerse ourselves in the quality process,” he said. “Stop thinking about it so much and just do it.”
How PI Credit Works
Performance improvement activities, where individual doctors or groups of physicians track and evaluate a specific practice area, have been eligible for American Medical Association Physician's Recognition Award Category 1 credit since September 2004. At the Alliance for CME session on PI activities, moderator Sue Ann Capizzi, MBA, associate director, Division of Continuing Physician Professional Development with the American Medical Association in Chicago, explained how the system works.
There are three stages in the PI model, each of which is worth five credits, with another five awarded for completing all three stages, for a maximum of 20 credits. The three stages are:
Stage A, learning from practice performance assessment;
Stage B, learning from the application of PI to patient care; and
Stage C, learning from the evaluation of the PI effort.
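The credit arithmetic behind this model is simple enough to sketch; as an illustration only (the function is hypothetical, not an AMA tool), five credits per completed stage plus a five-credit bonus for completing all three yields the 20-credit maximum:

```python
# Illustrative sketch of the AMA PI credit rule described above:
# five credits per completed stage, plus five more for completing all three.

CREDITS_PER_STAGE = 5
COMPLETION_BONUS = 5

def pi_credits(stages_completed):
    """stages_completed: a set of stage letters, e.g. {"A", "B"}."""
    valid = stages_completed & {"A", "B", "C"}
    credits = CREDITS_PER_STAGE * len(valid)
    if valid == {"A", "B", "C"}:
        credits += COMPLETION_BONUS  # bonus only for finishing all three
    return credits

print(pi_credits({"A"}))            # 5
print(pi_credits({"A", "B"}))       # 10
print(pi_credits({"A", "B", "C"}))  # 20
```

That 20-credit maximum is the figure Harrison cited when explaining why some Seton physicians balked: they assumed 20 credits meant 20 hours of work.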
While Stage B may involve a traditional activity such as a workshop or conference, the intervention involves quality-improvement measures as well, she said.
The provider requirements for PI are the same as the core requirements for any PRA Category 1-level activity, she said. The provider also must use evidence-based performance measures and give physicians background information on the measures as well as clear instructions. In addition, the provider must validate participation by reviewing the PI documentation. “Performance improvement is designed to be a long-term process, not a single activity,” she explained. “It must be designated in advance; you can't get credit for it retroactively.”