"What will have to happen at this session for you to go away satisfied?" asked facilitators Henry B. Slotnick, PhD, professor of neuroscience, University of North Dakota in Grand Forks; and Judith Ribble, PhD, director, CME, National Center for Genome Resources, Santa Fe, N.M. Now, there's an unusual question. But then, "Yes, But What Did They Learn By Attending?" was no ordinary seminar. It was clear from the outset that the organizers expected the participants to, well, participate.
"I want to learn how to maximize the number of [evaluation] respondents," answered one attendee. Another added, "How do we get people to buy into the data collection process from the outset?" We came up with six goals--what we wanted to learn about creating a successful evaluation process. Pointing to the list of goals hanging on the wall, Slotnick said, "We all share the responsibility for ensuring that these issues are addressed." And with that statement, Slotnick answered the second question: To get buy-in from participants in data collection, involve them before the evaluation is even designed.
The six goals were our needs assessment. "Evaluations don't necessarily have to be conducted at the end [of a program]," Slotnick pointed out. "Needs assessments are nothing more than evaluations that take place before instruction begins. If people participate in the needs assessment, they raise their expectations of what will happen. Why is this needs assessment hanging on the wall? So you know that we know what you want." To get our questions answered, explained Slotnick and Ribble, we were about to design and implement an evaluation of the very conference we were attending.
Getting the Answers

After an energetic discussion dissecting the pros and cons of question formats, we broke into mini-groups. Each group prepared several evaluation questions, which Slotnick and Ribble then compiled and printed for us. In what Slotnick called "a stroke of genius," one group came up with a question asking participants to evaluate our evaluation.
The facilitators gave us a quite manageable assignment: We each had to get two attendees to complete the form. When we returned the next day for part two, we reported our experiences, discovering that we had approached 44 people, and 40 had completed questionnaires--a 91 percent response rate.
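The response-rate arithmetic reported above can be sketched in a few lines of Python (the counts are the article's; everything else is illustrative):

```python
# Response-rate arithmetic from the seminar exercise:
# 44 attendees approached, 40 questionnaires completed.
approached = 44
completed = 40

response_rate = completed / approached * 100
print(f"Response rate: {response_rate:.0f} percent")  # -> 91 percent
```

The same two counts and one division generalize to any survey: completed returns over total contacts, times 100.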
Why was our response rate so high? Just as Slotnick and Ribble had gained our buy-in by involving us in the process of designing the evaluation, we had gained the buy-in of respondents by approaching them personally and letting them know their opinions mattered. In fact, said Slotnick, surveying a random sample of attendees can be more effective than surveying everyone. By handing out evaluations to every attendee, Slotnick said, you tend to get responses only "from those who are horribly conscientious" or from those who found something so terrific or so awful that they really want to say something. "You lose those in the middle," he noted. As long as the sample is randomly selected and the response rate is high, Slotnick assured us, the data is considered generalizable to the rest of the population.
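Slotnick's point about surveying a random sample rather than every attendee can be sketched as follows. This is a minimal illustration, not anything from the seminar: the roster of 200 attendees and the sample size of 44 are assumptions chosen for the example.

```python
import random

# Hypothetical attendee roster (assumed size; not from the article).
attendees = [f"attendee_{i}" for i in range(1, 201)]

# Draw a simple random sample instead of surveying everyone.
# random.sample selects without replacement, so no one is picked twice.
random.seed(0)  # fixed seed so this sketch is reproducible
sample = random.sample(attendees, k=44)

print(f"Approaching {len(sample)} of {len(attendees)} attendees")
```

Because every attendee has an equal chance of selection, a high response rate within this sample supports generalizing the results to the full population, which is the property Slotnick was relying on.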
Commitment to Change

In analyzing our results, we realized that the survey questions elicited potential outcomes. Seventy-five percent of our respondents identified a change they planned to make at their institutions as a result of the conference, and 90 percent of them said we could contact them in three months to see how they were progressing in implementing that change.
At the session's close, Slotnick and Ribble pointed to the list of goals that we had developed the day before. Had we met our objectives? For the most part, participants felt they had learned practical skills and some shared their commitment to applying these skills back at work.
Of course, not everyone was satisfied. One participant had hoped to learn more about measuring outcomes; another wanted more of a theoretical base. One of Slotnick and Ribble's goals was to teach the value of a participatory process by having us experience it. "To just stand and talk is a terrible way to teach adult learners," Slotnick later said.
Evaluation Study Authors

C.J. "Katie" Borkowski, MA, NCC, director of education, American Gastroenterological Association, Bethesda, Md.; Margaret M. Cassidy, program coordinator, CME, Duke University Medical Center, Durham, N.C.; Raymond Chevalier, pharmacist, Vigilance Sante, Repentigny, Quebec; Diana J. Durham, PhD, director of education and accreditation, Audio-Digest Foundation, Glendale, Calif.; W. David Gemmill, MD, MS, assistant professor, pediatrics, director, CME, Medical College of Ohio, Toledo, Ohio; Debbie Granger, CME coordinator, College of Medicine at Peoria, University of Illinois at Chicago, Peoria, Ill.; Catriona Hill, division of CME, School of Graduate Medical Education, Seton Hall University, South Orange, N.J.; John K. Hurley, MD, Pediatric Nephrology, Kaiser Permanente, Gaithersburg, Md.; Bill Methany, PhD, director, Medical Education, Brown University/Women & Infants Hospital, Providence, R.I.; Barbara D. Mierzwa, assistant dean and director, APFME Office of CME, School of Medicine and Biomedical Sciences, University at Buffalo, N.Y.; Ronald T. Murray, EdD, Affiliated Hospitals Coordinator, University of Virginia Health Sciences Center, Charlottesville; Chris Owner, PhD, associate director, education, American College of Radiology, Reston, Va.; Vickie Phoenix, CME administrator, travel consultant, Ithaca (N.Y.) Center for Post Graduate Medical Education; Kathleen M. Roman, assistant vice president, director, risk management, The Medical Protective Company, Fort Wayne, Ind.; Beverley D. Rowley, PhD, director of medical education, Maricopa Integrated Health System, Phoenix; Robert Terashima, MD, FAAP, South Jordan, Utah; Sylvie Trepanier, BScN, MSc, regional manager, CME, Merck Frosst, Kirkland, Quebec; Richard A. Ward, MD, CCFP, FCFP, MainPro-C coordinator, CME, University of Calgary, Alberta; Frederic S. Wilson, customer marketing operation, Health Care Customer Organization, The Procter & Gamble Company, Cincinnati, Ohio.