Ann Lichti, whose voice of practicality and reason in her “CME: In Practice” column is one of the highlights of this magazine for me, talks in this issue about something I've often pondered: Where do you draw the line between evidence-based medicine (EBM) and physician experience when it comes to making treatment decisions? Or, as Ann puts it, “Is there a danger that the emphasis on so fully evaluating clinicians' management of their patients against evidence-based medicine and treatment guidelines — which the [continuing medical education] community has quickly embraced — will move us further away from the practice of the art of medicine?” Especially, I would add, when the guidelines and best practices continue to evolve, as does the science of medicine itself.

Take the latest skirmish over mammography guidelines. For more than two decades, the American Cancer Society has recommended that women begin having annual mammograms at 40 and that they be taught to do monthly self-exams. Then this fall the U.S. Preventive Services Task Force came out with a different conclusion: that starting routine screening for breast cancer at age 40 actually causes more harm than good in terms of false alarms and unneeded biopsies. Instead, it recommended biennial mammograms starting at age 50 — and forgoing teaching about self-exams altogether. Physicians, and their patients, were left in a quandary: Whose guidelines should they follow? And what harm can come simply from the confusion these conflicting guidelines caused? I recently read about a woman diagnosed in her 40s with breast cancer who decided not to treat it until she hit 50. Her reasoning: It wouldn't have been detected until she was 50 under the new guidelines, so it wouldn't need treatment until then. That's just crazy, but confusion can lead to crazy very quickly.

Then in January, to much less fanfare, the Society of Breast Imaging and the American College of Radiology announced that women should begin getting routine mammograms at age 40 after all. (ACS stood by its initial recommendations throughout the scuffle.) In the end, the consensus was that physicians and patients should decide together what is best for the individual patient, given her history, personal situation, and medical background.

A 2004 article in The New Yorker called “The Bell Curve,” by Atul Gawande, a physician at Brigham and Women's Hospital and the Dana-Farber Cancer Institute, also explored where to draw the line between the art and science of medicine. (Search newyorker.com for “Atul Gawande bell curve” for the whole article, if you haven't read it.) This bit really jumped out at me: “The buzzword for clinicians these days is ‘evidence-based practice’ — good doctors are supposed to follow research findings rather than their own intuition or ad-hoc experimentation. Yet [Warren Warwick, director of Fairview-University Children's Hospital's cystic fibrosis center,] is almost contemptuous of established findings. National clinical guidelines for care are, he says, ‘a record of the past, and little more — they should have an expiration date.’”

CME providers, too, are being tasked with teaching EBM and the latest guidelines, which may be just what's needed to bring those on the low-to-mid range of the practice bell curve up to a higher level. But how do you judge whether the guidelines are in fact best practice for every patient? And CME providers are now also being tasked with measuring how closely healthcare providers apply what they learn. Does this mean you should conclude that physicians who use experience, intuition, patient-interaction skills, and other intangibles fall short because they did not apply EBM, even when deviating from those guidelines got their patients better outcomes?

If you know the answers, please let me know. What I do know is that these questions are of vital importance to physicians, to those who educate them, and to patients — which is to say, all of us.