Editor’s Note: A reminder that the views expressed are those of the author and not necessarily those of the Family Medicine Division or Residency Program.
One of the reasons I loved our family medicine residency program on the day I interviewed here was the very obvious focus on quality and evidence in patient care. This showed up in a bunch of ways. The first event of the day was the Department’s grand rounds. I quickly forgot how nervous I was about the interview day as one of the third-year residents presented on the latest guidelines regarding pediatric urinary tract infections. One of the leaders of the department raised her hand midway through and questioned one of the recommendations. The third-year resident coolly reviewed the data behind the recommendation she was questioning, smiled, reviewed the data behind the older recommendations that were falling out of favor, and moved on in his presentation without missing a beat.
Later in the morning we met with the program director, and I heard for the first time about the clinic quality meetings that our department held monthly. It seemed like a dream come true: a dedicated day each month where we could review the latest evidence behind what we were doing in our continuity clinic, see where we, as a clinical system, were falling short with regard to preventive care guidelines, and participate in projects designed to improve those measures.
In my afternoon conversations with the residents and faculty, I came to understand that the program’s focus on quality improvement extended beyond individual practice, beyond clinical systems, and into the residency itself. The faculty and program directors seemed dedicated to hearing feedback and incorporating it into changes to the curriculum and individual rotations. All of this together made the University of Utah an ideal place for me to train in family medicine.
After almost two and a half years in the program, I still think this is true. We continue to have excellent grand rounds every week that bring new, evidence-based perspectives to our practice. Additionally, I am asked to fill out surveys (on what seems like an hourly basis) about how to make our rotations and residency program better. However, something has put a dent in my enthusiasm for clinic quality improvement!
At our monthly clinic quality meetings, we always spend a chunk of time reviewing our patient satisfaction scores for the clinic as a whole. During my first year, our scores were as good as they could be: the chart displaying them was entirely green, representing above-average patient satisfaction compared to the rest of the University clinics. We congratulated ourselves and moved on. Then, about halfway through my second year, white boxes started appearing on the chart, signifying scores closer to the average of the other clinics in the system. As second year turned into third year, some of the white boxes turned red, as patients were now ranking us below average. People were obviously concerned about this change, and we’ve spent hours in our meetings trying to understand it. Multiply those hours by the 50 people in the room, and we’ve spent hundreds of person-hours trying to figure this out.
In the meantime, I’ve spent a lot more hours in the clinic: from half a day per week during intern year to five or six half days per week during third year. In seeing many more patients over this time, I’ve become less interested in patient satisfaction scores, for a number of reasons. I can easily make a patient happy by giving them antibiotics for an upper respiratory infection if that’s what they come into the clinic wanting; however, plenty of evidence says that is a horrible idea. The alternative is to spend an extra 5 minutes discussing why antibiotics are a bad idea and maybe still not leave the patient satisfied. Occasionally, that makes me late for my next patient, who could lower the score they give on their patient satisfaction survey because of the extra 5 minutes they had to wait. I can easily make a different patient happy by not adding another diabetes medication if they say they want to spend another six months attempting dietary changes to fix their blood sugar control, but odds are that if we do this, their diabetes will remain poorly controlled for another half year of their life.
There are countless examples of this conflict between patient preferences and best practices. So we should be completely unsurprised when a 52,000-person study shows that patients who give high health care satisfaction scores are more likely to be admitted to the hospital, have more spent on their health care and on their prescription medications, and are more likely to die, compared with patients who don’t give high satisfaction scores.
I understand that every health care system is going to collect more and more data about how satisfied patients are, as well as how closely providers adhere to evidence-based standards of care. I also understand that we as providers, clinics, and health care systems will continue to be evaluated, and compensated, based on these numbers. Although I welcome and expect to enjoy the challenge that comes with this process, I worry about how hard our job will be in my generation of medicine. Still, I feel that our residency program is preparing us well for these discussions and for practice.
Christopher Belknap, MD, is a third-year resident at the University of Utah Family Medicine Residency Program.