Confidential feedback reporting is widely considered a precursor to and a foundation for performance improvement. However, to enable improvement, the physician responsible for and capable of change must receive, understand, and act on the information. Physicians need more than mere data.
In recent years, we’ve observed that organizers of various performance improvement activities often give insufficient attention to how performance data can be conveyed effectively to physicians and other clinicians. In some cases, those in the field make no distinction between confidential feedback reports for clinicians and public reports for consumers, yet the two audiences could hardly differ more in their informational needs. Moreover, during the selection of measures and the design of reports, inadequate attention is paid to the mechanisms by which the reports could change clinician behavior and thereby lead to improvement.
In addition, there is little acknowledgment that feedback reporting can be done either well or poorly. Too often, it is wrongly treated as a dichotomous variable: were performance data made available to clinicians, yes or no? It is inappropriately dismissed as a small detail to be checked off rather than recognized as a bridge that can lead to either performance improvement success or failure, depending on its underlying architecture.
It turns out that confidential feedback reporting for physicians is a well-studied topic; in fact it’s one of the most studied performance improvement interventions (Ivers, Grimshaw, et al., 2014). Yet some developers of feedback reports are surprisingly unaware that an evidence base of best practices exists to guide them. This is the case even for some well-resourced, large-scale interventions.
Part of the problem is that different applications, most notably the fields of quality improvement and the dissemination and implementation of clinical advances, use different terms for essentially the same reporting activity, which undermines the spread of learning. Relatedly, academic journals, the curators of the evidence, tend to use the term “audit and feedback,” which many working in the field do not recognize, further limiting uptake.
Last but certainly not least, we in the health services research community haven’t done a very good job of translating and communicating what we do know to those who can benefit most from the evidence, i.e., developers of feedback reports.
This applied resource seeks to address these shortcomings and inform developers of feedback reports about evidence-based strategies to consider when developing or refining a feedback reporting system. This guide is appropriate for many audiences, including medical groups, health plans, payers, professional societies, regional quality improvement collaboratives, and dissemination and implementation campaigns. In pulling this resource together, we explicitly seek to dismantle “language silos” and use terms readily understandable to all those working to improve the performance of our health care system.
A secondary aim is to foster discussion among report developers, the broader quality improvement and dissemination and implementation communities, researchers actively working on the topic, and funders of research and improvement initiatives. We hope they can work together to set research priorities that will collectively advance these reports as effective instruments to influence clinician behavior and improve care. Subsumed in this aim is an interest in actively discouraging studies that spend scarce research dollars retesting settled areas of inquiry.
Peggy McNamara, Dale Shaller, Jan De La Mare, and Noah Ivers
March 2016