A recent systematic review of randomized clinical trials examining the effect of feedback reporting compared with usual care found an overall positive effect on measured outcomes. However, the impact varied considerably across the included studies, ranging from no effect to relatively large effects (Ivers, et al., 2012). We do not know enough about how, when, and why some feedback reports achieve a greater impact on improvement than others. We therefore have much to learn collectively about how to optimize feedback reporting (Ivers, Grimshaw, et al., 2014).
To help ensure that future research on feedback reporting adds to our understanding of what works and why, its focus needs to pivot from comparing feedback reporting against usual care to explicitly evaluating different features and types of feedback reporting. We propose three priorities to consider when planning or funding research to improve the design and delivery of feedback reporting; they derive in part from a panel discussion at the 2015 AHRQ Research Conference (De La Mare, et al., 2015).
- Section 4-1: Understanding key attributes of highly successful feedback reporting systems
- Section 4-2: Understanding the implementation contexts that promote highly successful feedback reporting systems
- Section 4-3: Understanding factors that affect both the design and context of feedback reporting systems
4-1. Understanding key attributes of highly successful feedback reporting systems
For example:
- Which measures are relatively sensitive to changes in physician behavior in response to feedback and therefore might be prioritized over other measures (Van Der Veer, et al., 2010)?
- Is it more effective to focus on performance related to a narrow set of measures that allow more targeted reflection or to capitalize on a physician’s attention and give feedback on a much broader set of measures?
- Does physician engagement vary by how the measures are framed? By the type of comparator or benchmark used? By whether summary or composite measures are included? By whether outcome measures or process measures are used? By whether a specific feedback report is linked to board certification requirements, such as the Maintenance of Certification requirements of one or more of the 24 member boards of the American Board of Medical Specialties?
- What is the optimal level of aggregation at which feedback should be delivered? For what type of measures is it more effective to deliver feedback to an individual physician versus to a team?
- What is the best way to include goals and action plans with feedback reports, and how specific should they be?
- What is the ideal way to deliver feedback to physicians? Although face-to-face feedback is often considered ideal, competing time demands on physicians may make this impractical, so evaluating the impact of innovative and nontraditional ways to provide feedback would be useful.
- How can the content, design, and delivery of feedback be personalized to the recipient physician, and to what extent does such tailoring affect outcomes (Landis-Lewis, et al., 2015)?
4-2. Understanding the implementation contexts that promote highly successful feedback reporting systems
For example:
- How do physician characteristics influence reporting effectiveness? Are physicians more likely to respond favorably to feedback reports than other clinicians? Are there differences among specialties? To what extent is effectiveness a function of the time demands on physicians (Van Der Veer, et al., 2010)?
- How does the setting influence effectiveness? While feedback reporting is a highly adaptable intervention, are some settings, such as medical practices, more amenable than others, such as long-term care facilities?
- What is the impact of co-interventions, such as financial incentives, clinical decision support tools, and practice facilitation (Teleki, et al., 2006)?
- What are the key barriers inhibiting the use of reports by individual physicians and how might they be overcome (Teleki, et al., 2006)? In particular, how often do individual physicians receive multiple provider feedback reports from various developers (e.g., health plans, regional health care improvement collaboratives, professional societies) and to what extent do the scores conflict?
4-3. Understanding factors that affect both the design and context of feedback reporting systems
For example:
- What is the benefit of feedback reporting in terms of improved outcomes, accrued savings, and other goals? Along those lines, what is the cost of developing and implementing feedback reporting systems, and how can these costs be minimized? How does the emergence of the electronic health record enable efficient collection of more complete and nuanced data to support feedback reporting?
- How does the cost/benefit of provider feedback reporting—its business case—compare with that of other performance improvement interventions?
- What is the best way to study these three sets of priorities? Expert methodologists recommend the following approaches:
- Design head-to-head trials to isolate and examine alternative reporting features and approaches (Ivers, Grimshaw, et al., 2014). This may involve adapting approaches commonly used in the marketing field, known as A/B testing, with the goal of determining which design features and contexts lead to the greatest uptake and use of the data for practice change.
- Identify implementation failures (Grimshaw, 2015). As much can be learned from implementation failures as from successes, yet null findings on any topic are less likely to be published and therefore less likely to be included in systematic reviews. To better enable the identification of failures, efforts to more systematically identify and centrally log findings of “no effect” need support (e.g., directives) from research funders and journal publishers.
- Conduct outlier research, i.e., focus on implementation cases associated with best and worst outcomes (Hysong, et al., 2007).
- Sufficiently and consistently document each intervention and its implementation context in order to increase the reproducibility of the intervention, support its inclusion in meta-analyses, and better enable an analysis of the generalizability of the results (Van Der Veer, et al., 2010). Peer-reviewed journals could make this documentation available as online appendices.
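The head-to-head A/B comparison recommended above can be illustrated with a minimal analysis sketch. Here, two report designs are compared on the proportion of physicians whose measured performance improved, using a standard two-proportion z-test; the function name and all counts are hypothetical, purely for illustration:

```python
import math

def two_proportion_ztest(improved_a, n_a, improved_b, n_b):
    """Two-sided z-test comparing improvement rates under two report designs.

    improved_a/n_a: physicians improved / total receiving report design A
    improved_b/n_b: physicians improved / total receiving report design B
    """
    p_a, p_b = improved_a / n_a, improved_b / n_b
    pooled = (improved_a + improved_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical trial: design A (narrow measure set) vs. design B (broad set)
z, p = two_proportion_ztest(improved_a=68, n_a=120, improved_b=52, n_b=120)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In a real evaluation, randomization at the practice or physician level and adjustment for clustering would be needed; this sketch shows only the basic comparison logic behind testing one report feature against another.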
A secondary aim of this guide is to foster discussion about research priorities among report developers, the broader quality improvement and dissemination and implementation communities, researchers actively working on the topic, and funders of research and improvement initiatives. We hope they will find ways to collectively advance these reports as effective instruments for influencing physician behavior and improving care.