Just because physician feedback reports can work to improve performance does not mean they always do. The purpose of this section is to identify the factors empirically associated with success—referred to as “best practices”—with the aim of increasing the effectiveness of future feedback reports.
The following sections review best practices culled from the body of literature on physician feedback reporting, in particular the most recent Cochrane review of randomized trials (Ivers, et al., 2012). These evidence-based practices are clustered into four sets of decision points accompanying the development of a clinician feedback reporting system:
- Section 2-1: Identifying a clinical focus
- Section 2-2: Ensuring underlying data support aims of report
- Section 2-3: Optimizing user functionality
- Section 2-4: Delivering to promote impact
Because the evidence base is still developing, each of these four evidence summaries is followed by potential best practices that derive from weaker or indirect evidence, but which some practitioners have found to be useful. The summaries also offer tips from experienced developers, recipients, and observers of feedback reporting. The two categories of guidance, one based on a rigorous review and the other on suggestions, are clearly labeled and distinguished.
2-1. Identifying a clinical focus
There are a variety of clinical areas from which to select a focus for improvement. Table 1 below provides a starting point for eliciting or organizing potential foci of clinician feedback reports. It incorporates both the Donabedian and Institute of Medicine (IOM) frameworks and covers a breadth of potential clinical foci and measures.
Table 1. Matrix of quality measure typologies, illustrated with measure examples
IOM domains | Donabedian: Structure measures | Donabedian: Process measures | Donabedian: Outcome measures |
---|---|---|---|
Effective care measures | Example: Cardiac nurse staffing, nursing skill mix (RN/total) | Example: Use of angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) for patients with systolic heart failure | Example: 30-day readmissions (or mortality) for heart failure |
Patient-centered care measures | Example: Use of survey data to improve patient-centered care | Example: Patient response to the question: Did the nurses treat you with courtesy and respect? | Example: Patient overall rating of care |
Timely care measures | Example: Physician organization policy on scheduling urgent appointments | Example: Received beta blocker at discharge and for 6 months after acute myocardial infarction | Example: Potentially avoidable hospitalizations for angina (without procedure) |
Safe care measures | Example: Computerized physician order entry with medication error detection | Example: Use of prophylaxis for venous thromboembolism in appropriate patients | Example: Postoperative deep vein thrombosis or pulmonary embolism |
Efficient care measures | Example: Availability of rapid antigen testing for sore throat | Example: Inappropriate use of antibiotics for sore throat | Example: Dollars per episode of sore throat |
Equitable care measures | Example: Availability of adequate interpreting services | Example: Use of interpreting services when appropriate | Example: Disparity in any other outcome according to primary language |
Source: Romano, et al., 2010.
Reports can include performance metrics for a broad menu of clinical conditions (e.g., top 10 most frequent hospitalizations) or specialize in a narrow clinical area (e.g., diabetes care).
Physician feedback reports are usually more effective when:
- The targeted clinical measure or suite of measures is perceived as important by the physician (Hayes and Ballard, 1995). This requires not only that the measures be relevant to a particular physician’s practice and caseload, but also that sufficient evidence and expert consensus exist to support the underlying clinical protocol (e.g., use of an ACE inhibitor or ARB for patients with systolic heart failure) from which the measure is derived (Landon, et al., 2003).
- The targeted clinical behavior has a low level of baseline performance among physicians, representing an opportunity for significant improvement. The greater the extent to which measured performance is not aligned with best clinical practice, the greater the likelihood that individual physicians will be motivated to set aside their previous (erroneous) self-assessment in favor of that reflected in a feedback report (Ivers, et al., 2012).
Additional factors that may contribute to the effectiveness of physician feedback reports but are based on weaker or less direct evidence, as well as tips from experienced users, include:
- The targeted clinical measure can be influenced by changes in physician behavior. That is, the physician has control over activities that will lead to a better “score” on the report. For example, providing feedback about overall hospital performance may not be relevant or useful to individual physicians who work only in one unit of the hospital (Brehaut, et al., 2016).
- The targeted clinical performance requires relatively simple behavior changes (e.g., medication prescribing or test ordering), in contrast to more complex behavior changes (e.g., patient-centered chronic disease management) (Ivers, et al., 2012).
- The targeted clinical focus is not an exceedingly rare event, such as a foreign object unintentionally left in the wound after surgery (Ivers, Sales, et al., 2014).
In practice, selection of the clinical focus of a report is inextricably linked to the availability and quality of the underlying data needed to support related measurement, which is discussed in the next section.
2-2. Ensuring underlying data support aims of report
Data on physician performance can be obtained from medical records, registries, administrative or claims databases, observations, or patient surveys. The definitions, advantages, and disadvantages of using alternative types of data are discussed in more detail in a separate AHRQ decision guide, Selecting Quality and Resource Use Measures: A Decision Guide for Community Quality Collaboratives (Romano, et al., 2010).
Physician feedback reports are usually more effective when:
- The underlying data are valid (Ivers, Sales, et al., 2014) and are viewed as credible by recipient physicians (Bradley, et al., 2004), respected peers, and clinical leaders. The following data attributes (Teleki, et al., 2006; Landon, et al., 2003; Shaller and Kanouse, 2012), some of which contribute to data validity and others to credibility, are particularly deserving of the attention of report developers:
- A sample size that is adequate to produce reliable estimates of performance (a reliability sketch appears later in this section);
- Reasonable procedures for attribution of clinical responsibility;
- Transparent methods and scoring processesii;
- Case mix adjustment procedures that mitigate the effects of patient factors that are not under the physician’s control;
- Explicit discussion of data/measure limitations;
- Accuracy of data abstraction and analysis; and
- Trustworthiness and objectivity of the developer.
- The data are timely and updated frequently (Ivers, et al., 2014; Bradley, et al., 2004). Feedback reports can be delivered periodically or can be designed for ongoing, real-time access (e.g., “dashboards”) if they are built into electronic health information systems. Monthly updates generally are viewed as sufficiently frequent (Hysong, et al., 2006), but the “right” frequency depends on a number of factors. Short reporting intervals may contribute to report fatigue (Brehaut, et al., 2016). Long reporting intervals may fuel perceptions that the data are “stale” and may limit physicians’ opportunity to observe whether a practice change they instituted had an impact on performance.
In the case of small practices, less frequent reporting cycles (longer intervals) are one of several strategies to ensure adequate data for reliable assessments (Landon and Normand, 2008). Alternatively, in such cases, frequent updates can incorporate a “rolling average” that combines data from preceding periods with data from the most recent period to increase the number of observations; complex versions give greater weight to more recent performance data (Shaller and Kanouse, 2014; Friedberg and Damberg, 2011).
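To make the “rolling average” approach concrete, the minimal Python sketch below pools several periods’ numerators and denominators, weighting recent periods more heavily. The monthly counts, weights, and function name are hypothetical illustrations, not values or methods from the cited sources.

```python
# Minimal sketch: a weighted "rolling average" performance rate that pools
# several reporting periods, weighting newer periods more heavily.

def rolling_rate(periods, weights):
    """periods: (numerator, denominator) pairs, oldest first.
    weights: one weight per period; larger = more influence."""
    weighted_num = sum(w * n for (n, _), w in zip(periods, weights))
    weighted_den = sum(w * d for (_, d), w in zip(periods, weights))
    return weighted_num / weighted_den

# Example: three months of data for a small practice, with the most
# recent month weighted three times as heavily as the oldest.
monthly = [(14, 20), (16, 22), (13, 21)]  # (patients meeting measure, eligible patients)
print(f"Rolling rate: {rolling_rate(monthly, [1, 2, 3]):.1%}")
```

Weighting denominators as well as numerators keeps the result a true rate; setting all weights to 1 reduces this to a simple pooled average across periods.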
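Relatedly, whether a sample is “adequate to produce reliable estimates” is often judged with a signal-to-noise reliability statistic: the share of observed variation in scores that reflects true differences between physicians rather than chance. The sketch below illustrates the idea; the variance components are assumed values for illustration, not figures from the sources reviewed here.

```python
# Minimal sketch: signal-to-noise reliability of a physician's score.
# Reliability rises with panel size n; variance values are hypothetical.

def reliability(between_var, within_var, n):
    """Share of observed score variation attributable to true
    between-physician differences, for a panel of n patients."""
    return between_var / (between_var + within_var / n)

between, within = 0.004, 0.21  # assumed variance components
for n in (10, 25, 50, 100):
    print(f"n={n:>3}: reliability = {reliability(between, within, n):.2f}")
```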
Additional factors that may contribute to the effectiveness of physician feedback reports but are based on weaker or less direct evidence, as well as tips from experienced users, include:
- Data include patient-level identifiers, which allow physicians to access detailed information about their performance at the patient level. This supports physicians’ trust in the data, provided that such drill downs reveal accurate representations of their performance (Shaller and Kanouse, 2012). See also related discussion in Section 3-1.
- Data can be easily corrected by physicians. A correction feature or protocol that allows physicians to note where data appear to be in error not only provides a feedback loop for improving data quality, but also may facilitate physicians’ trust in the data (Shaller and Kanouse, 2012). See also related discussion in Section 3-1.
ii In 2006 the Ambulatory Quality Alliance developed a set of principles to guide the reporting of performance information to clinicians and hospitals; a major focus of these principles is on the issue of methods transparency (Shaller and Kanouse, 2012). Guidelines published in 2012 by the American Medical Association also emphasize the need for methods transparency as well as greater industrywide standardization of reporting formats and the need for physicians to have access to patient-level data (American Medical Association, 2012).
2-3. Optimizing user functionality
Feedback reports should strive to be “physician friendly.” Physicians’ needs should drive each of the design decisions that go into building a report.
Physician feedback reports are usually more effective when:
- Actual performance is displayed alongside a desirable comparator. To make sense of their performance scores, physicians need some way to answer the question, “compared to what?”
The terms “comparator,” “benchmark,” and “performance goal” are often used interchangeably, but they are different. “Comparator” is an umbrella term. A frequently used comparator is the national, regional, or practice average. Another type of comparator is a benchmark, which connotes a level of performance that is desirable. A national, regional, or practice average also could be a benchmark, if the report developer views the average as desirable. Usually, however, a benchmark is pegged to performance that is above average, although this depends on the metric and how well the average clinician is performing. As discussed later in this section, a benchmark becomes a performance goal when it is bounded by a time period for improvement.
The Achievable Benchmark of Care™ uses performance data to identify the levels achieved by top-performing clinicians among those measured. Recipients of reports that include an Achievable Benchmark of Care™ experience greater improvement than recipients of reports that use average performance as the comparator (Kiefe, et al., 2001).
Benchmarks, such as the 90th percentile, are derived at least in part from an analysis of performance data and may or may not also be informed by expert consensus on the appropriate level of performance. Regardless, the process used to develop a benchmark should be transparently described to physicians. Reports also should describe any analyses examining the extent to which a given recipient’s patients are sicker or otherwise different, to address possible concerns about the fairness of the benchmark. For example, stratification of underlying data by type of provider, such as safety net, rural/urban, or multi-specialty, may help a clinician appreciate the extent to which his or her patients differ. It may be appropriate to develop separate benchmarks for clinicians with similar patient profiles.
It is uncertain whether displaying more than one type of benchmark is effective. What might be gained by enabling physicians to select the benchmark that is most meaningful to them could be offset by a physician’s bias in favor of the benchmark that portrays his or her performance most favorably. Use of multiple types of benchmarks for each performance measure also might introduce confusion, as might inconsistent use of benchmarks across performance measures (e.g., using one type of benchmark for measure A and another type for measure B).
Practically speaking, the availability of data needed to construct a particular benchmark, or the lack of it, may give one benchmark type an edge over another. Achievable Benchmarks of Care™ are not always feasible to construct. Other benchmarks, such as the 90th percentile for all safety net hospitals in a given region, require external performance information that may or may not be accessible to the report developer. Meanwhile, a formal process to elicit expert consensus on a target performance level may be time consuming and costly. (A minimal sketch of deriving a percentile-based benchmark follows Figure 7.)
- Goals are set for the target performance or behavior (Ivers, et al., 2012). Ideal goals are specific, measurable, achievable, relevant, and time bound (Ivers, Sales, et al., 2014). They can be linked to a threshold measure of performance (e.g., 80 percent of patients to be screened) or can be expressed as a level of improvement (e.g., screening rates to improve by 10 percent).
Goals may be inherent in the selection of certain comparators (e.g., an Achievable Benchmark of Care™ bounded by a time increment) but not in others (e.g., the national average); for the latter, an explicit goal must be added. Evidence is mixed about whether physicians receiving feedback reports should set their own goals, select from a menu of possible goals, or simply be given a goal embedded in the report.
- Reports are accompanied by a specific improvement plan that facilitates goal achievement (Ivers, Grimshaw, et al., 2014). If physicians are told that their scores are low but are not told how they can attempt to improve them, frustration may displace motivation to improve. An improvement plan with specific steps is needed to make the performance data actionable. In the case of low vaccination rates, for example, an improvement plan might include a list of the physician’s patients who did not get vaccinated to facilitate followup. Or in the case of low CAHPSiii communication scores, an improvement plan might include a review of elements of effective communication (Teleki, et al., 2006).
- The report format facilitates correct interpretation and highlights important patterns in performance (Vaiana and McGlynn, 2002). For example, does the report clearly communicate instances in which a physician’s performance is significantly different from the performance goal, or show whether and how performance is changing over time? Figure 7 is an example of a graphic display that facilitates a physician’s ability to see how his or her performance compares with the performance goal and assess whether he or she is improving over time.
iii CAHPS = Consumer Assessment of Healthcare Providers and Systems.
Figure 7: Sample graphic comparing actual performance with the goal, and monthly trending
Source: Shaller and Kanouse, 2014.
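To illustrate one comparator discussed above, the sketch below derives a 90th-percentile benchmark from a set of peer performance rates. This is a simplified percentile calculation, not the pared-mean Achievable Benchmark of Care™ method, and the peer rates are hypothetical; in practice, peers might first be stratified by provider type, as suggested above.

```python
# Minimal sketch: a 90th-percentile benchmark computed from peer rates.
# Peer rates are hypothetical; real benchmarks need adequate peer data.

def percentile_benchmark(scores, pct=90):
    """Performance level at the given percentile of peers, with linear
    interpolation between order statistics."""
    ranked = sorted(scores)
    rank = (pct / 100) * (len(ranked) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(ranked) - 1)
    return ranked[lo] + (rank - lo) * (ranked[hi] - ranked[lo])

peer_rates = [0.62, 0.71, 0.74, 0.78, 0.81, 0.83, 0.85, 0.88, 0.90, 0.93]
print(f"90th-percentile benchmark: {percentile_benchmark(peer_rates):.0%}")
```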
Many of the formatting tips gleaned from literature on best practices in designing public reports for consumers also apply to physician feedback reports. In both cases, the goal is to design a report that helps the user correctly identify and interpret key messages with minimal cognitive effort. Information about the measurement period (e.g., data year/month) needs to be explicit and visible. Details on measure specifications and scoring methodologies need to be easily accessible to those interested. On a Web display, use of a hover function can make detailed information readily available without adding clutter.
Other effective formatting conventions include linking data graphics to summary text messages to facilitate interpretation (Brehaut, et al., 2016) and using highlighting, boldfacing, text boxes, or sidebars to draw attention to key messages (McCormack, et al., 2001). To minimize confusion, formatting conventions are best used consistently within a particular report as well as from one report to the next.
Additional factors that may contribute to the effectiveness of physician feedback reports but are based on weaker or less direct evidence, as well as tips from experienced users, include:
- The organization of the report is logical and clear, and the landing page (for a Web-based report) or the table of contents (for a hard copy report) clearly explains how the report is organized and where to find specific elements (Vaiana and McGlynn, 2002).
- The report allows physicians to drill down to patient-level data, which enables them to identify specific patients whose care falls short of the target performance or who are overdue for specific services or followup. The report also can generate lists of such patients, a feature that supports care management as well as performance improvement (Shaller and Kanouse, 2012). Section 3-1 has a more detailed discussion of access to patient-level data. (A minimal sketch of such a drill down follows this list.)
- The report includes composite measures, which reduce the cognitive load of processing the report’s contents and convey a summary of performance at a glance, alongside their component measures, which provide more specific, actionable information. For example, a report for primary care clinicians might pair an overall score for the quality of diabetes care with component measures such as the percentage of patients with an A1c value below 8 percent; a hospital report might pair an overall safety score with component measures such as the pressure ulcer rate. (A sketch of a simple composite score also follows this list.)
- The report gives physicians flexibility to tailor output to their needs. Reports could be structured to enable physicians, for example, to select a subset of measures; review performance at the individual, team, or organization level; easily repackage information for a slide presentation; or convey information via email to themselves or others.
- Physicians are involved in report design. The greater their involvement, the greater the likelihood that their needs will be reflected in the design, the stronger their sense of ownership, and the more likely they are to use the report. See also related discussion in Sections 2-3 and 3-2. In addition, active engagement by physicians during the development stages helps premarket the report and set the stage for behavior change (Shaller and Kanouse, 2012).
- The report links to physicians’ certification requirements (e.g., the American Board of Medical Specialties’ Maintenance of Certification program). If physicians can satisfy their professional obligations by actively participating in a feedback reporting system, they may be more inclined to engage (Granatir contribution to Shaller and Kanouse, 2014).
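The drill-down idea can be shown in a few lines: the same patient-level records that produce a physician’s score also yield the actionable followup list. The record layout, names, and vaccination measure below are hypothetical.

```python
# Minimal sketch: drilling down from a feedback score to a patient list.

patients = [
    {"id": "A101", "physician": "Dr. Rao", "vaccinated": True},
    {"id": "A102", "physician": "Dr. Rao", "vaccinated": False},
    {"id": "A103", "physician": "Dr. Rao", "vaccinated": False},
]

panel = [p for p in patients if p["physician"] == "Dr. Rao"]
rate = sum(p["vaccinated"] for p in panel) / len(panel)
needs_followup = [p["id"] for p in panel if not p["vaccinated"]]

print(f"Vaccination rate: {rate:.0%}")                 # the feedback score
print(f"Patients needing followup: {needs_followup}")  # the actionable list
```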
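A simple composite, in turn, can be sketched as an equal-weighted average of component pass rates; real composites often weight components differently (e.g., by the number of eligible patients). The component names and rates below are illustrative.

```python
# Minimal sketch: an equal-weighted composite of component pass rates.

components = {
    "A1c < 8%": 0.71,
    "Blood pressure < 140/90": 0.64,
    "Annual eye exam": 0.55,
    "Statin prescribed": 0.80,
}

composite = sum(components.values()) / len(components)
print(f"Composite diabetes score: {composite:.0%}")  # at-a-glance summary
for name, rate in components.items():
    print(f"  {name}: {rate:.0%}")                   # actionable detail
```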
2-4. Delivering to promote impact
Delivery of feedback reports to physicians may involve multistage or single-stage communication, a decision best driven by the particular context. Some contexts call for a multistage delivery strategy. These might include:
- A complex clinical behavior change is needed to improve performance.
- The clinical best practice is based on new evidence.
- The physician has never participated in a feedback reporting system.
- The changes needed to improve performance rest at the organizational level.
In a multistage strategy, for example, stage one might be a group presentation to introduce the feedback report, review improvement resources, and provide opportunities for discussion. Stage two might involve distributing individual feedback reports via hard copies or an online portal (Shaller and Kanouse, 2012). Stage three might feature a followup one-on-one meeting or group discussion to help interpret the report and discuss specific opportunities for performance improvement.
Other contexts (e.g., the required clinical behavior change is simple and straightforward or the group of physicians has a long history using feedback reports) might require only the distribution of individual feedback reports.
In addition to multistage strategies, some experts suggest the use of multifaceted interventions. For example, feedback reports may be more successful when paired with a reinforcing companion strategy, such as pay for performance or electronic reminders. In other cases, feedback reports may be effective on their own. Decisions about whether and how to stage delivery or pair feedback reports with other strategies depend in part on the potential marginal impact of the proposed enhancement weighed against the added costs, especially if the desire is to deliver these strategies at scale (Ivers, Sales, et al., 2014).
Delivery of physician feedback reports is usually more effective when:
- Mode of delivery includes a verbal review by a trusted source, such as a supervisor or respected senior colleague. Conversely, feedback is less effective when it comes from an unknown source, such as a researcher, a payer, or a regulatory body. In-person delivery allows the discussion to be tailored to the needs of the receiving physician (Ivers, Sales, et al., 2014).
- Feedback is routinized and ongoing. Routine feedback conveys a sense of importance and facilitates a cycle of learning in which the physician can assess whether changes made to his or her practice since a prior report had a positive impact on performance. Routine feedback also may help focus attention on the measure and support sustained improvements in practice (Brehaut, et al., 2016). In contrast, one-time feedback reports are more easily dismissed by physicians.
- Feedback is anchored in an overarching quality improvement structure (Foy, et al., 2005; Van Der Veer, et al., 2010). Whether the structure is based on “plan, do, study, act” (PDSA), “Six Sigma,” or another quality improvement model, it is important for feedback reporting to have a home within this superstructure rather than competing with it for physicians’ attention. Doing so will support its credibility among physicians, increase the likelihood of sufficient funding with dedicated resources, and mitigate unnecessary duplication of measurement and reporting efforts.
Additional factors that may contribute to the effectiveness of physician feedback reports but are based on weaker or less direct evidence, as well as tips from experienced users, include:
- Adequate context is given so that physicians are poised to appreciate the purpose of the report and why they should be interested. It is important to be direct and straightforward, for example, about whether reported metrics will be tied to incentive payments or whether the primary aim is to cut costs or improve quality (Vaiana and McGlynn, 2002). See also the related discussion in Appendix 2 of messaging to consider when delivering a feedback report to clinicians.
- The report is delivered in a supportive, nonjudgmental way (Hysong, et al., 2006; Hayes and Ballard, 1995).