The distinctions among the following three strategies for testing the accuracy of feedback reports and the extent to which they meet physicians’ needs are, to some extent, artificial; the larger point is that testing reports is not a one-time exercise. Rather, testing can and should be performed throughout the reporting cycle, from development through implementation.
- Section 3-1: Systems that enable physicians to correct patient-level data
- Section 3-2: Prerelease testing: giving physicians the opportunity to make feedback reports work for them
- Section 3-3: Postrelease monitoring and evaluation
“When in doubt, test. When not in doubt, test more.” —David Kanouse, RAND
3-1. Systems that enable physicians to correct patient-level data
Some online feedback reports allow physicians to drill down to patient-level data and note where data appear to be in error and need to be corrected. In addition to providing a feedback loop for improving data quality, the mere visible presence of a correction feature may enhance physicians’ trust in the report.
The degree of rigor in documentation to correct errors should be influenced by the amount of trust that exists (or must be built), the intended uses of the reports, and the costs of documentation. Corrections to reports that are used solely to support good patient care and quality improvement may require little or no supporting documentation.
Capacity to access patient-level data also enables physicians to identify specific patients whose care falls short of the target performance or who are overdue for specific services or followup. Access to patient-level data is a key feature that differentiates feedback reports aimed primarily at performance assessment from those that also serve as tools for improving care management.
For physicians to take the steps needed for improvement, they need to know where to direct their efforts. Reports that include patient-level data are more actionable than those that do not, since the physician can drill down to patient-level data to identify, for example, gaps in services. Reporting guidelines published by the American Medical Association emphasize the need for physicians to have access to patient-level data (AMA, 2012).
Some report developers present information (e.g., a list of specific patients due for care) in an appendix, companion report, or registry to provide the physician with guidance on specific ways to improve performance (e.g., contacting patients on the list who are due for a mammogram).
Followup action by physicians is further facilitated by enabling patient-level data to be downloaded to Excel spreadsheets.
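As a concrete illustration of how downloadable patient-level data can support followup, the sketch below (in Python, using the pandas library) filters a hypothetical patient registry for patients overdue for mammography and writes the resulting list to an Excel workbook. The field names, the overdue rule, and the file layout are illustrative assumptions rather than features of any specific reporting system, and any real implementation would need to handle protected health information in accordance with the organization’s HIPAA policies (see the discussion of Business Associate Agreements below).

```python
# Illustrative sketch only: column names and the ~27-month overdue rule are
# assumptions, not part of any specific report portal. Writing .xlsx files
# with pandas requires the openpyxl package.
from datetime import date

import pandas as pd

# Hypothetical patient-level extract as it might be downloaded from a report portal.
registry = pd.DataFrame(
    {
        "patient_id": ["A101", "A102", "A103"],
        "attributed_physician": ["Dr. Lee", "Dr. Lee", "Dr. Patel"],
        "last_mammogram": [date(2021, 3, 15), None, date(2023, 9, 2)],
    }
)

# Flag patients with no mammogram on record or one older than ~27 months.
today = pd.Timestamp.today()
last = pd.to_datetime(registry["last_mammogram"])
overdue = registry[last.isna() | (last < today - pd.DateOffset(months=27))]

# Write the followup list to an Excel workbook for the care team.
overdue.to_excel("mammography_followup_list.xlsx", index=False)
```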
If patient-level information is shared across business entities, a Business Associate Agreement (BAA) is required in order to comply with the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, which protects the privacy of individually identifiable health information. The BAA provides written safeguards that such protected information will be used only for authorized purposes, including quality assessment and improvement activities, and will not be disclosed in any way that would violate the Privacy Rule (U.S. DHHS, 2003).
*This section was excerpted and adapted from Shaller and Kanouse, 2012.
3-2. Prerelease testing: Giving physicians the opportunity to make feedback reports work for them
Designing and developing an effective physician feedback report requires working with members of the intended user audience (e.g., physicians, care teams) to test how understandable and usable it is before the report goes live (Shaller and Kanouse, 2012). A variety of techniques can be used to test reports before a report’s release, including cognitive testing, semistructured interviews, and focus groups. Each requires the development of a guide to ensure key topics are addressed, and each calls for a note taker tasked with chronicling user feedback.
Technique #1: Cognitive testing
Cognitive testing is conducted through one-on-one, in-person meetings during which the interviewer shadows and observes a representative of the report’s intended user audience, such as a physician, as he or she reviews the report. It is a particularly useful and efficient technique for gaining critical insights about the extent to which the prerelease version of the report is understandable and usable. See Text Box 4 for sample questions that cognitive testing can answer.
Text Box 4. Illustrative questions that can be answered with cognitive testing
In cognitive testing, physicians are asked to tell, in their own words, what they learned from different components and features of the report and how the report made them feel. Physicians are not being tested, but rather, the report is being tested. Cognitive testing can be used throughout the development process. Some developers find it useful to conduct two rounds of testing, either on the whole report or on particularly problematic sections, to ensure subsequent “fixes” made in response to initial testing adequately address deficiencies (Sofaer contribution to AHRQ working paper on public reporting, 2012).
Technique #2: Semistructured interviews
Semistructured interviews are conducted one-on-one or in very small groups of no more than three physicians (AHRQ, 2015). They are distinguished from cognitive interviews in that they do not rely on observation of the physician representative using the tool, in this case the feedback report.
Both cognitive and semistructured interviews are particularly useful for collecting information that is not influenced by the opinions of others in a group discussion. They also are useful for collecting information from an individual or small group of physicians that is not influenced by the presence of supervisors or managers.
Technique #3: Focus groups
A focus group is a small group discussion among representatives of the target audience, in this case physicians, that is led by a moderator. It allows for group members to respond to comments made by other group members, and can yield innovative ideas for redesigning the report to better meet the physicians’ needs. An important part of the moderator’s responsibility is to ensure that each person has an opportunity to speak (AHRQ, 2015).
Testing a report before release, whether via cognitive or semistructured interviews or focus groups, has the secondary benefit of jump-starting the phase of building physician awareness of the feedback report. A decision to forgo testing before a report goes live can be a costly mistake, as the example in Text Box 5 illustrates.
Text Box 5. Case example of Cincinnati Health Collaborative
Although the Cincinnati Health Collaborative successfully engaged physician leaders in many aspects of its measurement and reporting activities, the Collaborative did not conduct any initial testing or review of its provider feedback report with physicians. Not doing so prevented the Collaborative from discovering which report features were most valuable to physicians, such as the availability of patient-level data to identify patients in need of followup screening tests. The absence of testing also contributed to a lack of awareness and use of the Collaborative’s clinician feedback report. This situation allowed the perception to grow that the report, accessible through a secure data portal, added little value to the performance feedback information already available to physicians (Shaller and Kanouse, 2012).
3-3. Postrelease monitoring and evaluation
Like all improvement strategies, physician feedback reports can be enhanced through periodic evaluation. Prerelease testing seeks to determine, before the report goes live, whether the presentation of information in the draft effectively supports the report’s goals; evaluation, as discussed here, assesses the report’s actual impact on physician behavior. For the subset of physician feedback reports that also serve as care management tools, discussed in Section 1-1, evaluation can also include assessing their impact on improving care management.
While physicians’ responses to informal questions such as “Have you used the report?” and “How has it influenced what you do in the treatment rooms?” can be instructive, more systematic methods for monitoring and evaluating reports include:
- Online tracking tools for Web-based reports only (summarized in Text Box 6).
- User surveys.
- Analysis of performance data (Shaller and Kanouse, 2012).
Text Box 6. Complementary monitoring tools for Web-based reports
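For Web-based reports, the kind of tracking summarized in Text Box 6 can often be built from the report portal’s existing access logs. The snippet below is a minimal sketch, not drawn from any particular system: it assumes a hypothetical log export with one row per page view and columns named "physician_id", "page", and "timestamp", and tallies how many distinct physicians opened the report and which sections they viewed most often.

```python
# Minimal sketch, assuming a hypothetical access-log export from the report
# portal; real portals vary, and a production effort would use whatever
# logging or analytics tooling the portal already provides.
import pandas as pd

log = pd.read_csv("report_access_log.csv", parse_dates=["timestamp"])

# How many distinct physicians opened the report at least once?
distinct_users = log["physician_id"].nunique()

# Which sections of the report are viewed most often?
views_by_page = log.groupby("page").size().sort_values(ascending=False)

print(f"Distinct physicians who accessed the report: {distinct_users}")
print(views_by_page.head(10))
```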
Ultimately the success of physician feedback reporting depends on actions physicians take based on the feedback. The “holy grail” measure of success is not whether a report has been read and understood, but whether it has contributed to better care (Shaller and Kanouse, 2014). This is challenging because so many other factors simultaneously affect care. Unless the report is part of a controlled scientific study, it is difficult to isolate what, if any, specific impact can be attributed solely to a reporting effort, or more narrowly, to one or more specific features of a reporting effort.