Section 5: Determining Where To Focus Efforts To Improve Patient Experience
Contents
5.A. Analyze CAHPS Survey Results
5.B. Analyze Other Sources of Data for Related Information
5.C. Evaluate the Process of Care Delivery
5.D. Gather Input from Stakeholders
References
To identify opportunities to improve patient experience and determine where to direct your resources, you can start by reviewing your CAHPS survey results in combination with other forms of patient feedback, both quantitative and qualitative. You can then use a variety of qualitative methods to confirm and gather further insights into specific problems, identify possible solutions, and monitor progress. Because some qualitative methods are easier and less expensive to implement than surveys, they can be used more frequently to provide ongoing feedback valuable to clinicians, administrators, and staff.
This section covers four ways to figure out which aspects of patient experience could and should be improved:
- Analyze CAHPS survey results to understand your organization’s performance.
- Analyze other sources of data for related information.
- Evaluate the process of care delivery.
- Gather input from stakeholders.
Once you have identified the aspects of patient experience for which you want to develop improvement activities, you will have to decide where exactly to focus your resources. Considerations include how widespread the problem is, how different your score is from others (i.e., the size of the opportunity to improve), the nature of current improvement activities, and the importance of the issue based on other forms of patient feedback.
5.A. Analyze CAHPS Survey Results
Once you have results from a CAHPS survey in hand, you can start by seeing where your scores appear low relative to other composite measures in the survey. You can then conduct different kinds of analyses to identify your organization's relative strengths and weaknesses:
- Compare your CAHPS scores to benchmarks.
- Compare your current CAHPS scores to past performance.
- Assess which aspects of performance are most relevant to your members or patients.
Each kind of analysis provides a different perspective on performance. In some cases, you may be able to obtain sufficient information from using just one or two of these methods.
5.A.1. Compare Your CAHPS Scores to Benchmarks
One way to get the information you need to identify specific problem areas, formulate an improvement plan, and select appropriate strategies is to compare your performance to others. To do that, you need to identify benchmarks or comparative data that are appropriate and relevant for your organization. A benchmark could be a regional or national average, the average score for the same type of organization, or a "stretch goal," such as the score achieved by the top performers. Your benchmark choices should be guided by your business strategy and improvement goals.
Major sources of comparative benchmarks include:
- CAHPS Database (for both the Clinician & Group Survey and the Health Plan Survey for Medicaid, CHIP, and Medicare plans)
- National Committee for Quality Assurance's (NCQA) Quality Compass (Health Plan Survey)
- Centers for Medicare & Medicaid Services (Health Plan Survey for Medicare only)
The CAHPS Database is a voluntary initiative sponsored by the Agency for Healthcare Research and Quality (AHRQ) that enables survey users to compare their own results to relevant benchmarks such as overall and regional averages. In addition to a public online reporting system that presents summary-level de-identified comparative data, survey users that submit data to the CAHPS Database have access to a Private Feedback Report in Excel.
The CAHPS Database presents several views of comparison data, including percentiles, top box scores, and full frequency distributions. Using the online reporting system, a practice site submitting its CG-CAHPS survey results to the CAHPS Database can compare its scores to selected benchmarks for each composite and item.
Other sources include:
- Your survey vendor. Many vendors offer access to comparison norms for their clients.
- Community-level data. Depending on the nature of quality measurement activities in your State or region, you may have access to benchmarks specifically for local providers. For example, several multi-stakeholder collaborative organizations gather and report comparative CAHPS results at the clinic site or individual physician level. (Learn about regional health improvement collaboratives.)
When comparing your results to a benchmark, keep in mind that the benchmark provides only a relative comparison. Even though your results may be better than the average score, for example, you may believe there is room for improvement in a particular area in an absolute sense. In fact, there may be some aspects of patient experience measured by the CAHPS survey that even the highest scoring sites could improve on.
There are many ways to analyze your CAHPS results in comparison to benchmarks or other reference points. There is no "right" approach, and the selection of methods for data scoring and presentation will depend on both the benchmarks you choose to use and the level of detail needed by your audience. Following are several examples of different approaches for comparing CAHPS survey results to benchmarks. These examples draw on survey results from the Clinician & Group Survey but apply as well to the Health Plan Survey.
The CAHPS Analysis Program, often referred to as the CAHPS Macro, uses the survey results to calculate two types of scores. First, it calculates the percent of respondents in each of the response categories for a CAHPS composite or question. Those percentages are called proportional scores. The proportional score for the best possible response option (e.g., "always" or "yes, definitely") is referred to as a "top box" score.
The CAHPS Macro then calculates a mean for the CAHPS composite or question. To do that, the response scales are first converted to numerical values. For example, the 4-point response scale of "always," "usually," "sometimes," and "never" is translated into the values of 4, 3, 2, and 1, respectively. The mean value for each question is then calculated across all respondents' numerical values. The mean score for a composite is computed by averaging the mean scores of the items included in that composite measure.
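To make the two score types concrete, here is a minimal Python sketch (illustrative only, not the CAHPS Analysis Program itself) that computes the proportional scores, the top box score, and the mean for one hypothetical question on the 4-point scale:

```python
from collections import Counter

# Hypothetical responses to one question on the 4-point scale:
# 1 = "never", 2 = "sometimes", 3 = "usually", 4 = "always"
responses = [4, 4, 3, 2, 4, 3, 4, 1, 3, 4]

# Proportional scores: percent of respondents in each response category.
counts = Counter(responses)
proportional = {opt: 100 * counts[opt] / len(responses) for opt in (1, 2, 3, 4)}

# The "top box" score is the proportional score for the best option ("always").
top_box = proportional[4]

# The mean score is calculated across all respondents' numerical values.
mean_score = sum(responses) / len(responses)

print(proportional)   # {1: 10.0, 2: 10.0, 3: 30.0, 4: 50.0}
print(top_box)        # 50.0
print(mean_score)     # 3.2
```

A composite's mean score would then be the average of the mean scores for its component items.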
5.A.1.a. Comparing Mean Scores
The simplest place to start is to compare the organization's mean scores for the CG-CAHPS composite and rating measures with the average mean score for comparable entities (e.g., other physician practices, medical groups, or health plans), as illustrated in Figure 5-1. As can be seen in this example, a practice site's mean score for the Provider Communication composite measure (3.64) is significantly higher than the mean for the medical group (3.44), yet its mean score for the Provider Rating (8.21) is significantly lower than the mean for the group (8.74). The site is not significantly different from the group on the other two composites. The horizontal lines for each composite in the "Comparison to the Group Mean" column show the minimum site score and the maximum site score within that group.
Figure 5-1. Comparison of Mean Scores for a Practice Site and a Medical Group
For the purposes of comparing composite measures and rating items that have different response categories, Figure 5-2 shows the same data with the mean scores normalized to a 0-100 scale. (Learn about normalizing scores in the box below.)
Figure 5-2. Comparison of Practice Site Normalized Mean Scores to Group Normalized Mean Scores
Normalizing is a way to transform all scores to the same scale, typically 0 to 100. It is done to ease comparison across items and composites that use different response scales.
To transform the scores, one would first transform the response values at the respondent level to a 0-100 scale using the following formula:
Normalized Score = 100 * (Respondent's selected response value – Minimum response value on scale) / (Maximum response value – Minimum response value)
For example, the responses on a four-point scale would be normalized as follows:
Response Option | Normalized Response |
---|---|
1 | 0.00 |
2 | 33.33 |
3 | 66.67 |
4 | 100.00 |
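As a quick illustration, this short Python function applies the formula above and reproduces the four-point-scale table:

```python
def normalize(value, scale_min, scale_max):
    """Transform a response value to the 0-100 scale using the formula above."""
    return 100 * (value - scale_min) / (scale_max - scale_min)

# Reproduce the table for a four-point scale (minimum = 1, maximum = 4):
for option in (1, 2, 3, 4):
    print(option, round(normalize(option, 1, 4), 2))
# 1 0.0
# 2 33.33
# 3 66.67
# 4 100.0
```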
5.A.1.b. Comparing "Top Box" Scores to Benchmarks
Another option is to compare the percent of responses in the best possible category for a survey question or composite measure (i.e., the "top box" score) to one or more benchmarks. The CAHPS Database uses this method in one of the displays included in its online reporting system.
Table 5-1 illustrates a comparison of scores for a sample medical group on the CAHPS Database Submitter's Site for the Access composite measure ("Getting Timely Appointments, Care, and Information") and its individual items in the Clinician & Group Survey 2.0. The medical group scores (in the shaded column) are compared to the overall average of scores in the CAHPS Database and to selected percentile scores. (See the box below for an explanation of percentile scores.)
Table 5-1. Comparison of Sample Medical Group Top Box Scores to the Mean Top Box Score (CAHPS Database Overall) and Selected National Percentiles
Composite/Item | Selected Group/Site | CAHPS DB Overall | 90th Percentile | 75th Percentile | 50th Percentile | 25th Percentile |
---|---|---|---|---|---|---|
Getting Timely Appointments, Care, and Information | 58% | 59% | 73% | 66% | 59% | 52% |
Got appointment for urgent care as soon as needed | 64% | 64% | 81% | 74% | 66% | 58% |
Got appointment for check-up or routine care as soon as needed | 69% | 68% | 83% | 77% | 71% | 63% |
Got answer to phone question during regular office hours on same day | 53% | 59% | 78% | 69% | 60% | 52% |
Got answer to phone question after hours as soon as needed | 63% | 59% | 80% | 68% | 58% | 48% |
Wait time to be seen within 15 minutes of appointment time | 41% | 43% | 61% | 52% | 43% | 33% |
Source: CAHPS Database Submitter's Site for the CAHPS Clinician & Group Survey 2.0
Percentiles provide useful information about the distribution of scores across all of the organizations (e.g., practice sites or health plans) included in a benchmark. To calculate percentile scores, the scores for all participating organizations are ranked in order from low to high. The percentile (e.g., 90th percentile, 25th percentile) indicates the percentage of organizations that scored at or below a particular survey score. For example, the score shown for the 75th percentile is the score where 75 percent of the sites or plans scored the same or lower and 25 percent scored higher.
To compare your scores, look for the highest percentile where your score exceeds the percentile score. For example, in Table 5-1, the group’s top box score for the question, "Got answer to phone question after hours" is 63%. This score is higher than the 50th percentile score of 58%, which means that this group scored higher than 50 percent of the groups in the CAHPS Database.
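To illustrate the definition, the sketch below computes the percent of organizations scoring at or below a given score. The benchmark values here are hypothetical; a real comparison would use the published percentile scores from the CAHPS Database:

```python
# Hypothetical top box scores for the organizations in a benchmark database.
benchmark_scores = [48, 52, 55, 58, 60, 62, 66, 69, 74, 78]
my_score = 63

# Percent of organizations that scored at or below my_score.
pct_at_or_below = 100 * sum(s <= my_score for s in benchmark_scores) / len(benchmark_scores)
print(f"A score of {my_score}% is at or above {pct_at_or_below:.0f}% of organizations")
```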
By comparing your organization’s top box score for a composite measure and its items to the mean top box score (CAHPS DB Overall) and the percentile scores, you can determine where your organization can improve. For example, the sample comparison in Table 5-1 shows that the medical group's scores for the Access composite measure and its items are roughly in line with the mean score, with the exception of the item, "Got answer to phone question during regular office hours on same day." The medical group's top box score of 53% for this question is close to the national 25th percentile score of 52%, suggesting the need to investigate factors that may be influencing this lower score.
One way to identify what is driving a relatively low score for a large organization is to look at the scores for its components. By calculating scores for the entities within a large organization, such as a health plan, health system, or medical group, you can see how those entities compare to one another. For example, if the medical group in the example above submitted data to the CAHPS Database for several practice sites, the group and its practices could see a display of bar charts showing the full distribution of scores for each practice site. As illustrated in Figure 5-3, among the sample medical group's three practice sites, Practice Site A has the lowest top box score for the question related to getting an answer to a phone question during regular office hours on the same day. In addition, the down arrow indicates that the mean score for Practice Site A is significantly below the average for all practice sites included in the CAHPS Database at the 0.05 significance level. This type of comparison would allow the medical group to pinpoint improvement opportunities at particular practice sites.
Figure 5-3. Comparison of Practice Site Scores to Medical Group Scores
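The CAHPS Database applies its own statistical methodology to flag sites like the one in Figure 5-3. Purely as an illustration of the underlying idea, the sketch below uses a one-sample t-test with hypothetical respondent-level scores to check whether a site's mean differs from a benchmark mean at the 0.05 level:

```python
from scipy import stats

# Hypothetical respondent-level scores for Practice Site A (1-4 scale).
site_a_scores = [2, 3, 2, 4, 3, 2, 3, 2, 2, 3, 4, 2, 3, 2, 2]
benchmark_mean = 3.1  # assumed average across all practice sites

# One-sample t-test of the site mean against the benchmark mean.
t_stat, p_value = stats.ttest_1samp(site_a_scores, benchmark_mean)
site_mean = sum(site_a_scores) / len(site_a_scores)
if p_value < 0.05 and site_mean < benchmark_mean:
    print("Site A is significantly below the all-site average (p < 0.05)")
```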
For more information on using the CAHPS Database to compare CAHPS results for both health plans and medical groups, explore the CAHPS Database Online Reporting System.
For more information on the pros and cons of different scoring and comparison methods for CG-CAHPS Survey results, read:
- Aggregating and Analyzing CAHPS Clinician & Group Survey Results: A Decision Guide
- Developing a Public Report for the CAHPS Clinician & Group Survey: A Decision Guide
5.A.2. Compare Your Current CAHPS Scores to Past Performance
If you have collected CAHPS survey results more than once, another useful way to identify opportunities for improvement is to look at past performance. Comparing your current scores to previous scores can be valuable for:
- Detecting areas where your performance is improving, declining, or holding steady.
- Increasing your confidence that the scores reveal a true picture of performance and are not just a snapshot of performance at a single point in time.
Figures 5-4 and 5-5 present two sample displays for examining CAHPS data over time. In Figure 5-4, bar graphs show trends in "top box" scores from 2010 to 2014 for the four Health Plan Survey composite measures and two rating items.
Figure 5-4. Bar Graph Example for Trends in Top Box Scores for the Health Plan Survey, 2010-2014
Figure 5-5 shows the same data using line charts to plot the trends over time. For the line charts, the y-axis was altered to start at 50% rather than 0% and run to 100%. Because most of the scores cluster within 30 percentage points of one another, this truncated axis makes it easier to see the differences in scores across the measures.
Figure 5-5. Line Graph Example for Trends in Top Box Scores for the Health Plan Survey, 2010-2014
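A truncated axis like the one in Figure 5-5 takes one line in most plotting libraries. Here is a minimal matplotlib sketch with made-up scores:

```python
import matplotlib.pyplot as plt

years = [2010, 2011, 2012, 2013, 2014]
trends = {  # hypothetical top box scores
    "Getting needed care": [62, 64, 63, 66, 68],
    "How well doctors communicate": [75, 76, 78, 77, 79],
}

fig, ax = plt.subplots()
for measure, scores in trends.items():
    ax.plot(years, scores, marker="o", label=measure)
ax.set_ylim(50, 100)  # start the y-axis at 50% rather than 0%, as in Figure 5-5
ax.set_ylabel("Top box score (%)")
ax.legend()
plt.show()
```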
5.A.3. Assess Which Aspects of Patient Experience Are Most Important to Your Members or Patients
Another way to determine which specific issues to focus on for improvement is to identify the factors that are most important to members or patients. This analysis of the "importance" of topics in the CAHPS survey, sometimes referred to as a "key driver" analysis, requires an assessment of how strongly the score for a particular question or composite measure is associated with patients' or enrollees' overall rating of their health plan or medical practice. This type of analysis can be conducted with data from multiple groups, sites, or plans.
The statistic commonly used to assess such associations is the correlation coefficient, which can range from –1.0 to +1.0 (see the box below for guidance on interpreting this statistic). There are several methods for calculating correlations; the Spearman correlation is the method recommended for CAHPS scores, but other methods may also be useful. (A short computational sketch follows the box below.)
- If the correlation coefficient is between 0 and 1.0, the overall rating (e.g., how would you rate your care?) has a positive relationship with the score for a question (e.g., how often did your personal doctor explain things in a way that was easy to understand?) or composite measure (e.g., Doctor Communication). This means that the rating increases as the score increases. The higher the value of the coefficient, the stronger the relationship.
- If the correlation coefficient is 1.0, the rating and the question or composite measure are perfectly related, i.e., measuring the same concept.
- If the correlation coefficient is 0, the rating and the question or composite measure are independent, i.e., not related.
- If the correlation coefficient is between 0 and –1.0, the rating is inversely related to the question or composite measure, which means that the rating decreases as the score increases. This is unusual in a CAHPS survey unless the response options are reversed, i.e., when "never" is the most desirable response.
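If you have entity-level scores in hand (one score per plan, site, or group), computing a Spearman correlation is straightforward. The sketch below uses hypothetical plan-level top box scores:

```python
from scipy.stats import spearmanr

# Hypothetical plan-level top box scores (one value per health plan).
doctor_communication = [68, 72, 75, 61, 80, 70, 77, 65]
doctor_rating        = [71, 74, 78, 60, 83, 69, 80, 66]

rho, p_value = spearmanr(doctor_communication, doctor_rating)
print(f"Spearman correlation: {rho:.2f} (p = {p_value:.3f})")
```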
The following examples illustrate the results of a key driver analysis for the Health Plan Survey and the Clinician & Group Survey. These correlations do not necessarily apply to your implementation of a CAHPS survey; it is important to analyze your own data because correlations can differ from sample to sample.
5.A.3.a. Correlation Coefficients for the CAHPS Health Plan Survey
Table 5-2 below presents Spearman correlations between the Health Plan Survey composite measures and the overall ratings of doctor, care, plan, and specialist. As has been found in previous analyses, the strongest relationship was between the Doctor Communication composite and the Doctor Rating.
Table 5-2. Correlations between top box scores for composite measures and overall ratings in the Health Plan Survey
Composite measure | Doctor rating | Care rating | Plan rating | Specialist rating |
---|---|---|---|---|
Getting needed care | 0.53 | 0.68 | 0.57 | 0.43 |
Getting care quickly | 0.48 | 0.61 | 0.48 | 0.31 |
How well doctors communicate | 0.69 | 0.67 | 0.44 | 0.39 |
Customer service | 0.28 | 0.49 | 0.49 | 0.20 |
Note: All correlations are statistically significant (p < .001). Data for these analyses came from 122 health plans that administered the Health Plan Adult Medicaid Survey.
5.A.3.b. Correlation Coefficients for the CAHPS Clinician & Group Survey
Table 5-3 presents Spearman correlations between the composite measures from the Clinician & Group Survey 2.0 with supplemental Patient-Centered Medical Home (PCMH) items and the overall rating of the provider. Consistent with the example of the Health Plan Survey above, the data indicate a very strong association between the Provider Communication composite and the Provider Rating and strong but slightly smaller relationships between Access to Care and Office Staff scores and the Provider Rating. The correlations for the three PCMH supplemental composites are much lower than those for the core composites.
Table 5-3. Correlations between top box composite scores and the provider rating in the Clinician & Group Survey
Composite measure | Provider rating |
---|---|
Getting timely appointments, care, and information | 0.61 |
How well doctors communicate with patients | 0.87 |
Office Staff: Helpful, courteous, and respectful office staff | 0.66 |
Talking with you about taking care of your own health (PCMH) | 0.38 |
Attention to your mental or emotional health (PCMH) | 0.17 |
Talking about medication decisions (PCMH) | 0.52 |
Note: All correlations are statistically significant (p < .01). Data for analyses came from 714 practice sites that administered the Clinician & Group PCMH Survey 2.0.
5.A.3.c. Creating a Priority Matrix
One very useful way to home in on areas for improvement is to plot a "priority matrix" that graphically displays relative performance on the composite measures along with the relative "importance" of each composite measure as it relates to an overall rating of care.
In an example based on the CG-CAHPS Survey with PCMH supplemental items (shown in Figure 5-6), the priority matrix plots the following two variables:
Relative Performance on the Y-Axis. On the Y-axis, the chart displays where the practice site's scores stand in relation to all other practices included in the survey. That is, scores below the "50" line denote measures for which the practice's performance is below the 50th percentile, and those above the 50 line denote measures for which the practice's performance is above the 50th percentile.
Relative Importance on the X-Axis. On the X-axis, the chart shows the relationship between each survey measure and patients’ overall rating of the provider, as measured by the correlation coefficient discussed above. The further to the right a measure is on the chart, the more strongly it is associated with the provider rating. The vertical line at 0.6 illustrates one way to differentiate higher and lower correlations, as correlations at or above 0.6 signify a strong association.
Combining these two pieces of information into a matrix, as shown in Figure 5-6, can help you identify priority areas for improvement in the practice. For example, measures in the bottom right quadrant should probably be the highest priorities for improvement because they are both important to patients (as revealed by high correlations with patients' rating of the provider) and areas in which the practice performed below the 50th percentile. The other quadrants convey similar information about how the practice performed on each aspect of care and the relative importance of this area to patients. Note that Figure 5-6 is an illustrative example; where you choose to place the lines to form the quadrants should be based on your own goals and priorities.
These kinds of analyses and graphical representations of relationships are not difficult to do, but they do require time and access to analytical support. Many survey vendors are capable of providing these services as part of the CAHPS data collection and reporting process.
Figure 5-6. Priority Matrix for a Sample Practice Site
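As a sketch of how such a matrix can be drawn, the matplotlib example below plots hypothetical (correlation, percentile) pairs for a few measures, with reference lines at a correlation of 0.6 and the 50th percentile. The measure names and values are made up:

```python
import matplotlib.pyplot as plt

# Hypothetical inputs: correlation with the provider rating (importance)
# and the site's percentile rank (relative performance).
measures = {
    "Timely appointments":    (0.61, 38),
    "Provider communication": (0.87, 72),
    "Office staff":           (0.66, 45),
    "Self-care support":      (0.38, 55),
}

fig, ax = plt.subplots()
for name, (importance, percentile) in measures.items():
    ax.scatter(importance, percentile)
    ax.annotate(name, (importance, percentile))

ax.axhline(50, color="gray")   # performance threshold: 50th percentile
ax.axvline(0.6, color="gray")  # importance threshold: correlation of 0.6
ax.set_xlabel("Correlation with provider rating (importance)")
ax.set_ylabel("Percentile rank (relative performance)")
plt.show()
```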
5.B. Analyze Other Sources of Data for Related Information
Once you have compared your CAHPS scores to your previous scores and/or relevant benchmarks (e.g., national, regional, or other comparison group of interest), you may want to review related information to confirm your findings and identify steps you could take to improve patient experience. Sources of information that could be helpful for this purpose include complaints and compliments, patients' comments, and administrative data.
Health plans and providers typically have access to, or can easily gather, various types of administrative data that can be "mined" to determine which performance issues may be affecting your CAHPS scores. Examples of sources of administrative data include:
- Telephone logs
- Employee work hours
- Visit appointment records
The types of data you choose to use for further analysis will depend on the issues you identified when examining your CAHPS results. For example, if you are interested in improving patients' experiences in getting appointments when needed, you could take the steps below (a brief sketch of one such analysis follows the list):
- Examine visit appointment records to assess missed appointments.
- Analyze telephone logs to assess how many dropped calls or failed appointment queries occurred.
- Analyze visit appointment records to determine the amount of time between scheduling an appointment and the actual appointment date.
- Search your complaint records and tabulate the number of complaints received about appointment problems.
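For instance, a few lines of pandas can compute the lag between scheduling and the appointment date, as well as the missed-appointment rate. The column names and records here are assumptions about how such data might be laid out:

```python
import pandas as pd

# Hypothetical appointment records; column names are assumptions.
records = pd.DataFrame({
    "scheduled_on":   pd.to_datetime(["2014-03-01", "2014-03-02", "2014-03-03"]),
    "appointment_on": pd.to_datetime(["2014-03-05", "2014-03-20", "2014-03-10"]),
    "status":         ["completed", "no-show", "completed"],
})

# Days between scheduling an appointment and the actual appointment date.
records["wait_days"] = (records["appointment_on"] - records["scheduled_on"]).dt.days
print(records["wait_days"].describe())

# Missed-appointment rate.
print((records["status"] == "no-show").mean())
```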