Conducting Cognitive Testing for Health Care Quality Reports
One of the most useful techniques available to report sponsors who want to test materials with their audience is cognitive testing. Cognitive testing is an interview method for investigating the thought processes and reactions that people have as they read information, explore options, and make decisions. The method was originally developed as a tool for selecting and refining survey questions and response options. It was then adapted to test health-related report materials in general and quality reports in particular.
Cognitive testing of quality reports involves conducting one-on-one interviews with a small sample of people who are representative of your report’s intended audience. These one-on-one interviews, which typically last about 1 to 1½ hours, give people the opportunity to discuss specific elements of your report, including tables, figures, and the language used to explain technical concepts.
Why Is Cognitive Testing of a Quality Report Useful?
Cognitive testing is essential to producing an effective report because it provides important insights into how people interpret the language and graphics in your report and how they connect what they see in the report to their own experiences. Used properly, this method can help determine the extent to which the information in your report is:
- Readable by your intended audience.
- Perceived to be relevant and salient.
- Understood in a way that is consistent with what you intended to communicate.
- Presented in a way that enables people to easily find what they are looking for.
When done skillfully, cognitive testing can help identify problems that even seasoned report developers cannot anticipate. While you may know a lot about the issues that your report deals with, you cannot rely solely on your own judgment, that of your colleagues, or even extensive experience to determine whether your report contains clear and useful information. Details that seem trivial to you may be important to your audience, or things that you consider important may seem inconsequential or even distracting to your audience.
For example, report sponsors often think they need to provide detail about the methods used to generate the performance scores; however, lay audiences often find information about statistical methods confusing and off-putting. Testing what you plan to present about methods can help strike the right balance for the audience you are trying to reach. If you do not solicit direct feedback from your intended audience, you are simply guessing about what will work.
Cognitive testing enables a report developer to:
- Learn whether readers can understand the information that is presented and appropriately interpret its implications. The questions should be designed to reveal differences between what you, the sponsor, think the materials mean and how different people from the intended audience actually interpret them.
- Get an early indication of whether the intended audience will be able to use the information to make a decision.
- Identify what may be missing from your report. As they review the report, participants can indicate whether they would like additional information to clarify the quality measures and scores.
- Explore people’s reactions to different approaches you are considering. For example, you could provide people with two versions of a chart to see which one is easier to understand, which they prefer, and why.
- Try out possible solutions to problems. For example, if you are considering different formats for a data display, you can test several versions to see which is clearest to the intended audience. You can use respondents' insights and reactions to refine your messages and materials so that they communicate more clearly and effectively.
How Is Cognitive Testing Conducted for a Quality Report?
In cognitive testing, the interviewer shows respondents a draft of a quality report—in part or whole—that contains either real (with actual names masked) or fictional data that is made to look realistic. As the respondents look through the report, the interviewer prompts them to share their thoughts and impressions.
As with any acquired skill, experience matters, so it is worthwhile to consider hiring someone who is well-versed in doing these types of interviews. Whether you are conducting the cognitive interviews yourself or hiring someone else to do them, here are some important points to help ensure the success of the testing.
Type of Participants
Cognitive testing should be conducted on a sample of individuals who resemble the target audience of your report. For example, if your report is targeted at people who need to select a new primary care provider (PCP), then the participants for your cognitive interviews should be people who have recently selected a new PCP or who may need to do so in the near future. On the other hand, if your report is targeted at a broader population, your cognitive testing participants should represent that breadth.
Number of Participants
You can learn a lot about your report by conducting cognitive interviews with relatively few people. Often one round of testing with as few as 5 to 10 individuals is sufficient to provide useful feedback. The number of interviews needed in any particular round often reveals itself: Once you start hearing the same feedback from participants (i.e., reach a saturation point), it is likely time to stop, regroup, and make changes.
One or more additional rounds of testing with another small group of individuals can also be helpful to explore remaining issues and help you test the changes you made in response to the first round of testing.
Setting the Tone
When starting a cognitive testing session, it is important to put respondents at ease so they feel comfortable sharing their honest opinions. The interviewer should remind participants that there are no correct or incorrect answers to the interview questions and that the purpose of the testing is to learn their opinions and solicit their advice. As an incentive, and to acknowledge the value of their contribution, participants are typically offered a payment of some sort (e.g., cash or gift card).
Collecting the Data
Interviews are usually conducted one-on-one, in person, and in a professional (office) setting where privacy can be maintained. As noted above, they typically last about 1 to 1½ hours.
A notetaker should be present so that the interviewer can focus exclusively on conducting the interview. Besides recording responses, it is useful if the notetaker observes and records respondents' nonverbal behaviors, which may offer clues about how they perceive the materials being tested. To ensure that important data are not missed, it is also a good idea to record the interview, but only with the participant's approval.
To collect feedback, interviewers typically use three techniques: observation, listening as respondents "think out loud," and asking questions. These techniques can be used to test a single report format or to evaluate several alternative formats within the same round of testing.
Observation: The interviewer watches the respondents to see what they look at and how they navigate the document.
- Where do they begin reading?
- Where do they go next?
- How long do they spend on each topic?
Experienced interviewers recommend giving respondents a colored highlighter to mark any confusing words, sentences, or sections. After respondents have gone through all of the materials, the interviewer can probe further into what they highlighted.
Listening as Respondents "Think Out Loud": The interviewer asks respondents to share every thought and opinion as they go through the document, a process known as "thinking out loud." Respondents are asked to:
- React to specific sections or pieces of the report (such as charts).
- Say when they find something confusing.
- Explain what they've seen in their own words.
- Point out unfamiliar terms or concepts.
If respondents seem to be having trouble thinking out loud, seem confused, or pause, interviewers sometimes prompt them with "Tell me what you are thinking" or "What are you thinking right now?" While using this technique, it is often useful to provide nonverbal reinforcement to let interviewees know that you are listening, such as nodding your head or saying "mmmm hmmm," "okay," or "I see."
Asking Questions: The interviewer asks direct questions to learn how respondents interpret what they're looking at. For example, the interviewer might ask a question such as "Given the data in this graph (or in the report overall), which provider would you say performed best? Which would you choose for yourself and why?"
It is important for questions to be open-ended and not leading. It is easy to unintentionally ask a question that may constrain and bias answers. For example, the question "What do you think of this legend?" presumes the participant noticed it and tells them what it is (or at least what it's called), making it less likely that they'll tell you they didn't see it or don’t know what it is.
Instead, the interviewer could ask "Is there anything on this page (or in this report) that helped you understand the information?" If the respondent mentions the legend, then it was noticeable. If not, it might be necessary to make the legend more prominent. Once the interviewer determines whether the respondent noticed the legend, open-ended questions can probe other topics, such as the legend's format and usefulness.
Checking on Comprehension
While it’s tempting to gauge comprehension by asking direct questions about how hard or easy it was to understand the report, this approach is not helpful. People often think they understand a display or a sentence when they’ve actually interpreted it incorrectly.
The best way to check on comprehension is to ask people to interpret the information in their own words. That way, you'll be able to judge for yourself whether they've gotten it right, and if they haven't, you’ll have a good idea of how the misunderstanding came about.
In most situations, you should not tell participants that they gave an incorrect answer. Doing so can change the dynamic: the participant may begin to feel that the interview is a test of their performance and focus on what the interviewer thinks rather than reacting naturally to the report.
Summarizing the Results
It is useful to organize the summary of a cognitive interview by listing the various elements of the report being tested, from fairly macro-level features (such as "page 2") to specific elements (such as an icon or a data label).
- For each key element of the report, document what the respondent did with it (e.g., noticed it or not, decided not to read it, or read only part way through it) and what the respondent said about it, including how they interpreted it and whether they found it helpful.
- Document the respondent's comments about the relationships among report elements, such as how the respondent thinks a newly encountered data display is related to one seen earlier.
Knowing When To Stop Testing
Cognitive testing is an iterative process. Report designers often conduct additional rounds of testing to follow up on problems identified in earlier rounds and to evaluate solutions to those problems.
Multiple rounds of testing can be helpful because changes to fix one type of problem sometimes inadvertently cause new problems. Of course, the cycle of testing cannot go on indefinitely. Report designers must use their best judgment and be cognizant of their budget constraints in determining when it is time to stop testing and finalize the report.
Conducting Usability Testing of a Health Care Quality Report
Usability testing is another type of one-on-one testing that is useful during the report development process. The process is similar to cognitive testing, but the purpose is different: the focus is on learning whether your audience can easily and effectively use the information in your report to make a decision. The entire interview is devoted to observing and assessing how participants perform the task for which the report is intended. It is often necessary to conduct usability testing in addition to cognitive testing because a report that is easy to understand is not necessarily easy to use.
Usability testing goes beyond asking audience members how they would use the information or asking them questions to make sure they understand the information. It is designed to determine how well people can synthesize information across the entire report and apply that information to their own situation. For example, to test the usability of a report on doctor quality, you could provide the report to participants and ask them to imagine that they are choosing a doctor for themselves. With this task in mind, participants are likely to be in a better position to evaluate the usefulness of the report and to say what they found lacking.
Another important aspect of usability testing is assessing the navigability of your report. You can often investigate navigational issues simply by asking people where they would find a particular piece of information and observing how much difficulty they have in doing so.
Recommended Reading on Cognitive Testing
Hoy EW, Kenney E, Talavera AC. Engaging consumers in designing a guide to Medi-Cal Managed Care quality. Oakland, CA: California HealthCare Foundation; 2004. This resource includes an interview guide in Appendix C.
Sofaer S. Qualitative research methods: what are they and why use them? Special Supplement on Qualitative Methods in Health Services Research: Part II. Health Serv Res 1999 Dec;34(5):1101-18.
Sofaer S. Qualitative research methods. Int J Qual Health Care 2002;14(4):329-36.
Willis G. Cognitive interviewing: a tool for improving questionnaire design. Thousand Oaks, CA: Sage Publications; 2005.