Based on early discussion with the TEP, a decision was made to replace the domain "client satisfaction" specified in the language of the DRA with the broader concept of "client experience." Unlike satisfaction, experience comprises both objective ("Do you know the name of your case manager?") and subjective ("Is your case manager helpful?") assessments of program services and supports. Furthermore, client experience data capture a broader array of outcomes and are not influenced solely by an individual's personal standards.
The subjectivity of satisfaction measures makes them challenging for individuals with cognitive disabilities and may preclude data collection from proxy sources. While client experience measures are, by definition, derived from client-reported data, arguments can be made for using knowledgeable proxy respondents when clients are unable to respond for themselves. At the same time, proxy respondents can introduce potential biases that must be recognized.14,15 In our review of potential experience measures, we considered measures derived from direct client report as well as those based on proxy response.
Innumerable dimensions of client experience could be measured across the wide range of supports and services provided by Medicaid HCBS programs. Indeed, many State Medicaid programs try to capture at least some client feedback, whether through population-based surveys or selected case reviews. In our discussions with experts and stakeholders, eight constructs of client experience emerged as clearly important; they are listed in Table 1. Many other potentially valid and meaningful experience constructs were recommended and considered, including privacy, control over daily activities, and ability to exercise individual rights. The face validity of this particular set of constructs is borne out by the fact that many of the home-grown, State-specific surveys we reviewed query the same dimensions of experience.
We note three themes underlying the eight constructs composing the client experience domain of the measure scan, which reflect important aspects of HCBS quality from the perspective of project stakeholders. The first is client choice, captured in three global dimensions of program supports: providers, services, and housing. The ability to make choices is valued not only by project stakeholders, but also increasingly by the Medicaid program, which is placing greater emphasis on self-direction of program supports and person-centered planning processes. In many States, self-directed services have become an important part of the HCBS delivery system.
The second cross-cutting theme is satisfaction, represented by the constructs for satisfaction with residential setting and with case management services; global satisfaction is captured by the construct for perception of the quality of care. Client satisfaction is a time-honored metric of program quality that allows participants to judge quality according to their own values and criteria. This notion of individually determined satisfaction was considered important, even given the known biases in traditional satisfaction measures.16
Finally, we noted a theme of interpersonal respect and support. This can be assessed positively, as in the constructs of respectful treatment by direct service staff and the availability of staff/program support for resilience and recovery among those with serious mental illness.xvii The converse of positive and supportive interpersonal relationships is reflected in the remaining construct, client reports of abuse and neglect. Overall, these eight constructs can be seen as representing a continuum from harmful and unacceptable experience (e.g., neglect and abuse), through respect and individual choice, to individual satisfaction.
Gap Analysis
Table A.V.2b in Appendix V lists the appreciable number of candidate measures, identified for each of the eight experience constructs, that met the threshold criteria for feasibility and scientific soundness. Of the three DRA domains we reviewed, client experience had the best coverage in terms of tested and widely used measures. Much work has gone into the development of consumer survey tools designed to solicit feedback on these constructs, such as the National Core Indicators Consumer Survey, the Participant Experience Surveys, and the surveys supporting the Mental Health Statistics Improvement Program.
The challenge going forward lies in part in comparing these instruments. While many measures assess experience with respectful treatment, for example, there are subtle but meaningful differences in how the construct of respect is operationalized. Some measures look at use of polite language, others at careful listening, and still others at cultural sensitivity. Deciding which aspects to prioritize for measurement will be a future task. Furthermore, the target populations for which these measures have been developed and tested also vary.
An additional challenge is assessing, through psychometric testing, whether existing experience measures are applicable and reliable for HCBS populations other than those for which they were designed, including populations who direct their own services. Given the evolving nature of AHRQ's long-term strategy for fulfilling the DRA mandate, this issue may or may not be relevant. A decision to adopt a true cross-disability approach would argue for additional testing of population-specific measures. Alternatively, AHRQ may decide to pursue a limited set of core quality measures, supplemented by modules designed for specific populations.
An additional gap is the lack of standardization in the definitions of abuse, neglect, and exploitation in the client experience measures we identified, both in Table A.V.2b and in the compendium overall.xviii,17 Client reports of abuse and neglect will be influenced by perception, recall, and possible social desirability response set bias. Lack of a well-articulated, consistent definition of what constitutes abuse will complicate any comparisons across programs and States.
Finally, as noted above, client experience measures are generally built on data collected directly from program participants. Many of the instruments we evaluated fell into this category. Others were designed specifically for populations not expected to report for themselves, such as children with severe disabilities or adults with severe dementia. Still others offer the option of participant or proxy reporting. Decisions about who reports on experience, and the development and testing of tools for populations facing challenges to valid self-report, remain to be made.
A related issue is the mode of data collection. With the possible exception of employment status and school attendance, recent feedback from our TEP members underscores that measures for these constructs must come from consumer surveys, possibly in tandem with administrative data sources. However, there are a variety of options for collecting consumer data, including structured in-person interviews, telephone interviews, online surveys, and mailed questionnaires. Most of the measures in Table A.V.2b were tested with a single mode of administration. Aligning the reporting constraints of disparate populations with collection costs and the available evidence from mode testing will be an additional task in the next phase of measure development.