As noted in Section II, we relied on stakeholder input, primarily from TEP members, to identify important areas of measurement for assessing HCBS program quality. Sections III through V reflect analysis of the candidate measures that align with the consensus constructs that emerged through this process and are listed in Table 1. However, this list should not be construed as definitive or complete. There are several other valid areas of assessment that have not been addressed above but are potentially worthy of consideration as AHRQ moves forward in fulfilling the DRA mandate.
One additional area of assessment mentioned by TEP members was the individual rights exercised by HCBS program participants. These can include rights to information, rights to complain about services, and the right to determine one's own daily schedule. Some of this construct is captured in the variables examining participants' ability to make decisions about providers and services, but explicit "rights" measures were not systematically evaluated. As noted earlier, there are also program performance measure constructs, particularly at the systems level, that were not included in that domain. One potentially important process variable is the availability of qualified and/or quality service providers and programs' adherence to provider credentialing standards.
Early in this project, the TEP advised the project team to move away from broader measures of States' long-term care systems, including measures of system rebalancing (who is served where) and of access to the system, such as measures of waiting lists. Currently, CMS is developing measures of system rebalancing through the National Balancing Indicators Project. In the future, examining correlations between success in rebalancing and HCBS program quality may be an important area of inquiry, although one outside the scope of the DRA mandate.
Many measures have been developed for non-HCBS settings, primarily nursing homes and hospitals, that may be adaptable to HCBS services and settings. However, the threshold criteria used in the final evaluation process dropped measures developed solely for institutional settings. These included measures calculated from the Minimum Data Set (MDS)xxi for nursing homes or the Healthcare Cost and Utilization Project (HCUP) data for hospitals.xxii They also included many of the "never events," which are patient safety events that pose serious harm to patients but can be considered entirely preventable, as defined by the National Quality Forum.xxiii Although an HCBS population is, by definition, not institutionalized, some of these measures may well be relevant and merit reconsideration.
In addition, we recognize the existence of several proprietary measures, such as those developed by Press Ganey to assess home healthxxiv and the Supports Intensity Scale to assist in care planning for individuals with intellectual disabilities,xxv which could be relevant to this effort. However, we only included proprietary measures if they were formally submitted to AHRQ in response to the Federal Register Call for Measures by the closing date of July 5, 2007. In addition, several tools and measure sets may be used in the future or adapted for HCBS quality assessment. These include the Home Health version of CAHPS® (Consumer Assessment of Healthcare Providers and Systems), the CAHPS® tool for people with mobility impairments,xxvi the CMS CARE (Continuity Assessment Record and Evaluation) assessment and tracking tool, and the Assessment of Health Plans and Providers by People With Activity Limitations, which is in development. At the State level, Michigan's POSM (Participant Outcomes and Status Measures) tool is being developed in conjunction with the University of Michigan and will be subjected to psychometric testing. All these potential measure sources are summarized in the compendium included in Appendix III.
Finally, there are measures that do not directly assess HCBS program quality but are necessary for risk adjusting those that do. Some are important for adjustment to individual measures, such as diagnosis or residential setting. Others describe systems, such as underlying severity of disability among program participants. At a minimum, the following data are necessary to conduct the State-to-State comparisons mandated by the DRA:
- Assessment or other data on the severity of disability among program participants.
- State-level data on numbers of disabled persons served in various programs and residential settings.
- Demographics of the disabled population, such as:
  - Age structure.
  - Sex.
  - Race.
  - Condition-specific information (e.g., dementia).
This project did not examine the availability of these data; it merely acknowledges their critical role in placing quality measures in context. Some quality measurement efforts, such as the National Core Indicators Project, track and use these data for risk adjustment and contextual analysis. The validity and acceptability of the State-to-State comparisons mandated by the DRA will rest in part on appropriate use of these contextual variables.