As AHRQ moves into the next phase of fulfilling its DRA mandate (measure development in the three specified domains), an existing body of science supports this effort, albeit with key gaps. Related methodological and other issues will also shape measure development and implementation. Together, these considerations provide a potential roadmap for AHRQ's future work.
The available science includes a number of well-tested and widely used surveys assessing consumer experience with HCBS, including unmet need for support with everyday activities, social role functioning, and satisfaction with program supports. Portions of these tools, or in some cases entire instruments, could be used with minimal investment in additional testing and development.
The gaps in the existing measure set fall into three categories. First are constructs for which few tested, relevant measures are available. These fall primarily within the program performance DRA domain and relate to access to case management, care coordination services, and receipt of all services specified in the care plan.
Second are gaps in testing the applicability of existing measures to a broader array of recipients and settings. These include consumer survey measures developed for a specific population, such as individuals with intellectual disabilities; a specific setting, such as a nursing home or assisted living facility; or a specific program type, such as self-direction. If AHRQ chooses to develop a modular system that includes both common measures and supplemental measures for specific populations, settings, and service delivery models, such testing becomes less pertinent. In contrast, a cross-disability approach argues for more testing and refinement of specialized measure sets. A related gap is the relative lack of testing of several State-specific tools whose items closely align with important measure constructs.
Finally, several concepts specific to HCBS programs require further specification and exploration. These include definitions of client-reported abuse, serious reportable events in HCBS settings, and recommended preventive health services for these populations, as well as appropriate metrics for avoidable hospitalizations.
These gaps pertain only to the 21 constructs listed in Table 1, which were identified through the process we used to seek TEP input. As noted earlier, there are other measurement constructs arguably related to HCBS quality that AHRQ may wish to consider further. In addition, other models of defining the three DRA domains could be used, notably the existing CMS statutory requirements for documenting quality in Medicaid 1915(c) programs, which could become a rubric for the program performance domain. Some Federal staff also encouraged alignment with existing national consensus bodies and the use of existing voluntary consensus standards, such as those approved by the National Quality Forum.
Beyond the gaps noted above, AHRQ will need to address several important methodological issues, including data collection methods, respondents (participant or proxy), data sources (consumer surveys vs. administrative sources, including claims data), and the contextual data needed for the risk adjustment that is key to comparing States. Specifically, the agency could address this latter issue by developing a comprehensive set of individual demographic and disability measures related to individual functioning, along with environmental service variables. AHRQ may also want to revisit measures that were still under development at the time of the scan and explore proprietary measures not formally submitted in response to the Call for Measures. Finally, the evolving nature of Medicaid HCBS programs, including the growing role of self-directed services, argues for flexible measures that anticipate future delivery models and client expectations.
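For illustration only, the sketch below shows one common form such risk adjustment could take: fitting a logistic model of an avoidable-hospitalization indicator on individual demographic, functional, and setting covariates, then computing an indirectly standardized (observed/expected) rate for each State. The data file, column names, and covariate set are hypothetical assumptions made for this sketch, not part of any AHRQ or DRA specification.

```python
# Illustrative sketch only: the file name, columns, and model form are
# assumptions, not an AHRQ-specified method.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical person-level file combining survey and administrative data.
df = pd.read_csv("hcbs_person_level.csv")

# Model the probability of an avoidable hospitalization, adjusting for
# individual demographics, functional status, and setting so that
# State-level comparisons are not driven by differences in case mix.
model = smf.logit(
    "avoidable_hosp ~ age + C(sex) + adl_limitations "
    "+ C(disability_group) + C(residential_setting)",
    data=df,
).fit()

# Indirectly standardized rate per State: observed rate divided by the
# model-expected rate, scaled by the overall rate.
df["expected"] = model.predict(df)
overall_rate = df["avoidable_hosp"].mean()
adjusted = df.groupby("state").apply(
    lambda g: g["avoidable_hosp"].mean() / g["expected"].mean() * overall_rate
)
print(adjusted.sort_values())
```

In practice, the covariates would be drawn from the comprehensive set of individual demographic, disability, and environmental service variables described above, and the outcome and adjusters would depend on the specific measure being compared.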