Program performance is a broad quality dimension that theoretically encompasses all aspects of Medicaid HCBS quality. If client functioning and client experience are the outcomes of a Medicaid HCBS program, then program performance assesses the process measures related to those outcomes. There are many correlates of client outcomes in functioning and satisfaction, some of which are under the control of Medicaid HCBS programs and some of which are not. Yet program performance itself can be measured against a set of expectations for what a quality program should entail. Articulating those expectations was a central challenge of the Measure Scan project.
One option for defining "program performance" would be to look to the existing expectations for quality held by CMS, the Federal agency with regulatory authority for the Medicaid program. For at least one Medicaid HCBS program, the 1915(c) waiver authority, there is a regulatory definition of program performance standards, namely the six assurances articulated in CFR §441.302. State Medicaid 1915(c) programs are currently required to provide evidence to CMS of their compliance with these six key dimensions of program performance, including development of individualized plans of care for participants, safeguarding of participant health and welfare, and integrity of program payments.
Over time, Federal expectations of the type of evidence required to illustrate compliance with the assurances have evolved, with an ever-increasing emphasis on the collection and analysis of robust, population-based data. As a result, many State programs are actively working to create valid and meaningful measures of program performance aligned with the assurances, though most of these efforts are in their infancy.
Although not mandated, the CMS Quality Framework, released by the agency in 2003, offers another potential paradigm for defining program performance. The Quality Framework comprehensively assesses program performance through seven "focus" areas for quality oversight, including provider capacity and participant access.xix State participants, as well as associations representing State officials, told us that the Quality Framework was also a driving force in how State Medicaid HCBS programs defined and measured program performance.
The importance exercise we undertook with stakeholders did not address overall expectations for program performance per se. Rather, as described in Section II, TEP members and other stakeholders ranked a series of constructs according to their perceived importance for measuring HCBS quality overall. All are arguably measures of program performance in one way or another. We then mapped those constructs to the client experience and functioning DRA domains, leaving the residual constructs to define program performance. The three remaining constructs (access to case management, care coordination, and receipt of all services in the care plan) are discussed below. While each may capture one dimension of program performance, they are in no way meant to encapsulate the entire domain.
The definition and role of case management services, sometimes known as service coordination or care management, vary across State Medicaid HCBS programs. Within self-directed programs, this function, often known as support brokerage, differs even more, in recognition of participants' greater role in choosing and directing providers and services. Furthermore, not all HCBS recipients require case management services of any type. Nonetheless, the function of managing and overseeing the care plan that is often performed by case managers can play an important role in facilitating quality outcomes, such as the receipt of needed supports.
"Access" itself is a necessary component of obtaining high-quality services.18 Individual access to case management or supports brokerage, at least for those who receive such services, is potentially one measure of program performance relative to expectations. The related concept of care coordination among different providers and payers has been theorized to enhance individual client outcomes. Programs that can coordinate the services provided to an individual from disparate sources can increase the likelihood of positive health outcomes. They can also decrease the risk of negative outcomes, such as contraindicated medications and other therapies.
Both case management and care coordination can affect receipt of all specified care plan services. This final construct was identified as a floor for program quality by stakeholders. In addition, a 2003 review of Medicaid HCBS quality by the Government Accountability Office (GAO) found that many program participants were not receiving the services specified in their care plans. GAO cited this finding as evidence of inadequate program oversight.19
Gap Analysis
Table A.V.3b includes the small number of extant measures we found for these three constructs that met the threshold criteria. Relative to the other two domains referenced in the DRA, we found the fewest potential candidate measures for program performance. This may be a function of the program-specific nature of these constructs, as well as larger methodological issues in defining them.
We found few prevalent, common measures for the receipt of case management services (including care coordination and supports brokerage) or access to such services. These data could theoretically come from either client report or administrative sources. However, because these concepts are poorly defined, both within the HCBS world and the larger health care delivery system, measurement from either source is challenging. Draft regulations from CMS have recently redefined the scope and reimbursement of case management services within Medicaid HCBS,xx making this very much a moving target for assessment. Similarly, frameworks for care coordination are still under development, with a National Quality Forum membership meeting on this topic planned for later this year.
Despite the uncertainty, we noted that satisfaction with, receipt of, and perceived value of case management were addressed in many State-specific survey tools. These constructs, however, are not the same as access to services. Access can be measured, for example, in terms of the ratio of professionals to participants, waiting time to reach staff, staff responsiveness to requests for assistance, or reimbursement/eligibility for care management services. It is this latter type of access metric that may require further development or refinement.
Similarly, we did not find simple, standardized measures for the receipt of all services specified in plans of care. This is one of the constructs that TEP members suggested could be measured with administrative data, such as claims data or internal quality management data from file reviews or other sources. State Medicaid programs are likely already examining service plan implementation within their HCBS programs, although few of their "measures" for doing so turned up in our scan.
A final issue for consideration is the role of individual versus system measures of program performance. Measures of client function and experience, by definition, are based on client-level data aggregated to programs. Program performance measures, including some of those noted above that assess aspects of the CMS assurances, may be constructed from individual measures or may take other aspects of the program as their unit of analysis. For example, if access to case management is measured by the number of case management staff or agencies that meet defined provider qualifications, client-level data will not be appropriate. Similarly, care coordination may be evaluated through staffing patterns, interagency coordination agreements, or treatment protocols, which are assessed at a systems, rather than individual, level.