The national evaluation team has worked closely with AHRQ, CMS, and the demonstration States during the 5-year evaluation period. As a result, we have had many opportunities to observe and reflect on the design of the grant program itself. In this section, we discuss our observations about five of the program’s key characteristics:
- The grant program’s resources were spread across many discrete projects.
- Multistate partnerships heightened cross-State learning but posed administrative challenges for demonstration staff.
- The quality demonstration grant program did not explicitly encourage the development of payment models or other approaches to promote sustainability of successful projects after the grant period.
- Implementation of grantees’ projects was supported by several administrative structures, including full-time project directors, an initial planning period, and no-cost extensions.
- Several aspects of the demonstration’s structure affected the likelihood of obtaining rigorous evaluation results from the outset.
Allocation of Grant Program Resources
After Congress passed CHIPRA in February 2009, CMS developed the details of the CHIPRA quality demonstration grant program, with input from AHRQ, and issued the grant solicitation on September 30, 2009. Although constrained by four categories stipulated in the CHIPRA legislation, CMS was able to make several decisions that affected the scope of the grants. The first was to restrict applicants for the grants to State Medicaid agencies and not award grants directly to providers. The second was to create Category E, a broad addition to the Congressionally-mandated categories. The third decision was to allow applicants to apply for funding in more than one of the categories. The final decision was to encourage States to collaborate and submit multistate applications, which were permitted by the authorizing legislation. One result of these decisions was a large number of separate projects—52 overall.
Congress appropriated $100 million for the CHIPRA quality demonstration grant program, a substantial Federal investment designed to generate knowledge about ways to improve the quality of care for children. The value of the 10 grants ranged from approximately $7.8 million to $11.3 million over 5 years and supported from three to nine discrete projects (Table 3). Although projects varied in size, it is instructive to calculate average per-project funding amounts. As shown in Table 3, the average amount available per project per year varied substantially across the grantees, depending on the number of partner States and the number of categories covered by each partner.
The figures in the last two columns in Table 3 should be interpreted as general indices of the average level of funds available, rather than precise amounts spent on any given project in a particular year. Moreover, grantees and their partner States established grant operations in very different ways, with varying degrees of subcontracting and in-kind contributions. Few, if any, of the States would be able to report dollars per project, because many individuals paid by grant dollars were working on multiple projects at any one time. In addition, most grantees and States requested a no-cost extension, meaning that their award was stretched beyond a 5-year period.
Table 3. CMS grant amounts received, number of projects, and average amount per project per year, 10 CHIPRA quality demonstration grantees
Grantee (Total # States) | Total Amount Received ($) | Total Number of Projects1 | Average Amount Per Project ($)2 | Average Amount Per Project Per Year ($)3 |
---|---|---|---|---|
Oregon (3) | 11,277,361 | 9 | 1,253,040 | 250,608 |
Florida (2) | 11,277,361 | 8 | 1,409,670 | 281,934 |
Maryland (3) | 10,979,602 | 7 | 1,568,515 | 313,703 |
Utah (2) | 10,277,361 | 6 | 1,712,894 | 342,579 |
Maine (2) | 11,277,362 | 6 | 1,879,560 | 375,912 |
Colorado (2) | 7,784,030 | 4 | 1,946,008 | 389,202 |
Massachusetts (1) | 8,777,361 | 3 | 2,925,787 | 585,157 |
North Carolina (1) | 9,277,361 | 3 | 3,092,454 | 618,491 |
South Carolina (1) | 9,277,361 | 3 | 3,092,454 | 618,491 |
Pennsylvania (1) | 9,777,361 | 3 | 3,259,120 | 651,824 |
Total | 99,982,521 | 52 | 1,922,741 | 384,548 |
Source: Centers for Medicare & Medicaid Services (CMS).
Note: The figures in this table do not include in-kind contributions from the States or other Federal agencies, which in many cases were substantial.
1 Number of discrete projects implemented by grantee and partners (see Table 1).
2 Total amount received divided by number of projects.
3 Average amount divided by 5. (We did not account for the no-cost extension period.) Amount reflects average level of funds available, rather than precise amount spent on any given project in a particular year.
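To make the arithmetic in footnotes 2 and 3 concrete, a worked example using Oregon’s figures from the table (amounts rounded to the nearest dollar):

$$
\frac{\$11{,}277{,}361}{9\ \text{projects}} \approx \$1{,}253{,}040 \text{ per project}, \qquad \frac{\$1{,}253{,}040}{5\ \text{years}} \approx \$250{,}608 \text{ per project per year}
$$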
Overall, the grant program’s large number of projects had benefits and drawbacks. On one hand, the number and breadth of projects provided many opportunities to identify QI strategies across diverse topic areas. Involving a considerable number of States in a large number of projects may have attracted greater contributions by States, health plans, practices, and other funders, leveraging the Federal investment. On the other hand, the States were limited in the scope of certain projects because grant funding was spread thin across so many efforts. For their Category C projects, for example, most States engaged a relatively small number of sites in grant activities. Alaska engaged the fewest practices (three), and Illinois the most (about 25 practices signed up for learning collaboratives). Most others had between 10 and 18 practices. Not only did this limit the demonstration’s potential to have a direct impact on a large number of children’s lives, but the small number of sites also interfered with the ability to conduct rigorous evaluation, as noted in Section 4.
The purpose of the grant program was to “evaluate promising ideas for improving the quality of children’s health care provided under [Medicaid and CHIP].”21 States varied in how they pursued promising ideas, which had implications for how grant funds were spent. For some States, this meant demonstrating proof of concept. Alaska, for example, used grant dollars to explore and operationalize the concept of a medical home in a frontier environment. For other States, it meant implementing a pilot study in a few locations, with the potential to spread the intervention if the pilot proved successful. For example, Utah used grant funds to support care coordinators in 12 practices; after the grant funding ended, it used another source of funds to spread the use of coordinators to other practices.
Still other States pursued promising ideas by building on previous efforts. The Maryland team, for example, used grant funds to strategically explore avenues for supporting CMEs. Its eventual pursuit of a Medicaid waiver opened a new funding stream that could serve many more children. Another example is Vermont, whose CHIPRA team used demonstration funds to accelerate the timeline for implementing an ongoing statewide initiative (Blueprint for Health) with pediatric practices.
The abundance of projects allowed many efforts to move forward simultaneously in the demonstration States. But demonstration projects may have suffered from being underfunded, making them weaker tests of the promising ideas they explored. Furthermore, the diversity and multitude of projects made it more difficult to summarize the demonstration’s lessons for policy and program administrators. A more focused grant program could have produced more definitive results on fewer topics, rather than more limited conclusions across a broader range of topics.
Multistate Partnerships
As noted above, six of the quality demonstration awards involved multistate partnerships (see Table 1). States in these partnerships were committed to learning from and sharing ideas with each other. In all cases, the States allocated time and resources to support these partnerships, although the methods and amount of resources varied. Two of the six grantees (Illinois/Florida and Maryland/Georgia/Wyoming) hired independent organizations to convene the partners and foster cross-State learning.
As described in detail in our sixth Evaluation Highlight, these partnerships had significant benefits and challenges. Several States collaborated closely with their partners by developing joint projects, integrating activities, and setting up complementary implementation schedules. States shared information through activities such as visiting each other’s sites, trading key materials and reports, and scheduling regular teleconferences or in-person meetings. Generally speaking, States found that they offered each other complementary, rather than redundant, skills and expertise.
Interviews with staff and presentations made during several monthly grantee calls hosted by CMS noted the following benefits of these partnerships:
- Fairly rapid and easy dissemination of information about tools, training resources, and other QI initiatives across partner States, thereby filling gaps in expertise and capacity.
- An opportunity to learn the operational details needed to implement a particular strategy from more experienced State staff or consultants, thus potentially avoiding some mistakes.
- Opportunities to expand the spread and potential impact of a project across States.
Staff in most States felt the benefits of partnering outweighed the costs, but also noted the following challenges:
- Working together is both time- and labor-intensive. States reported that project activities took longer to implement than they might have if a State were “going it alone,” especially with regard to financing project work across States, reporting, and decisionmaking.
- Establishing and maintaining contracts and agreements between State governments can result in implementation delays.
Payment and Other Approaches to Sustainability
States tested models for improving child health care delivery, but most did not establish associated payment mechanisms to sustain these models after the grant ended. For example, some States used grant funds to offer payments to practices for participating in QI collaboratives or to provide stipends or salaries for care coordination, but they did not establish ongoing financing approaches, such as making care coordination a billable Medicaid service. As a result, most States did not have the administrative infrastructure or an alternative source of revenue in place at the end of the grant to institutionalize incentives for practices to continue QI activities. Notable exceptions to this general observation include Pennsylvania’s continuation of its pay-for-reporting and pay-for-improvement program, South Carolina’s creation of a new children’s health care quality office in its Medicaid agency, and Maryland’s Medicaid State Plan Amendment that provides a funding stream to support CMEs.
Efforts to transform the delivery system are unlikely to be successful unless new payment models emerge to support them. To help promote sustainability of successful interventions, CMS and other funders could consider requiring efforts at payment reform or other sustainability planning to be explicit parts of projects through the application, operational planning, and implementation stages.
Grant Administration and Planning
CMS required grantees to ensure that project directors were available full time for grant activities. As a result, the 10 project directors were well informed about the operational activities that the States and their partners were implementing through the grant. As was frequently evident on CMS’ all-grantee calls, this allowed CMS to build a community of individuals consistently engaged with, and knowledgeable about, the goals of the demonstration. One potential drawback to full-time project directors became apparent toward the end of the grant period, when project directors sought other positions in anticipation of the grant’s termination. In some cases, project directors moved to other positions in the State or partnering organizations, and it was difficult to maintain contact with them. Not unexpectedly, the individuals who stepped in as project directors during the grant’s last phase often lacked historical knowledge of grant activities. Future grant programs could address this issue by educating grantees on planning for leadership succession and on management approaches that preserve institutional knowledge.
CMS required each State to submit an operational plan, which was due approximately 10 months after the grant award. Key stakeholders in some States noted that this planning period substantially helped subsequent program implementation by better aligning grant activities with what was considered feasible. The period between grant award and submission of the plan allowed the States to refine their proposed plans in response to many factors that were likely to have evolved significantly from the original grant application period. As a result, in certain key respects, some States’ operational plans differed significantly from their applications. In some cases, States realized during this planning period that certain projects proposed in their applications (including some of the health IT-related efforts) could not be practically implemented and therefore shifted funds to other grant efforts.
CMS granted no-cost extensions (NCEs) ranging from 3 to 12 months to nine grantees, and as a result, 15 States continued to operate some aspects of their projects beyond the original end date of February 22, 2015. States used their remaining funds to continue selected program elements, such as quality measure reporting or statewide partnerships, or to complete their own evaluation reports. This extension period also allowed us to gather information about program sustainment that otherwise might have been difficult to collect because key staff would have been hard to contact. Because most States’ NCEs extended beyond the end of this evaluation contract, we were unable to fully assess the influence of NCEs on demonstration activities and sustainability. Future demonstration programs could provide clear and early guidance to participants on whether NCEs might be available and how and when decisions about NCEs might be made.
Grant Structure and Rigorous Evaluation
Because of the emphasis on learning from the demonstration grants, CMS made two important decisions to address the program’s goals for evaluation. First, CMS elected to fund and have AHRQ lead an evaluation of the entire grant program. CMS required grantees to work collaboratively with the national evaluation team contracted by AHRQ and to provide access to program data and staff. Second, CMS allowed States to conduct their own independent evaluations using grant funds, as long as those evaluations did not duplicate the national evaluation. CMS did not, however, require States to develop or support rigorous approaches to evaluating the impact of their programs, such as using comparison groups to control for non-demonstration influences on demonstration sites. (See Section 4 for further discussion of the implications of this limitation.) Furthermore, many projects were underway and intervention sites had been selected and enrolled before the evaluation contract was awarded, limiting the ability to make changes to support rigorous evaluation.
Many applicants to the program requested grant funds for independent evaluations. When grant awards were less than the amount applied for, some grantees cut back their evaluation budgets. This may have contributed to the need for evaluation-focused TA that the national evaluation team provided to the States and their independent evaluators. (See Section 4 for more on evaluation-focused TA.) It also may have limited the level of cooperation States could give to the national evaluation. For example, when the national evaluation team requested claims data from States, there were often delays because State programming resources were scarce. Once obtained, often after multiple requests for the correct format, the data required extensive cleaning. (See Appendix C for further details.) To minimize such problems, future grant programs could earmark grantee funds for conducting or cooperating with evaluation activities.
We believe that future grant programs with similar goals should include review criteria in the grant solicitation on how well applicants demonstrate that their proposed projects could be rigorously evaluated. Proposals that do not meet minimum standards would not be eligible for funding. Additionally, CMS and other funders could consider including standards for evaluability as part of the process for approving operational plans.
Future grant programs could also provide evaluation-focused TA from the beginning of the grant to increase the opportunities for rigorous evaluation and help build relationships between the grantees and evaluation team. This strategy could be further supported by a requirement for at least quarterly communication between State-based and national evaluation teams to encourage them to develop evaluation plans that complement and build on each other.