CHIPRA Quality Demonstration Grant Program
In February 2010, the Centers for Medicare and Medicaid Services (CMS) awarded 10 grants, funding 18 States, to improve the quality of health care for children enrolled in Medicaid and the Children’s Health Insurance Program (CHIP). Funded by the Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA), the Quality Demonstration Grant Program aimed to identify effective, replicable strategies for enhancing the quality of health care for children.
Through this program, 18 demonstration States implemented 52 projects in five categories (Table 1):
- Category A: Grantees enhanced their capacity to report and use the CMS Child Core Set of quality measures and other supplemental quality measures for children.
- Category B: Grantees developed or enhanced health information technology (IT) to improve quality of care, reduce costs, and increase transparency. Grantees pursued a range of health IT solutions, such as encouraging uptake of electronic health records (EHRs), developing a regional health information exchange, and interfacing electronic health information with eligibility systems or social service organizations.
- Category C: Grantees developed or expanded provider-based care models. These models include (1) the patient-centered medical home (PCMH); (2) care management entities (CMEs), which aim to improve services for children and youth with serious emotional disorders; and (3) school-based health centers (SBHCs).
- Category D: Grantees implemented and evaluated the impact of a model EHR format for children, developed under a separate Agency for Healthcare Research and Quality (AHRQ) contract in partnership with CMS.
- Category E: In addition to working in at least one of the other categories, grantees proposed additional activities. These activities were intended to enhance their work under another category or focus on an additional interest area for CMS, such as strategies for improving perinatal care.
The demonstration period began on February 22, 2010, and was originally scheduled to end on February 21, 2015. However, CMS awarded no-cost extensions to all grantees that requested them (Table 1). For 11 States, the grant period will end 1 year later than the original termination date, on February 21, 2016; for three States, it will end 6 months later, on August 21, 2015; and for one State, it ended 3 months later, on May 21, 2015. Three States did not request an extension.
Table 1. CHIPRA quality demonstration projects by grant category

| State | Number of Projects (Categories A–E) | Length of No-Cost Extension |
|---|---|---|
| Oregon* | 3 | 6 months |
| Alaska | 3 | 6 months |
| West Virginia | 3 | 6 months |
| Maryland* | 2 | 12 months |
| Georgia | 2 | 12 months |
| Wyoming | 3 | 12 months |
| Utah* | 3 | 12 months |
| Idaho | 3 | 12 months |
| Florida* | 4 | 12 months |
| Illinois | 4 | 12 months |
| Maine* | 3 | 12 months |
| Vermont | 3 | None |
| Colorado* | 2 | None |
| New Mexico | 2 | None |
| Massachusetts* | 3 | 3 months |
| South Carolina* | 3 | 12 months |
| Pennsylvania* | 3 | 12 months |
| North Carolina* | 3 | 12 months |
| Total | 52 | |

Of the 52 projects, 10 were in Category A (report and use core measures), 12 in Category B (promote health IT), 17 in Category C (evaluate a provider-based model), 2 in Category D (use the model EHR format), and 11 in Category E (grantee-specified activities).

Source: Centers for Medicare and Medicaid Services (CMS).
*Grantees. Partner States, where they exist, are listed in the rows directly below each grantee.
Evaluation of the Demonstration Grant Program
On August 9, 2010, AHRQ, in conjunction with CMS, awarded a contract to Mathematica Policy Research and its partners, the Urban Institute and AcademyHealth (hereafter referred to as the national evaluation team, or NET), to conduct a national evaluation of the demonstration grant program (go to Appendix A for a list of NET staff and technical expert panel (TEP) members).1 The evaluation's primary objective was to learn about ways to improve the quality of health care for children enrolled in Medicaid and CHIP. Working under the direction of AHRQ and CMS, the NET designed the evaluation to provide insights into best practices and replicable strategies for improving children's health care quality.
To accomplish these goals, the NET gathered a substantial amount of qualitative and quantitative data regarding the demonstration projects implemented by grantees and their partners. Qualitative data sources included program documents and semi-annual and other reports; 776 key informant interviews with grantee and program staff, participating practice staff, and other stakeholders; and 12 focus groups with parents in selected States. Sources of quantitative data included administrative and claims data, self-reported assessments of medical home characteristics in selected States, and original survey data from physicians in selected States. Using a variety of methods, we analyzed these data to address a series of research questions.2 (Go to Section 4 for information on how the evaluation design evolved over time.)
In most cases, we synthesized information from qualitative interviews with grantee and program staff and other stakeholders across similar projects to describe the implementation of demonstration activities, challenges encountered, lessons learned, and perceptions of the influence of demonstration activities on the quality of children's health care services. We used NVivo©, a qualitative data management and analysis tool, to support our exploration of the data. We also intended to conduct formal impact analyses integrating quantitative data to determine whether particular interventions improved child health outcomes. However, because of data limitations and changes States made to their original implementation plans, we were unable to complete these analyses.
The evaluation addressed many of the original research questions, which we grouped into the five categories noted above. We also addressed additional questions that, during the course of the project, arose in response to developments in the policy environment or from insights gained during data collection and analysis. Some of these additional questions cut across or built on the five demonstration categories.
To address the needs of stakeholders—including Congress, AHRQ, CMS, States, the provider community, and family organizations—the NET disseminated results of analyses through Evaluation Highlights, implementation guides, manuscripts, and presentations. These products are listed in Appendix B and can be found on the national evaluation’s Web site hosted by AHRQ (www.ahrq.gov/chipra/demoeval/).
To further document our plans and progress in meeting the evaluation’s goals, we provided AHRQ with an evaluation design report (updated three times), a plan for providing evaluation-focused technical assistance (TA) to demonstration States (updated twice), a plan for developing and using our TEP (updated twice), a plan for obtaining feedback from key stakeholders, a dissemination plan (updated twice), four interim reports, and summaries of various meetings held during the course of the evaluation.
Final Report
We have three primary goals for this final report. First, we present a synthesis of selected findings from the products produced by the national evaluation.3 We present this synthesis for the five original grant categories and for a category of cross-cutting findings. To develop this synthesis, we reviewed the documents, generated an initial list of key findings and themes, and held internal discussions to identify the most critical ones. Thus, our synthesis is selective, focusing on what we believe are the most useful findings for State and Federal agencies interested in improving the quality of health care for children. Additional findings, and many additional details about the programs that the demonstration States implemented, are contained in the documents themselves. Our findings are presented in Section 2.
Our second goal for this report is to present our observations about the structure of the grant program itself. Specifically, we note the program’s key structural characteristics and discuss their implications for the implementation and sustainability of grantee projects and for the evaluation. We present these observations in Section 3.
Finally, we aim to identify key lessons learned in conducting the evaluation that may help AHRQ or CMS plan future evaluations. Based on our 5-year collaboration with AHRQ, CMS, and the demonstration States, we identified factors that contributed to and hindered the development of rigorous, useful findings from the evaluation. We describe these factors in Section 4 of the report.