More About the National Evaluation
Contents
- Overview
- Evaluation Objectives
- Data Sources
- Demonstration Projects by Grant Category
- Evaluation Questions
- Evaluation Design Plan
- Learn More About the CHIPRA Quality Demonstration
Overview
In August 2010, the Agency for Healthcare Research and Quality (AHRQ), in partnership with the Centers for Medicare & Medicaid Services (CMS), awarded a contract to Mathematica Policy Research and its partners, the Urban Institute and AcademyHealth, to evaluate this demonstration grant program. At the time, it was the largest evaluation of child health care quality underway in the United States. Together, the grant program and its evaluation were among the Nation's most important efforts to improve the quality of health care for children enrolled in Medicaid and the Children's Health Insurance Program (CHIP).
The national evaluation team was charged with conducting a rigorous evaluation of the CHIPRA Quality Demonstration Grant Program to determine the impact of grantee activities on the quality of children's health care.
Evaluation Objectives
As described elsewhere, 18 States implemented 52 projects in 5 broad categories (see Table 1 below). The national evaluation team gathered many different kinds of qualitative and quantitative data about these projects and about the collaboration among grantees and their partners. The team analyzed this information in a multi-level evaluation with the following objectives:
- Assess the implementation of each project independently of all others, focusing on whether the project's goals and objectives were achieved.
- Combine information across projects within a single grant category to identify effective strategies and successful outcomes.
- Examine how specific States improved the quality of children's health care by implementing multiple projects and describe how the activities in one grant category supported or enhanced projects in other categories.
- Determine the extent to which collaborations among States contributed to the success of the demonstration activities.
- Assess the overall benefits of the demonstration program by comparing selected outcomes of the participating States with those of non-participating States.
- Examine the contributions of demonstration activities to improvements in quality of care with respect to four special interest areas: oral health, obesity, behavioral health, and Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) programs.
- Provide insights into the successes and limitations of the program to inform future Federal demonstration efforts.
Data Sources
Sources of quantitative data included administrative and claims data, original survey data from providers, and data from State or grantee-specific evaluation teams. Qualitative data sources included program documents and reports, key informant interviews with program staff and stakeholders, and information gleaned from focus groups with families.
Demonstration Projects by Grant Category
Table 1. CHIPRA Quality Demonstration Projects by Grant Category
| State | Using Quality Measures to Improve Care Quality | Promoting Use of Health IT to Enhance Quality of Care | Evaluating a Provider-Based Model | Testing a Model EHR Format for Children | Testing State-Specified Approaches to Improving Quality |
|---|---|---|---|---|---|
| Alaska | √ | √ | √ | | |
| Colorado* | √ | √ | | | |
| Florida* | √ | √ | √ | √ | |
| Georgia | √ | √ | | | |
| Idaho | √ | √ | √ | | |
| Illinois | √ | √ | √ | √ | |
| Maine* | √ | √ | √ | | |
| Maryland* | √ | √ | | | |
| Massachusetts* | √ | √ | √ | | |
| New Mexico | √ | √ | | | |
| North Carolina* | √ | √ | √ | | |
| Oregon* | √ | √ | √ | | |
| Pennsylvania* | √ | √ | √ | | |
| South Carolina* | √ | √ | √ | | |
| Utah* | √ | √ | √ | | |
| Vermont | √ | √ | √ | | |
| West Virginia | √ | √ | √ | | |
| Wyoming | √ | √ | √ | | |
| Total Projects in Category | 10 | 12 | 17 | 2 | 11 |
Source: Final operational plans, evaluation addenda, and evaluation planning calls with grantees.
*Grantees.
Note: Health IT = health information technology; EHR = electronic health record.
Evaluation Questions
AHRQ and CMS identified 20 broad research questions and well over 200 detailed ones that the national evaluation could choose to address in evaluating the demonstration projects. Examples of the broad questions include the following:
- How did stakeholders use the initial set of core quality measures for children, and what was their impact on the delivery system?
- What health information technology (IT) or health IT enhancements were effective in improving quality of care or reducing costs?
- How were the provider-based models implemented by the demonstration States, and did they change children's health care quality?
Findings from the national evaluation were organized, published, and disseminated in ways that addressed the needs of stakeholders, including Congress, AHRQ, CMS, States, the provider community, and family organizations. The national evaluation team disseminated the results of its analyses through evaluation highlights, implementation guides, journal manuscripts, and presentations.
For published reports, visit the What We Learned page.
Evaluation Design Plan
This final design plan presents the national evaluation team's approach to evaluating the CHIPRA Quality Demonstration Grant Program. It was a "living document" that evolved as the States' programmatic and evaluation activities were shaped by actual implementation experiences. Grantees and partner States varied widely in their implementation schedules for specific activities. The document describes the evaluation design based on information provided by the States in February 2014.
- Summary of the Final Evaluation Design Plan (PDF, 111 KB), May 2014.
Learn More About the CHIPRA Quality Demonstration
- Learn more about the demonstration projects from reports and resources produced by the demonstration States.
- To learn about each of the grant categories, use the left navigation bar, which is organized by 5 areas of focus.
- Learn more about the projects in each demonstration State.
Please note: This Web site uses the term "national evaluation" to distinguish this evaluation of the entire demonstration program from evaluations commissioned or undertaken by grantees. The word "national" should not be interpreted to mean that findings are representative of the United States as a whole.