Several broad observations about evaluation emerged from our 5-year evaluation of the PSML demonstration projects. These observations touch on impact, measurement, and policy.
Efforts to link the issues of patient safety and malpractice liability through the PSML demonstration projects are complex and highly varied.
Applicants for the PSML demonstration grants were required to address both patient safety and medical malpractice outcomes, reflecting AHRQ’s aim of having a positive impact in both areas. However, AHRQ’s original request for applications did not specify how these two pieces were supposed to fit together in the projects, and the demonstrations varied considerably in how they sought to bridge the issues.
A few of the projects implemented both clinical interventions (e.g., best practices in obstetrics) and malpractice interventions (e.g., DRPs) within the same institutions, without a clear connection between the two elements. Some of the projects focused on malpractice interventions on the assumption that these might feed back into hospital-based root cause analysis or other quality improvement processes in ways that could improve patient safety. Others implemented patient safety interventions that plausibly might reduce the occurrence of adverse events and thus stem the flow of subsequent claims.
Of the seven demonstration projects, four (Ascension Health, Fairview Health Services, New York State Unified Court System, and University of Illinois Medical Center at Chicago) attempted to closely explore a causal link between patient safety interventions and subsequent malpractice outcomes, but the strength of findings from these four projects is limited by weaknesses in their research designs. It is also important to acknowledge that practical impediments (e.g., insufficient time for relevant data collection, insufficient numbers of cases to detect statistical effects) made it very difficult for the demonstration projects to undertake this kind of analysis. In sum, seeking to quantify a direct patient safety–malpractice link may not be the most appropriate benchmark for reflecting on the impact of the demonstration projects, given what most of them were actually set up to do.
There is no single most relevant set of measures for capturing PSML outcomes across diverse studies.
Malpractice and patient safety have a complex relationship that may not be easily or simply reducible to a single set of outcome measures. AHRQ’s initial plan to require the PSML demonstration sites to collect Common Formats data on adverse event reporting, for example, was ultimately not enforced by the Agency, in part because of the questionable relevance of those data across all seven demonstration projects. When project leaders were asked what they believed the single most relevant and appropriate outcome measure might be, as applied to their demonstration project, two Principal Investigators answered “time to resolution of [malpractice] claims.” This answer is noteworthy for several reasons: (1) it is a relevant and material criterion for some PSML projects but not others, (2) it does not tie patient safety and malpractice together, and (3) it cannot be generated during a 3-year grant period using the existing administrative datasets that AHRQ had originally hoped would be the primary data source for evaluating the PSML initiative.
Basic evaluation challenges are intrinsic to the PSML portfolio.
Building on the preceding discussion, one of our chief observations from the first year of the PSML evaluation was that limits on data were likely to thwart efforts to assess the impact of the PSML projects in a consistent way. Among the threshold challenges identified was the problem of the malpractice “claims tail” (i.e., that malpractice claims frequently take years after an adverse event occurs to surface and/or resolve). Time lags and idiosyncrasies in data gathered by the PSML projects, inapplicability or intractability of existing national patient safety and malpractice datasets, and substantive variation in the interventions across projects made it difficult for many of the PSML projects to document direct patient safety or claims-based outcomes, much less to do so through consistent measures and data across the seven projects. Here again, we are not suggesting that the projects failed to pursue or achieve useful outcomes, only that a more idiographic approach to evaluating the projects and to identifying relevant outcomes and data may be necessary.
Meanwhile, national data pipelines on patient safety and malpractice outcomes are limited and involve considerable lag time, suggesting another target of opportunity for AHRQ and policymakers going forward. Better national data, particularly on malpractice litigation and early disclosure and settlement activities, will be a prerequisite for any future efforts to assess regional and national performance. Recent changes to the National Practitioner Data Bank (NPDB) have made it even less useful for evaluation purposes. Therefore, future large-scale initiatives that build on the PSML experience and target malpractice outcomes will need to be funded for longer grant periods (6–10 years), allowing the “tail” of claims data to mature and the impact on malpractice outcomes to be assessed. AHRQ and other funders will also need to pay closer attention to the quality of study designs and to grantees’ ability to collect data, to ensure that the evidence generated can support policymaking.