The Quality Paradigm
Assessments of healthcare quality tend to focus on discrete elements or facts that are predefined, prospective, and measurable. The quality paradigm is based on the foundational Donabedian model,7 published in 1966, which includes assessment of structures, processes, and outcomes and relies heavily on metrics and rapid-cycle improvement methods attributed to Deming.8
In the context of diagnosis, quality metrics are primarily used to assess standard processes or objective findings, such as reliably completing a task or a series of actions needed to reach a timely and accurate diagnosis. This approach often requires durable artifacts, or at least distinct data elements, that can be reviewed during or after the diagnostic process. Metrics in the quality paradigm may or may not be outcome-based. The quality paradigm also requires that a goal, or specific standard, be predefined before measurement. Thus, quality measurement is designed to be objective.
Assessment of diagnosis, as a process, can benefit from the quality paradigm to evaluate discrete phases of work or durable artifacts. In contrast, diagnosis, as a conclusion, is less suited to quality metrics since end points can often be ill defined. In addition, diagnostic labels tend to be dynamic as new information is integrated and patients’ conditions change in response to interventions or with disease progression. Thus, judgments about diagnostic labels tend to be subjective and retrospective.
Quality is typically judged as the result of standardizing care and avoiding process variation, which suits diagnostic processes but does not allow the flexibility and adaptability needed for diagnostic labels. The quality paradigm is widely used for institutional quality improvement activities. It is also applied in the clinical quality measures used widely in public reporting and in value-based programs required by the Centers for Medicare & Medicaid Services, the Joint Commission, and other governing bodies.9 Lean methodology, which focuses on efficient and consistent processes that are iteratively refined, is one example of the quality paradigm applied to improving care.10
The Safety Paradigm
Patient safety has been defined as “freedom from accidental injury”11 and is rooted in recognition of error (the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim12) that creates harm or risk of harm. The national consciousness of medical error was stirred by the landmark 1991 Harvard Medical Practice Study13,14 that described the epidemiology of medical error and reported on the incidence of adverse events in hospitalized patients.
Not until the 1999 report by the Institute of Medicine (IOM, now the National Academies)11 did public attention become focused on adverse events. These are defined as events involving injury or harm resulting from a medical intervention (or lack of intervention) rather than from the patient’s underlying condition.11,13,14
Early on, clinicians and researchers argued that not all adverse events or undesired outcomes, such as postoperative pneumonia, were preventable. Thus, a subcategory of adverse events ascribed to error or substandard care, called preventable adverse events, was created to capture errors that result in injury.13
The first IOM report was followed by two others, Crossing the Quality Chasm15 and Improving Diagnosis in Health Care,3 which declared improving diagnosis a “moral and professional imperative.” Much of the work on diagnostic improvement has been launched in response to these reports and relies on the definitions and frameworks outlined in them.
As the field evolved to consider diagnostic errors as a subset of medical error, the term diagnostic adverse event16 was introduced and defined as diagnostic errors that cause harm. However, in this context, it has been difficult to evaluate harm (or change in outcome) independent of the underlying disease. Debate often revolves around whether something undesirable or suboptimal occurred and then, whether some error or suboptimal care caused or contributed to harm or worsened prognosis.
Inevitably, a conclusion that something is an error requires one also to judge whether it was preventable, which can be complicated and difficult to assess for diagnostic errors. For example, if something is judged to be preventable, was it preventable under the circumstances at the time or preventable under ideal circumstances? Or would it be preventable only after system redesign?
In contrast to general problems in patient safety, which tend to focus on iatrogenic harm or dangerous actions, diagnostic safety can refer to two different types of harm. Based on models from safety science, the focus could be on direct harm from the diagnostic process itself, such as a pneumothorax from a lung biopsy or a bowel perforation complicating a diagnostic colonoscopy. In these cases, specific events cause harm that is independent of the underlying disease, and the event is tangible, observable, measurable, and perhaps (but not necessarily) preventable. Few investigators focus on these types of diagnostic adverse events; in fact, many classify them instead as procedural complications.
The diagnostic process can also result in indirect harm, such as when attempts to rule in or rule out medical conditions result in excessive testing or when insignificant incidental findings result in psychological distress, excessive testing, or expenses for patients.17
Historically, the concept of diagnostic safety has largely focused on failure to adequately diagnose, in which case the harm comes from the underlying disease and is due to failure to apply medical science and skills to optimize patient treatment options and outcomes. But the concept of diagnostic safety is a bit forced; it does not neatly follow the logic of quality or safety science. The triggering event may not be visible or attributable to a single event or moment in time. A combination of improvement methods can be used, but because the outcome and harm are linked to the underlying disease, it is difficult to standardize, benchmark, or distinguish how much harm comes from the underlying condition versus how much additional harm is due to an “error” or “safety event.”
In the first decade after the 1999 IOM report, patient safety efforts tended to focus on identifying, reporting, measuring, and addressing medical errors. The patient safety movement gave rise to new concepts such as:
- Never events18 (the most egregious events that should never occur).
- Serious reportable events in healthcare19,20 (first defined and used by the National Quality Forum).
- Sentinel events21 (patient safety events that result in death, permanent harm, or severe temporary harm, as used by the Joint Commission).
The patient safety movement has transformed how we approach improvement science in healthcare by introducing new ideas and methods from systems and human factors engineering. Patient safety concepts were initially developed from industrial safety principles that use a systems perspective to analyze accidents.22 Many have found keen insights in Reason’s Swiss Cheese Model, which asserts that adverse events result from failures passing through multiple holes in layers of “defenses” and explains that a confluence of many circumstances often contributes to a final failure.23 While this model oversimplifies actual clinical circumstances, it clearly shows how many layers of latent flaws, preconditions, and unsafe acts align to create vulnerable moments of risk.
At the outset, patient safety efforts took lessons from other industries and acknowledged the need to take a systems approach to designing safe systems of healthcare. Such efforts began to place responsibility for safe care on healthcare organizations, a distinct difference from prior generations of work that focused on providers or local sites. Work in resilience engineering expands the analysis of conditions affecting healthcare delivery safety and quality beyond healthcare delivery organizations to include healthcare insurance policies, funding systems, and regulations up to the national level.24
Safety-I and Safety-II Approaches
The traditional approach to safety, sometimes called Safety-I, uses a systems perspective to focus on adverse events that cause harm or create risk (e.g., an incorrect diagnosis). This approach retrospectively analyzes the event to attempt to establish causation, identify contributing factors, and implement measures to prevent recurrence.25
A common technique used in Safety-I is a root cause analysis, which assumes that processes or events can be understood by deconstructing them and often assumes that outcomes are binary (e.g., processes work or they do not work). While the name “root” implies a single cause, the method as described allows for multiple contributing factors that create preconditions that pose risk or directly cause harm.
For solutions, Safety-I focuses on system design, constraints, and standardization to prevent harm and asserts that problems occur when people do not follow established processes or do not behave as they were trained to behave. Safety-I is mostly event centric and outcome based, although attention sometimes extends to near misses and unsafe conditions. The goal of Safety-I is to prevent, avoid, minimize, or mitigate error and harm. Safety-I often sees the human actor as a vulnerability, although it acknowledges that latent flaws in the system create preconditions of risk and situations of vulnerability.
Safety-I often defines factors that contribute to adverse events as failures or errors, even if it uses a systems perspective and diffuses that responsibility by pushing it “upstream” to those who make decisions about resource allocation and further from the “sharp end” providers who interact directly with patients. One consequence of defining failure or problems as “errors” is a strong tendency by those unfamiliar with nuances of safety precepts to ascribe blame, and blame tends to fall on individuals. Regardless of how the field has tried to advance, a focus on “errors” tends to create a defensive posture that limits constructive discussions and efforts at improvement.
Safety-II theory applied to diagnosis posits that healthcare delivery is a complex adaptive system and that the numerous factors that contribute to (or hinder) making a correct diagnosis may interact in unpredictable ways.25,26 Achieving a correct diagnosis may require process variations and adaptations. The capacity to diagnose adequately and correctly may be increased by studying diagnostic successes as well as failures and ordinary conditions. The goal is not just to prevent harm but to achieve success. An underlying premise is that the knowledge and experience of the diagnostician are valuable.
The focus of Safety-II is to improve the likelihood of success by creating conditions and capacities favorable to success. Safety-II more often views the human actor as an asset, capable of reacting and adapting to novel circumstances and challenging situations to achieve desired outcomes.
Do You Mean Quality or Safety?
The quality paradigm predates the safety paradigm, but over time the distinction between the two paradigms has sometimes blurred. As with all language, usage defines meaning, and meanings change over time. However, the original terms influence how one studies and approaches improvement. Thus, there is value in maintaining some historical understanding of the terms and their basic definitions. Good ideas often diffuse, and as they do, combinations of improvement terms and methods are used as they suit the task.
Diagnostic Excellence Paradigm
The phrase diagnostic excellence6,27,28 describes a hybrid model that draws on both the quality and safety paradigms. As a principle, the goal is to deliver care that is optimal while balancing, or at least acknowledging, multiple dimensions that include effective, timely, efficient, safe, equitable, and patient-centric diagnosis, the same parameters defined in the six domains of healthcare quality.15 However, unlike the quality or safety paradigms, diagnostic excellence suggests an even higher bar for optimal performance.
Standards for a comprehensive measure of diagnostic excellence may be difficult to define as a whole. However, parameters of excellence can be defined using traditional quality methods augmented by avoidance of harm and achievement of success through methods from safety science. The concept of excellence requires judgment about what parameters are most important to prioritize for a given situation, and standards vary depending on the health condition and its risk to the patient.
For example, the timeliness expected for diagnosis of an ST-elevation myocardial infarction (STEMI) differs from that expected for a diagnosis of celiac disease. It is also important to recognize that all these parameters may not be equally important, or even possible to satisfy, at any one point. Efficiency and timeliness may compete with effectiveness and safety if risky shortcuts are taken (e.g., chest pain attributed to a musculoskeletal source without confirmation on exam or testing to exclude a more serious explanation when the patient may have risk factors for ischemic chest pain).
The term “diagnostic excellence” adds to the historical paradigms of diagnostic quality and safety. It fosters a growth-oriented mindset and is a comprehensive term that encompasses the goals of accurate and effective diagnosis and of minimizing harm. Diagnostic excellence elevates diagnostic terminology to a positive perspective and shifts the focus from “errors or blame” to the aspiration of achieving optimal diagnostic structures, processes, and outcomes.
Unlike some error-focused terms, such as “diagnostic error,” which emphasize shortcomings, diagnostic excellence is inherently forward looking and constructive. The intent of pursuing diagnostic excellence is to raise the ceiling for performance, which generally aligns with Safety-II principles. In comparison, the quality paradigm tends to focus on defining a floor for performance, and the Safety-I approach focuses on avoiding or mitigating harm.