Ideally, members of the PCA diagnostic team will support and augment each other's performance. As diagnostic PCA teams take shape, we recommend extending the team-based healthcare principles proposed by the IOM to include AI as a member of the diagnostic team. These principles include establishing shared goals, clear roles, mutual trust, effective communication, and measurable processes and outcomes (Table 1).1
Table 1. Principles for patient-clinician-AI diagnostic teams
| Principle | Definition | Patient/Caregiver | Clinician | AI |
|---|---|---|---|---|
| Shared goals | The team, including the patient and, where appropriate, family members or other support people, works to establish shared goals that reflect patient and family priorities and can be clearly articulated, understood, and supported by all team members. | The patient and their family member or other support person inform the clinician about a health problem. They describe this concern within the patient’s context and express the patient’s values, preferences, and circumstances. | The clinician validates the concern of the patient, family member, or other support person. The clinician agrees to partner with the patient/family/support person to develop a plan to diagnose and treat the patient’s health problem. | The clinician recommends using an AI-based algorithm developed to evaluate the patient’s health problem. |
| Clear roles | Clear expectations for each team member’s functions, responsibilities, and accountabilities optimize the team’s efficiency and often make it possible for the team to take advantage of division of labor, thereby accomplishing more than the sum of its parts. | The patient and their family member or other support person agree to report new updates related to the patient’s status. They also agree to enable the AI-based application and ensure that it is used as recommended. | The clinician uses data from the patient’s history, physical examination, test results, and AI algorithm to diagnose the patient’s health problem. The clinician will incorporate the patient’s values, preferences, and circumstances to provide recommendations for testing and treatment. The clinician is responsible for ensuring that roles are clearly defined. | The AI-based algorithm runs as expected when the device is used. It provides daily summaries and notifies the patient, family, and clinician of relevant events. |
| Mutual trust | Team members earn each other’s trust, creating strong norms of reciprocity and greater opportunities for shared achievement. | The patient and their family member or other support person have rapport with the clinician. The patient trusts that the clinician and AI will effectively complete their assigned tasks (e.g., taking a careful history and physical examination, monitoring, estimating risks, predicting outcomes, developing an evaluation and treatment plan consistent with their values and preferences) to aid the diagnostic process. | The clinician has reviewed the safety, validity, and reliability of the AI-based application. In addition, the clinician has successfully used the application for other patients. The clinician trusts that the patient, family, other support person, and AI can effectively complete their assigned tasks (expressing values and preferences, completing recommended diagnostic testing, monitoring, reporting status changes, etc.) to aid the diagnostic process. | The AI relies on patients, family or other support people, and clinicians to complete their assigned tasks (wearing the technology, ordering tests, having tests completed, interpreting model output, etc.). |
| Effective communication | The team prioritizes and continuously refines its communication skills and has consistent channels for efficient communication. | The patient and their family member or other support person communicate with the AI by providing data (e.g., via wearable device) needed to estimate risk and identify events. They provide status updates by communicating with the clinician (or other members of the healthcare team) in person, virtually, or electronically. | The clinician (and other members of the healthcare team) efficiently and effectively respond to the patient’s status updates and questions. The clinician may also serve as the liaison between patients, their family members, other support people, and the AI. Clinicians use shared decision making that integrates patient values, preferences, and circumstances, available data, and model outputs to develop a care plan. | The AI receives input from patients and clinicians to generate outputs that aid the diagnostic process. The AI clearly communicates its outputs to patients, their family members or other support people, and clinicians in a timely fashion that does not negatively impact lifestyle or workflow. |
| Measurable processes and outcomes | Reliable and ongoing assessment of team structure, function, and performance is provided as actionable feedback to all team members to improve performance. | The patient and their family member or other support person provide regular updates related to the perceived utility and feasibility of the diagnostic plan (e.g., barriers to having testing completed, barriers to using AI). | The clinician must regularly evaluate the risks and benefits of the team’s structure and function. The clinician should receive feedback from patients about the team’s structure and effectiveness. Clinicians should become comfortable communicating feedback about a diagnostic model’s utility to developers and health system leadership. | Continued monitoring by the model may provide updates about the patient’s status after an intervention designed to address the health problem. AI outputs may include the frequency and duration of patient or clinician use of the application. |
Shared Goals
In PCA teams, the process of establishing diagnostic goals moves from a bidirectional exchange between the patient and the clinician to a tridirectional exchange. In a tridirectional exchange, patients and clinicians receive information from AI systems, AI systems receive information from patients and clinicians, and patients and clinicians receive information from one another.
Each member of the PCA diagnostic team brings their own unique goals to the relationship. For example, clinicians’ goals may include providing safe, efficient, evidence-based care. Likewise, AI algorithms are ideally designed with the goal of effectively performing specific tasks (e.g., recognizing an arrhythmia), often with little consideration of the patients’ or clinicians’ context.
The patient’s values, preferences, and circumstances should provide the foundation on which team goals are established. Patients, their families, or other caregivers must be comfortable describing their unique situation and their wishes. Clinicians should be adept at patient-centered communication to effectively guide these conversations. In addition, AI should be designed and deployed in a way that is conducive to meeting specific diagnostic goals. Ultimately, clinicians will be responsible for ensuring that the goals of each team member are carefully considered and that the team’s shared goals coalesce around patients’ values, preferences, and circumstances.
Clear Roles
Clear, specific delineation of roles within the PCA team is essential to the team’s success. In most cases, the diagnostic process is triggered when a patient notes a specific sign or symptom and notifies their healthcare provider. Increasingly, the diagnostic process may be triggered by patients wearing direct-to-consumer AI-based technologies.
Ideally, these AI alerts will lead to the patient engaging with the healthcare system and providing the information needed for accurate and timely diagnosis and treatment of their health concern. For instance, a patient may be notified by their smart watch (AI role) that they are experiencing tachycardia or an abnormal heart rhythm, which leads to the patient notifying their healthcare provider (patient role). In this situation, a clinician would need to be familiar with the validity, reliability, and intended use of the AI algorithm and the patient’s context to determine the need for additional evaluation (clinician role).
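To make the AI role concrete, the sketch below shows the kind of simple threshold rule a consumer device might apply to heart-rate data. It is a minimal illustration only: the 120 bpm threshold, 10-minute averaging window, and alert wording are assumptions for demonstration, not drawn from any specific product.

```python
from statistics import mean

# Illustrative values only; real devices use validated, regulator-cleared algorithms.
TACHYCARDIA_BPM = 120   # assumed resting heart-rate threshold
WINDOW_MINUTES = 10     # assumed averaging window, one sample per minute

def check_for_tachycardia(bpm_samples: list[float]) -> str | None:
    """Return a patient-facing alert if the recent average exceeds the threshold."""
    recent = bpm_samples[-WINDOW_MINUTES:]
    if len(recent) == WINDOW_MINUTES and mean(recent) > TACHYCARDIA_BPM:
        return (f"Sustained elevated heart rate (about {mean(recent):.0f} bpm). "
                "Consider contacting your healthcare provider.")
    return None

# Example: ten minutes of elevated readings triggers the notification.
print(check_for_tachycardia([128, 131, 127, 133, 130, 129, 134, 126, 132, 130]))
```

Note that the device only notifies (AI role); interpreting the finding and deciding on further evaluation remain with the patient and clinician.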
Another use case involves the clinician recommending a specific wearable or other AI-based technology to perform a specific task (e.g., information gathering, integration of information) for diagnostic purposes. For example, a clinician may recommend a wearable technology to assess a patient’s fall risk.29 The patient, family member, or other caregiver would be responsible for ensuring the technology is worn, while the AI would provide daily summaries about fall risk and near falls and notify family members or the clinician if a fall occurs. This information could lead to interventions to prevent falls.
While patient and clinician autonomy should be preserved, AI will have variable levels of autonomy depending on the diagnostic problem. In addition, clinicians and patients will have varying levels of comfort with the level of autonomy assigned to AI systems. For example, in some cases AI will provide a diagnosis independent of a clinician. The AI-based IDx-DR system, which autonomously diagnoses diabetic retinopathy without clinician overreading, is an example of a fully autonomous model.30
Alternatively, an algorithm may alert clinicians to an increased probability of a diagnosis, leading to more efficient triage and diagnosis of a problem. For instance, an AI-based system designed to identify intracranial hemorrhage on CT images may lead radiologists to prioritize earlier reading of CT scans designated as high risk.31
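A minimal sketch of how such triage might work follows: studies with a higher model-estimated hemorrhage probability are moved to the front of the reading worklist. The accession numbers and scores are invented, and the model call is assumed to happen upstream; the sketch shows only the prioritization step.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Study:
    priority: float                      # lower value = read sooner
    accession: str = field(compare=False)

def build_worklist(studies: list[tuple[str, float]]) -> list[str]:
    """Order CT studies so higher model-estimated hemorrhage risk is read first.

    `studies` pairs an accession number with a hypothetical model score in [0, 1].
    """
    heap = [Study(priority=-score, accession=acc) for acc, score in studies]
    heapq.heapify(heap)
    return [heapq.heappop(heap).accession for _ in range(len(heap))]

# Example: the high-risk scan (score 0.92) surfaces first in the radiologist's queue.
print(build_worklist([("CT-1001", 0.07), ("CT-1002", 0.92), ("CT-1003", 0.31)]))
```

The model does not render the diagnosis here; it only reorders the queue so the radiologist reads the likely positive study sooner.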
Finally, we acknowledge the rapidly evolving field of AI. Patients, clinicians, and developers will need to be nimble and capable of adapting as roles are likely to change. For example, returning to the wearable example above, one can imagine AI directly notifying the clinician of actionable information (responsibility shifts from the patient to the AI), leading to more efficient diagnosis. This type of change may require adjusting processes and workflows to accommodate a new stream of information that arrives from a different source, is presented in a different way, and perhaps arrives at different times.
Mutual Trust
Assigning roles may prove futile if trust between team members is not established. Within the PCA triad, mutual trust is needed between patients and clinicians, and patients and clinicians need to trust the AI system’s ability to meaningfully contribute to the diagnostic process. How to establish trust between AI and patients/clinicians is an area of active research.32,33
Rojas and colleagues suggested that clinician trust in AI be informed by the system’s fairness, transparency, and performance.34 To this end, the Office of the National Coordinator for Health Information Technology recently proposed the HTI-1 rule, which would require that users of AI-based clinical decision support systems have access to answers to three basic questions35:
- What data were used to train the algorithm?
- How should the predictive algorithm be used, updated, and maintained?
- How does the algorithm perform using fairness metrics in testing and in local data, if available?
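As a simple illustration of the third question, the sketch below compares true-positive rates (sensitivity) across patient subgroups, one common fairness check among many. The subgroups, labels, and predictions are invented for demonstration; a real fairness evaluation would examine multiple metrics on representative local data.

```python
from collections import defaultdict

def true_positive_rate_by_group(records):
    """Compute TPR (sensitivity) per subgroup from (group, y_true, y_pred) triples.

    Large gaps between subgroups suggest the model may perform unevenly --
    one signal, among many, that warrants further fairness evaluation.
    """
    true_positives = defaultdict(int)
    positives = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 1:
                true_positives[group] += 1
    return {g: true_positives[g] / positives[g] for g in positives}

# Invented example data: (subgroup, true label, model prediction)
records = [("A", 1, 1), ("A", 1, 1), ("A", 1, 0),
           ("B", 1, 1), ("B", 1, 0), ("B", 1, 0)]
print(true_positive_rate_by_group(records))  # {'A': 0.666..., 'B': 0.333...}
```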
As described above, in some cases clinicians will introduce diagnostic AI algorithms to patients. In other cases, patients will bring new AI-based technologies to clinicians. Regardless of the initiator, clinicians will need to develop the skills to critically evaluate AI-based diagnostic systems and make recommendations about their use. Thus, prospective studies are needed that demonstrate AI systems’ reliability, validity, and positive effect on important patient outcomes.
As with other healthcare interventions, patients are unlikely to adhere to plans that include AI if they do not trust the provider recommending the technology or do not trust the technology itself. These feelings may be especially common among historically marginalized groups.36,37 Biased algorithms can worsen health inequities, as described by Obermeyer and colleagues, who showed that a widely used algorithm exhibited significant racial bias: it disproportionately recommended additional assistance for White patients compared with Black patients, despite Black patients being sicker.38 To mitigate bias and increase patient trust, rigorous strategies must be used during the development and implementation of algorithms.39
Similarly, clinicians will need to trust that AI-based technologies are safe and effective before recommending them to patients. In addition, clinicians must trust that patients are experts about their experiences, context, and values.40 Effective communication between all team members will be essential to developing the mutual trust needed for optimal functioning of the PCA team.
Clinicians will need education and training related to the use of AI/ML in the diagnostic process, which is beyond the scope of this brief. However, we acknowledge that they will need to become adept at appraising information describing AI/ML interventions, applying the output of diagnostic models, and communicating the role AI/ML played in the diagnostic process.41 This education and training will also be essential to avoiding underreliance or overreliance on AI/ML in the diagnostic process.
Effective Communication
Communication between humans and AI comes in many forms. For example, patients may communicate with AI systems by speaking; entering text, photos, or other types of data into the system; or using wearable devices (e.g., smart watches) that provide continuous monitoring. The AI communicates by producing an output in the form of an alert, reminder, probability, diagnosis, recommendation, or intervention.
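One way to picture these output types is as a structured message delivered to a patient or clinician. The sketch below is a hypothetical representation; the field names, categories, and example content are assumptions for illustration, not any existing standard.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class OutputType(Enum):
    ALERT = "alert"
    REMINDER = "reminder"
    PROBABILITY = "probability"
    DIAGNOSIS = "diagnosis"
    RECOMMENDATION = "recommendation"

@dataclass
class AIOutput:
    """One communication from the AI to a patient or clinician (hypothetical schema)."""
    output_type: OutputType
    message: str                         # plain-language summary for the recipient
    probability: Optional[float] = None  # populated only for risk estimates
    audience: str = "patient"            # "patient" or "clinician"

example = AIOutput(
    output_type=OutputType.ALERT,
    message="Irregular rhythm detected over the past hour.",
    audience="patient",
)
```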
AI communication with patients will depend on its purpose, goals, and FDA classification. For example, non-FDA-regulated technologies should not provide a diagnosis and may instead alert the user to a potential abnormality (e.g., bradycardia). Outputs of FDA-regulated technologies will depend on the technology’s risk classification, with higher-risk models producing outputs used in higher-risk situations (e.g., an ML-based cardiac defibrillator).
In fall 2022, the FDA provided updated guidance on designation of software as a medical device (SaMD).42 Previously, most EHR software did not meet SaMD criteria. However, this situation is changing as EHRs, hubs for healthcare communication, increasingly incorporate AI-based predictive algorithms designed to augment diagnostic decision making.34,43
Clinician interaction with diagnostic AI algorithms will most often occur via the EHR or patient-owned wearable devices. Developers and health system leaders should proceed with caution to avoid making an already overwhelming EHR workload unmanageable with the addition of AI. For example, clinicians could quickly become discouraged by AI if they open the EHR to find patient-generated data lacking clinical context for multiple patients. While the data may contain valuable diagnostic information, attention must be given to the form and frequency of presentation of these data.
Patients and clinicians should be able to seamlessly communicate with AI. Therefore, anyone developing or implementing these models should consider usability. Patients and clinicians are not likely to engage with these potentially useful technologies if they significantly disrupt lifestyle or clinical workflow, resulting in inconvenience and inefficiency. Furthermore, AI outputs should be presented to patients and clinicians at the right time, with appropriate frequency, in a clear, concise, and user-centered manner. Optimal output is likely to minimize alert fatigue, or desensitization to alerts, helping to avoid diagnostic errors that result from inappropriate or inadequate use.
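As one illustration of a design choice that limits alert fatigue, the sketch below suppresses repeat alerts for the same finding within a cooldown window. The 24-hour window and the class interface are illustrative assumptions; the appropriate window and escalation rules would depend on clinical context and should be set with clinician input.

```python
import time

# Illustrative cooldown: suppress repeat alerts for the same finding
# within 24 hours. The right window depends on the clinical context.
COOLDOWN_SECONDS = 24 * 60 * 60

class AlertThrottle:
    """Deliver an alert only if the same finding has not fired recently."""

    def __init__(self, cooldown: float = COOLDOWN_SECONDS):
        self.cooldown = cooldown
        self._last_sent: dict[str, float] = {}

    def should_send(self, finding: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        last = self._last_sent.get(finding)
        if last is not None and now - last < self.cooldown:
            return False  # suppressed: same finding alerted recently
        self._last_sent[finding] = now
        return True
```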
Measurable Processes and Outcomes
Re-evaluation of the PCA diagnostic team’s structure and function should occur at regular intervals. Patients and clinicians may address this issue during follow-up visits or via patient- or clinician-initiated portal communications. Another option is to discuss the matter with other members of the healthcare team (e.g., nurses, medical assistants, pharmacists, care navigators).
Patients should have the opportunity to provide anonymous feedback to clinicians and healthcare organizations via patient satisfaction surveys. The team’s effectiveness should be measured based on the shared goals set at the beginning of the diagnostic process. The team should consider the patient, clinician, and AI-related factors that positively or negatively impact the diagnostic process. When possible, team members should share thoughtful, constructive feedback with each other.
Clinicians should be comfortable communicating with health system leadership about AI performance. This feedback may lead to improvements in model performance and usability, as well as iterative updates similar to EHR upgrades. Ultimately, we imagine systems in which feedback can also be provided directly to the AI system as it learns and adapts over time. Similarly, as alluded to above, AI might provide feedback to clinicians about how they might improve delivery of patient care.
The goal is for the diagnostic team’s performance to be augmented by the collective contribution of each team member in a way that allows efficient and effective diagnosis of the patient’s medical problem. Therefore, consistent with the diagnostic process, health systems and clinicians should learn from the diagnostic errors, near misses, and accurate, timely diagnoses that result from PCA diagnostic teams.