Patients interact with their primary care provider only periodically, at best, so measurements are infrequent and provide only a snapshot of an individual’s health. Sampling rate aside, these measurements may be plagued by bias or error (e.g., white coat syndrome or missing context).20 In light of these limitations, technology for measuring physiological signals has been developed for at-home use (e.g., Holter monitors) and is used to diagnose a range of health issues.
AI promises to help process, organize, and transform these data into actionable knowledge.21 We have seen examples of algorithms that can automatically process hundreds of thousands of heartbeats in seconds22,23 and others that can automatically track falls.24 As wearables and sensors become more common in daily living, they will provide a more complete picture of an individual’s daily activities and, in turn, their health.
However, data from wearables will have little impact unless they are effectively integrated into clinical care. Relevant information must be extracted from the signals and presented to clinicians (and patients) in a way that is actionable. The field of human-AI (and, more specifically, clinician-AI) interaction is relatively new.
Numerous research questions must be answered and implementation challenges overcome. For example, given an algorithm for estimating atrial fibrillation burden from data collected by wearables, how often should this information be conveyed to a clinician? The low frequency of primary care visits might defeat the purpose of continuous monitoring. Instead, we might consider a scenario in which the patient is monitored continuously and the clinician is alerted only when the AI has identified new actionable knowledge.
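As a concrete illustration, a minimal sketch of such an alerting rule is shown below: the wearable’s AF-burden estimates are summarized continuously, but the clinician is notified only when the burden crosses a threshold or has changed meaningfully since the last alert. The threshold values, class name, and function name are illustrative assumptions, not part of any validated or deployed system.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical thresholds: real values would be set by clinical guidance,
# not by this sketch.
AF_BURDEN_ALERT_THRESHOLD = 0.10   # fraction of monitored time spent in AF
MIN_CHANGE_TO_REALERT = 0.05       # suppress re-alerts for small fluctuations


@dataclass
class AfBurdenSummary:
    window_start: datetime
    window_end: datetime
    af_burden: float  # fraction of the window classified as AF by the wearable algorithm


def should_alert_clinician(current: AfBurdenSummary,
                           last_alerted_burden: Optional[float]) -> bool:
    """Decide whether a new summary counts as 'actionable knowledge' worth
    pushing to the clinician, rather than waiting for the next visit."""
    if current.af_burden < AF_BURDEN_ALERT_THRESHOLD:
        return False
    if last_alerted_burden is None:
        # First time the burden crosses the threshold: alert.
        return True
    # Otherwise, re-alert only if the burden has changed meaningfully.
    return abs(current.af_burden - last_alerted_burden) >= MIN_CHANGE_TO_REALERT
```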
Once the AI alerts the clinician, how should these data be presented? As new data streams come online and are integrated into the electronic health record (EHR), we will also require education around what actions might be considered appropriate based on the data. For example, consider an AI system designed to diagnose atrial fibrillation and make recommendations about anticoagulation based on its calculation of the CHA2DS2-VASc score. This score is a clinical prediction rule (CPR) that estimates the risk of stroke in patients with atrial fibrillation. The system also uses the HAS-BLED score, a CPR that estimates a patient’s risk of major bleeding. Use of this system could result in significant practice variation depending on clinicians’ comfort with the algorithm and their subsequent uptake of the algorithm’s recommendations.25
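Because both scores are simple additive rules, the calculation such a system would perform can be sketched in a few lines. The point values below follow the published definitions of CHA2DS2-VASc and HAS-BLED; the function signatures and variable names are illustrative assumptions, and a real system would still need to resolve each criterion from structured EHR data, which is itself nontrivial.

```python
def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, prior_stroke_tia: bool,
                 vascular_disease: bool) -> int:
    """CHA2DS2-VASc stroke-risk score for a patient with atrial fibrillation."""
    score = 0
    score += 1 if chf else 0                 # C: congestive heart failure
    score += 1 if hypertension else 0        # H: hypertension
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)  # A2 / A: age bands
    score += 1 if diabetes else 0            # D: diabetes mellitus
    score += 2 if prior_stroke_tia else 0    # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0    # V: vascular disease
    score += 1 if female else 0              # Sc: sex category (female)
    return score


def has_bled(uncontrolled_hypertension: bool, abnormal_renal_function: bool,
             abnormal_liver_function: bool, prior_stroke: bool,
             bleeding_history: bool, labile_inr: bool, age_over_65: bool,
             antiplatelet_or_nsaid_use: bool, alcohol_excess: bool) -> int:
    """HAS-BLED major-bleeding-risk score (each criterion contributes 1 point)."""
    return sum([
        uncontrolled_hypertension,   # H: hypertension
        abnormal_renal_function,     # A: abnormal renal function
        abnormal_liver_function,     # A: abnormal liver function
        prior_stroke,                # S: stroke
        bleeding_history,            # B: bleeding history or predisposition
        labile_inr,                  # L: labile INR
        age_over_65,                 # E: elderly
        antiplatelet_or_nsaid_use,   # D: drugs (antiplatelets/NSAIDs)
        alcohol_excess,              # D: alcohol excess
    ])
```

For instance, a 72-year-old woman with hypertension and no other risk factors would score 3 on CHA2DS2-VASc (1 point each for age 65 to 74, hypertension, and female sex); how a clinician acts on that number is precisely where the practice variation noted above arises.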
While promising, the clinician-AI dyad relies on the strength of both the clinician-patient relationship and the patient-AI relationship. If clinicians do not have a strong rapport with patients, it will be difficult to identify AI “blind spots” (i.e., settings in which the AI fails). Moreover, it may be difficult to convince patients to follow through with recommended diagnostic plans that come directly from AI. If the patient does not have a strong relationship with the AI and does not engage with it (e.g., does not charge or wear the sensors), then the AI may offer limited utility.
Even so, AI has the potential to recognize and account for limitations in the clinician-patient relationship (e.g., recent work using AI to coach clinicians on their use of language and on motivational interviewing).26,27 In addition, if the clinician-patient relationship is strong but the patient-AI relationship is weak, the AI might lean on the clinician-patient relationship to motivate a change in patient behavior toward the AI. For example, if a patient is unresponsive to alerts or notifications from a wearable AI-based device, the AI might prompt the clinician to encourage the patient to respond or to take the recommended action.