AI And Humanistic Care

Takeaway

In a clinical world that is increasingly algorithmic, what AI cannot do is truly understand human emotions. Computers will never appreciate the joy and privilege of connecting with patients.

“Please don’t confuse your Google search with my medical degree.” I saw this snarky quotation on a coffee mug a few years ago. It was relatable: I know how challenging it can be to find reliable medical information on the internet. Yet it’s common for patients and families (including mine) to look up a set of symptoms online and become fearful of some awful diagnosis that a good clinician could quickly exclude with a few strategic questions or a simple examination.

Times are changing fast. Artificial intelligence (AI) is everywhere, and the name is no longer a misnomer. AI actually can seem intelligent, creative, and, thanks to human trainers, even empathic and personable. Nowadays, a customer service chatbot is more polite than a harried doctor: “I’m sorry you had to wait so long to speak with a representative, but I hope that I can help you solve your problem! I know your time is valuable. Let’s get to the bottom of the issue with your new printer. Can you tell me a little more about what’s going on?” In the clinical setting, computerized calculators and decision support tools are everywhere, and the impact of AI on the practice of medicine will only increase with time. Should we, as doctors, embrace this revolution or feel threatened? Will we become anachronistic or obsolete? How will our patients think about us and the work we do? I, for one, welcome this new era.

First, the quality of healthcare will improve. Doctors are human. We make mistakes. We have biases. We have bad days. We make incorrect assumptions. We get hangry. New drugs, procedures, and treatment paradigms emerge almost hourly. And we’re usually short on time. These realities aren’t a recipe for consistent, high-quality patient care, yet our work is too important to accept preventable human error as inevitable. As decision support becomes more sophisticated, physicians will become better diagnosticians, more consistent prescribers, and more thorough health maintenance experts.

Second, AI can help patients and their families become more effective and empowered members of the healthcare team. Patients now have the tools to learn about their bodies, their illnesses, and the available therapies. A vigilant, curious, engaged patient knows the right questions to ask, which symptoms or side effects to watch for, what indicates that a treatment is working, and the limits of what healthcare can deliver. I’m grateful when my patients and their families have educated themselves, because that leads to better patient-centered decisions, a less hierarchical dynamic between doctor and patient, and (hopefully) a more successful treatment plan.

Finally, in a clinical world that is increasingly algorithmic, what AI cannot do will be obvious to patients and their families. A robot can’t meet a daughter’s scared eyes or sit on the bed to comfort a patient with a new cancer diagnosis. A computer will never experience sorrow or pain, or feel the joy of a successful surgery, a cured illness, or a healthy newborn. Physicians understand these emotions and have the privilege of sharing them with patients. These human aspects of being a doctor existed long before computers, and they can be diminished by computers only if we forget who we are.

This piece expresses the views solely of the author. It does not necessarily represent the views of any organization, including Johns Hopkins Medicine.
