
Artificial Intelligence: Artificial? Intelligent? Neither or Both?

Takeaway

AI can complement the work of mental health professionals, in addition to performing clerical tasks and analyzing big data for insightful trends.

I was partially wrong. The care concierge had just finished telling me about an employee in his mid-50s who had engaged with Bea, a mental health chatbot and part of The Johns Hopkins Balance™ program. Balance is an employer-purchased, Hopkins-infused mental health support offering. The employee had never felt comfortable speaking up about his mental health concerns, but the anonymity and ease of speaking with a chatbot got him started. He went on to call our care concierge and ultimately connected with a long-term psychotherapist. All good things, but I was also partially right . . . chatbots aren’t the panacea for mental health and well-being. 


Artificial intelligence (AI) and mental health 

AI is rapidly changing the landscape of mental health. It offers a range of possibilities, from improving diagnostic accuracy to providing real-time help to patients. AI can detect patterns and trends in data that are difficult for humans to identify. It can help predict who is at risk for developing mental health issues and provide targeted interventions. AI-powered chatbots and virtual assistants can offer 24/7 support, improving access to mental health resources. However, concerns around the potential bias in AI algorithms and privacy issues must be addressed. The use of AI in mental health must be accompanied by ethical guidelines to ensure that individuals’ mental health data is protected and that AI tools are used responsibly.


Who said what to whom? 

The first paragraph you just read was all me, human errors and all. The second paragraph was created by OpenAI’s free ChatGPT in response to the query: “Give me 100 words on AI and mental health.” If you can ignore the irony of verifying that you’re a human, and put aside any privacy concerns, it’s worth signing up for ChatGPT. Worst case scenario, it fills 15 minutes of your already busy day with surprisingly nuanced responses. Best case scenario, you figure out a way it can make your life easier (AI-derived prior authorization responses at your own risk).


Understand how far we’ve come 

Today’s AI is much more than Microsoft Word’s Clippy, your bank’s virtual assistant, or even Siri and Alexa. AI can mimic human interaction. A recent study found that blinded healthcare professionals preferred AI-generated responses over physician responses to patient questions posted on a public social media forum, rating them higher for both quality and empathy. Does this mean we can use AI to respond to the ever-growing number of asynchronous messages patients send us through the electronic medical record? The answer is likely both yes and no.


The who, what, when, where, and why of AI use 

Perhaps the most important takeaway is that AI will play an increasing role in healthcare. AI’s use in healthcare isn’t new, but it is increasingly sophisticated and accessible. And perhaps that’s what has sparked the recent debate and interest. Use AI to optimize an outpatient surgery schedule? Great! AI to analyze big data for insightful trends? Even better! But completing a psychotherapy session using AI-driven natural language processing and machine learning algorithms without the patient knowing? And doing so because there aren’t enough mental health providers? Now you’re starting to raise some eyebrows. My suggestion? Stay educated. Stay alert. And stay human.


This piece expresses the views solely of the author. It does not necessarily represent the views of any organization, including Johns Hopkins Medicine.