A lavender awareness ribbon, a symbol for all cancers, in a doctor's hand
(Credit: Chinnapong / Getty Images)

As large language models advance, artificial intelligence systems can generate language for healthcare applications that appears empathetic.

But researchers and healthtech experts are still very uncomfortable with the idea of deploying AI for some of the more sensitive, serious health issues seniors may face.

Two recent studies evaluated AI's use in two especially sensitive care contexts: mental health and cancer care.

Even mild forms of cancer can be frightening for seniors and may require significant treatment regimens. Roughly 10% of all nursing home residents will receive a cancer diagnosis during their stay.

Mental health statistics vary because of issues with self-reporting, but reliable reports indicate that a majority (as many as 75%) of seniors in long-term care facilities have some mental health issue, leading some experts to call the situation in senior living a “mental health crisis.”

While the idea of having AI solutions to cover care or communication gaps sounds appealing — after all, a smartphone app can be available 24/7 — using these tools as anything more than bridges to actual human care would be problematic, researchers said.

“When emotional AI is deployed for mental health care or companionship, it risks creating a superficial semblance of empathy that lacks the depth and authenticity of human connections,” digital tech expert A.T. Kingsmith wrote in a report for the online magazine The Conversation.

Even current AI models that mimic empathy gloss over issues of cultural sensitivity, Kingsmith wrote. And while it is easy to say a provider might draw the line at using AI tools only as a short-term stopgap, he added, it is a slippery slope from there to simply assuming AI can substitute for human caregivers generally.

Kingsmith noted he wasn’t against AI but hoped to promote its “ethical development” and appropriate use cases.

In cancer care, a recent survey of oncologists found that many were wary of incorporating AI-based decision models. Three-quarters of survey participants said that cancer experts need to protect patients from biased or inaccurate AI models, but only 28% said they felt confident in identifying such inaccuracies.

This dovetails with a similar study from last year, which found that rather than flagging inaccuracies, clinicians were more likely to go along with an AI’s recommendation even when it was incorrect or biased.

“Most [survey respondents] agreed that patients should consent to [AI] use,” the study authors wrote. “These findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions as well as decisional responsibility when problems related to AI use arise.”

The findings on AI in cancer care were published last week in the journal JAMA Network Open.