healthcare workers sitting around a table
(Credit: shapecharge / Getty Images)

Artificial intelligence, while an effective tool for patient care and staff management in senior living, has also come under fire for its potential to produce inaccuracies and other harmful consequences when used incorrectly. On Tuesday, the American Medical Association announced that it would create recommendations and guidelines on the use of AI to protect patients and physicians.

This cautionary step follows earlier acknowledgements that, while beneficial in many ways, AI must be monitored closely across its various applications in the senior care and living sectors.

Announced at the AMA’s annual meeting in Chicago, the association’s new guidance will offer recommendations for appropriately harnessing AI’s potential to benefit patients and reduce the administrative burden on healthcare administrators and physicians.

AMA physicians will develop recommendations on the consequences of relying on AI-generated medical advice and content that could be inaccurate, and will offer advice for policymakers on steps to protect patients from misinformation. The guidelines will include information on the safe and unbiased use of AI technologies, including large language models such as ChatGPT.

“AI holds the promise of transforming medicine,” said AMA Trustee Alexander Ding in a statement. “As scientists, we want to use our expertise to structure guidelines and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation. We’re trying to look around the corner for our patients to understand the promise and limitations of AI. There is a lot of uncertainty about the direction and regulatory framework for this use of AI that has found its way into the day-to-day practice of medicine.”

In senior living, AI is being used for speech recognition, fall prevention, the detection and treatment of Alzheimer’s disease, and the reduction of staff burnout.

But while these new technologies are beneficial, experts have noted AI’s potential to deepen inequities and mistrust in senior living and healthcare. Researchers already have found preexisting algorithmic biases in the healthcare system, such as in tests for kidney disease. Racial bias also can be embedded in AI tools for speech and facial recognition, which can exacerbate inequities, and prediction algorithms can produce inaccurate results that affect healthcare treatment outcomes, experts note.