
The American Medical Association on Wednesday called for greater regulatory oversight of insurers’ use of artificial intelligence in reviewing patient claims and prior authorization requests, an issue that affects many senior care and senior living residents. The action follows the AMA’s pledge Tuesday to create guidance on the use of AI in healthcare.

AI is partly to blame for increasing Medicare Advantage denials, especially among beneficiaries in need of skilled nursing care, according to a report from STAT News. The report accused insurers of using “unregulated predictive algorithms, under the guise of scientific rigor, to pinpoint the precise moment when they can plausibly cut off payment for an older patient’s treatment.”

Additionally, the federal government has tried to reduce the number of prior authorization and other denials to address consumer complaints.

The AMA’s new policy calls for health insurers using AI to implement a “thorough and fair process that is based on clinical criteria and includes reviews by physicians and other healthcare professionals with expertise for the service under review and no incentive to deny care,” according to a statement.

“While the AMA supports automation to speed up the prior authorization process and cut down on the burdensome paperwork required by physicians, the fact remains that prior authorization is overused, costly, inefficient and responsible for patient care delays,” the press release said.

This is the AMA’s latest effort to rein in AI’s use in healthcare. The guidance announced Tuesday will include recommendations for appropriately harnessing AI’s potential to benefit patients and to reduce the administrative burden on healthcare administrators and physicians. AMA physicians are developing recommendations on the consequences of relying on AI-generated medical advice and content that could be inaccurate, and they will advise policymakers on steps to protect patients from misinformation. The guidelines will also address the safe and unbiased use of AI technologies, including large language models such as ChatGPT.