
New state law will restrict health insurers' use of AI

California is going after the algorithms insurers use to make prior authorization and other coverage decisions, with a new law that will limit how formulas generated by artificial intelligence (AI) are used.

The state will also begin requiring providers to inform patients when communications they receive are generated by AI.

The laws reflect a growing trend among state lawmakers to regulate more strictly the use of artificial intelligence in healthcare and other fields in the absence of federal action.

The Physicians Make Decisions Act (SB 1120) takes effect January 1. It was supported by dozens of physician organizations and medical groups, the California Hospital Association, and several patient advocacy groups. Insurance industry groups opposed the bill.

“As physicians, we recognize that artificial intelligence can be an important tool for improving healthcare, but it should not replace physician decision-making,” California Medical Association (CMA) President Tanya W. Spirtos, MD, said in a statement.

The new law ensures that the human element will always determine quality medical treatments for patients, said state Sen. Josh Becker (D-Menlo Park), who sponsored the legislation.

“An algorithm does not fully know and understand a patient’s medical history and needs and can lead to erroneous or biased decisions about medical treatment,” he said.

The law requires guardrails

The new law requires that the use of AI or any algorithms be based on the patient’s medical history and the individual’s clinical situation. A decision cannot be based solely on a group data set, cannot replace a physician’s decision, and must be approved by a human physician.

The algorithm must be “fairly and equitably applied,” according to the law.

Algorithms have the potential to be biased, Sara Murray, MD, vice president and chief health AI officer for UCSF Health, told Medscape Medical News. She cited a recent study in Science, which found that decisions based on an algorithm widely used by health systems (not insurers) meant that Black patients who were sicker than White patients received less care.

The law seeks to address the data used to train insurers’ algorithms. “AI tools are only as accurate as the data and algorithm inputs that go into them,” wrote Carmel Shachar, JD, MPH; Amy Killelea; and Sara Gerke in Health Affairs.

“It’s really important to be transparent about the data used as a training set, as well as making sure it matches the population the algorithm is actually being used with,” Shachar, a clinical assistant professor of law at Harvard Law School in Cambridge, Massachusetts, told Medscape Medical News.

Having human input on AI-generated decisions is important, but “it also has risks,” Murray said. “We can become overly dependent on these tools, and we may not see the bias if an algorithm is giving us biased results.”

A ProPublica investigation in 2023 claimed that a Cigna algorithm allowed doctors to quickly reject claims on medical grounds without reviewing patient records. The publication reported that doctors employed by Cigna rejected more than 300,000 claims in a 2-month period, spending an average of 1.2 seconds on each.

California is “reacting to real fears,” she said.

Federal oversight is lacking

While AI used to detect disease and improve diagnosis and treatment is regulated by the US Food and Drug Administration, the AI tools targeted by SB 1120 “are not subject to the same scrutiny and have little independent oversight,” said Anna Yap, MD, an emergency physician in Sacramento, when she testified earlier in 2024 in favor of SB 1120 on behalf of the CMA.

The California law “is a good first step,” Shachar said. Algorithms “have been kind of a blind spot in our regulatory system,” she said. The new law “empowers state regulators to act and provides some sort of accountability and requirements for how insurers implement their AI,” she said.

Shachar and her colleagues noted that AI has the potential to streamline and accelerate prior authorization decision-making.

Neil Busis, MD, a neurologist at New York University Grossman School of Medicine in New York City, agreed in a paper in JAMA Neurology. “If it can be trained with the appropriate data, AI can improve prior authorization by reducing administrative burdens, improving efficiency, and improving the overall experience for patients, clinicians, and payers,” he wrote.

In a 2022 report, McKinsey & Company touted AI’s potential to make prior authorization more efficient. But the authors noted that the AI should be monitored to ensure it does not learn from biased data sets that “could lead to unintended or inappropriate decisions,” particularly for patients of lower socioeconomic status. The report concluded that “highly experienced clinicians will remain the best PA decision makers.”

While the American Medical Association (AMA) has not taken a position on SB 1120, in 2023 the organization adopted a similar policy requiring AI-based algorithms to use clinical criteria and include peer review by physicians and other healthcare professionals with expertise in the service under review and no incentives to deny care.

Marilyn Heine, MD, a member of the AMA Board of Directors, said at the time that even as AI streamlines prior authorization, the volume is increasing. “The bottom line remains the same: We need to reduce the number of things that are subject to prior authorization,” she said.

Shachar and colleagues wrote that AI could drive even more reviews. “We may see ‘review creep,’” they wrote.

Lawsuits against insurers over the use of AI

In the absence of regulation, several lawsuits have been filed against insurers for using AI-based algorithms.

Families of two deceased Medicare Advantage beneficiaries who lived in Minnesota sued UnitedHealth in 2023, saying the company’s algorithm had a 90% error rate and was used illegally, according to a CBS News report.

The US Senate Permanent Subcommittee on Investigations reported in October that its in-depth investigation found that insurers were using automated prior authorization algorithms to systematically deny post-acute care services to Medicare Advantage enrollees at rates far higher than denials of other types of care for other policyholders.

In March, an individual filed a class action suit against Cigna for using its algorithm to reject claims based on information reported by ProPublica.

Shachar said litigation is not a satisfactory way to regulate algorithms, in part because “you have to wait for the bad.” The tort system is still working out how various aspects of the law will apply to AI used by insurers, she added.

More states are likely to follow California’s lead, Shachar said.

An AMA spokesperson agreed. “The AMA anticipates future legislative activity in 2025 as we see an increased number of reports of health plans using AI to systematically deny claims,” the AMA’s RJ Mills told Medscape Medical News.

New rules for AI-generated provider communications

California’s governor also signed AB 3030, which requires AI-generated patient communications to indicate that they were generated by AI, unless the communication has first been read and reviewed by a licensed or certified healthcare provider.

Murray said UCSF Health is already doing just that.

The health system has been testing the use of AI to help craft physicians’ responses to patient messages, with the goal of helping them respond more quickly. The messages include text informing patients that AI was used to help the doctor and that the physician still reviews each communication.

“We just wanted to be very transparent with patients,” Murray said.

AI “is going to be really good for healthcare,” she said. But California’s new laws were needed to provide “guardrails.”

Shachar and Murray reported no relevant financial relationships.

Alicia Ault is a freelance journalist based in Saint Petersburg, Florida, whose work has appeared in publications including JAMA and Smithsonian.com. Find her on X @aliciaault.