Insurers must verify that their use of consumer data and artificial intelligence is not discriminatory, and they will be held responsible if outside tools produce biased underwriting or pricing decisions, according to a July 11 memo from the New York State Department of Financial Services.
Insurers should not incorporate external data or AI systems unless a comprehensive assessment shows that their use does not “unfairly discriminate between similar individuals,” the Department of Financial Services wrote in the memo, released last week. If the assessment shows a disproportionate impact on a demographic group, insurers should try to find less biased alternatives, the document says. The circular letter, an agency document that outlines regulatory expectations, applies to the approximately 1,960 companies that DFS supervises, including life, property and casualty insurers, as well as some health insurers.
DFS Superintendent Adrienne Harris said the guidance will ensure that “the implementation of AI in insurance does not perpetuate or exacerbate systemic biases resulting from illegal or unfair discrimination, while protecting market stability.”
Consumer Reports, a consumer advocacy group, said that while traditional analytic methods can be discriminatory, the risks grow when companies use opaque formulas to analyze large amounts of data and automate decisions. AI systems can be fed partial, inaccurate or unrepresentative data, or can learn from past biased behavior.
For example, life insurers have been permitted to take into account consumers' body mass index, a measure of weight relative to height, but the American Medical Association has advised against relying solely on BMI because it was developed using data drawn almost exclusively from white patients. Chuck Bell, director of advocacy programs at Consumer Reports, said insurers may now need to reevaluate how they factor in BMI.
“If you're using a data variable and there are questions about whether it has a discriminatory effect, it's important for the insurance company to consider that … and do proper testing and evaluation of the algorithm,” Bell said.
Eric Linzer, president and CEO of the New York Health Plan Association, which represents health insurance companies, said DFS's guidance is appropriately tailored. AI is not typically used in underwriting and pricing health plans, Linzer said, but it is used to assess claims for fraud and identify gaps in care.
“We certainly support a flexible framework that allows plans to develop appropriate tools … that seek to enhance, not replace, human decision-making and expertise,” Linzer said. “AI has the potential to make the healthcare system work better and more cost-effectively.”