States start to take action on AI and insurance
Although traditional artificial intelligence has been around for some time, generative AI has taken off like a rocket. So has a model insurance AI framework issued last December, albeit with much less fanfare.
The National Association of Insurance Commissioners — a group of state insurance leaders providing expertise, data and analysis — adopted a model bulletin on insurers’ use of AI systems. The model bulletin reminds insurers to comply with existing state and federal laws, sets expectations for the development and use of AI technology, and describes the processes and documentation expected of insurers using AI. Building on the NAIC’s 2020 document, Principles on AI, the model aims to bring uniformity among the states on this emerging issue.
So far, at least 11 states (and Washington, DC) have issued bulletins largely incorporating the NAIC language: Alaska, Connecticut, Illinois, Kentucky, Maryland, Nevada, New Hampshire, Pennsylvania, Rhode Island, Vermont and Washington.
Summary of NAIC model
The bulletin recognizes AI’s role in innovation and identifies key areas where AI is used, like product development, underwriting and claims management. The guidance is divided into two categories:
Regulatory guidance and expectations. Insurers are expected to develop a written AI systems (AIS) program, which includes a governance framework and a process for verifying and monitoring predictive models. The AIS program should address all AIS used across the insurance life cycle (e.g., design, underwriting, claim administration, fraud detection) at every stage of the AIS life cycle (from design through retirement). Protection of nonpublic information is key. Insurers should also focus on third-party AI use: contract terms should provide for audits and for cooperation in the event of an investigation.
Regulatory oversight and examination. The bulletin notes that a state insurance department’s oversight responsibilities include use of AI and offers a preview of the likely substance of those inquiries. Bottom line: proper documentation is vital.
Additional regulatory approaches
Other states have taken steps to address AI beyond the NAIC model. Colorado has been a forerunner: it passed an anti-discrimination insurance data law in 2021, issued regulations aimed at life insurers in 2023 and enacted a comprehensive AI consumer protection law in May. This month, an Illinois law imposed restrictions on the use of “algorithmic automated processes” in utilization review, and New York finalized an insurance bulletin highlighting concerns with unfair or unlawful AI discrimination. California published a 2022 bulletin warning insurers about the use of AI in light of allegations of racial bias and unfair discrimination. Utah passed a comprehensive AI consumer protection law in March. Nationally, President Biden issued an AI Executive Order last October.
While these laws focus on fully insured plans, sponsors of self-funded plans can take many of the recommendations and insights to heart. As with other compliance issues, AI carries fiduciary risk implications.
Possible next steps
In 1996, HIPAA’s administrative simplification provisions aimed to improve “the efficiency and effectiveness of the health care system.” Since then, many have seen automation as the future cure for the ailing US healthcare system. AI is the latest version of automation, but careful, responsible restraint is warranted. In other words, don’t apply the parking brake, but certainly tap the brakes before putting the pedal to the metal. Prudent plan sponsors might consider these steps:
- Discuss with insurers and other benefit vendors how AI is used in their processes, reviewing what controls are in place to ensure compliance with existing law. Request a copy of the AIS program, if available.
- Conduct a separate AI risk assessment in all internal and external benefit processes and systems; the National Institute of Standards and Technology published a risk management framework in April.
- Consider how other state and federal laws and regulations (like the final ACA § 1557 rules) affect AI use.
- Gauge participant experience with AI and make adjustments accordingly.
- Continue to monitor legislative and regulatory developments, especially related to the model bulletin.