New AI-specific insurance exclusions underscore risks associated with generative artificial intelligence

Anita Byer

Generative Artificial Intelligence (GAI) is a type of artificial intelligence that creates new content (text, images, audio, video) in response to basic user prompts. Many organizations are now exploring ways to leverage this transformative technology to advance operational and business objectives. But despite its seemingly limitless upside, the operational use of GAI introduces new, potentially significant liability exposures. The consequences of failing to control organizational risks associated with GAI are underscored by the fact that insurance companies are beginning to include AI-specific coverage exclusions in their commercial general liability policies.

The operational use of GAI can expand existing and generate new liability exposures. According to Verisk, a data and analytics company, these exposures may include the following.

Copyright infringement and invasion of privacy. If the datasets used to train a GAI model contain copyrighted or sensitive information (many do), such information may end up in the new content generated by GAI. This could result in claims of copyright infringement and privacy invasions.

Professional errors & omissions. AI models are increasingly being trained to dispense expert-level professional advice, but their output continues to include mistakes or “hallucinations.” If the professional guidance provided by an AI chatbot is flawed or incorrect, the organization may be exposed to claims of professional malpractice.

Products liability. GAI is increasingly being used in design and product manufacturing applications. With its current limitations, GAI may create or produce defective or poorly designed products capable of causing serious damage or injury to those who use or are otherwise exposed to them.

Bias and discrimination. AI tools have been known to recreate patterns of bias and discrimination that are present in their training datasets. GAI output that includes biased or discriminatory components may result in claims of unlawful discrimination.

Compliance and regulatory risks. GAI is increasingly being used to assist with critical compliance and regulatory functions. But as we know, GAI’s output can be flawed or incorrect. If, for example, a public company submits a false or misleading document created by GAI to the SEC, the lack of due diligence could expose the organization to D&O and professional liability claims.

Importantly, the Insurance Services Office (ISO), an advisory organization that provides standard policy forms and rating information to insurers, recently introduced Generative Artificial Intelligence exclusions for commercial general liability policies. Under these exclusions, claims for bodily injury, property damage, and personal and advertising injury that arise out of GAI are not covered by insurance.

These exclusions may present a significant problem for many organizations because they apply broadly to all claims arising out of “generative artificial intelligence,” defined as a machine-based learning system or model, trained on data, with the ability to create content or responses, including but not limited to text, images, audio, video, or code. Many anticipate that general liability policies will increasingly include AI-specific coverage exclusions going forward. Organizations using generative artificial intelligence operationally must ensure that their insurance covers the new and expanded liability exposures created by GAI.