
Bridging Data Divides: AI as a New Paradigm for Unstructured Data

Published on Dec 31, 2025 in Pricing • 3-minute read
Sergey Filimonov
Cofounder at Matrisk AI

The explosive growth of large language models (LLMs) has outpaced even the most optimistic predictions. Yet, despite their widespread adoption, much of the public discourse remains fixated on high-profile consumer applications—such as ChatGPT or AI-driven customer service—leaving actuaries to question how these technologies can be practically applied to insurance and risk assessment. While actuaries have long relied on traditional machine learning approaches for pattern recognition and risk modeling, LLMs offer unique capabilities that may be particularly well suited to addressing long-standing challenges in actuarial science.

LLMs offer powerful solutions for unifying disparate data sources by synthesizing information that was previously too fragmented or unstructured for effective analysis. This is particularly valuable in actuarial work, where sparse data present a fundamental challenge. Unlike widely adopted models—like generalized linear models (GLMs), regressions, or decision trees—which rely on structured numerical data, LLMs excel at processing unstructured inputs such as policy documents, claims narratives, or adjuster notes.

By extracting key details from these text-heavy sources, LLMs can enrich traditional risk models in ways that were previously impractical. For example, they might identify nuanced policy terms or locate subtle risk factors hidden within adjuster notes, then convert those findings into features compatible with established techniques such as GLMs or decision trees. This could provide actuaries with a more holistic view of risk drivers, potentially leading to more accurate risk assessments and pricing strategies.
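As a minimal sketch of this kind of pipeline (the function names, keyword rules, and feature schema below are purely illustrative, not drawn from any particular product): an LLM reads a free-text adjuster note and returns structured risk flags, which are then flattened into a 0/1 feature vector that a GLM or decision tree could consume. Here the LLM call is a stub so the example runs end to end; in practice it would invoke a real model API with a prompt asking for JSON in the expected schema.

```python
import json

# Illustrative schema: the binary risk flags we want the LLM to extract.
FEATURE_SCHEMA = ["prior_water_damage", "roof_age_over_20", "disputed_liability"]

def call_llm(note: str) -> str:
    # Stand-in for a real LLM call. Here simple keyword rules fake the
    # model's output so the sketch is self-contained and runnable.
    flags = {
        "prior_water_damage": "water" in note.lower(),
        "roof_age_over_20": "roof" in note.lower(),
        "disputed_liability": "disput" in note.lower(),
    }
    return json.dumps(flags)

def note_to_features(note: str) -> list:
    """Convert an adjuster note into a 0/1 feature vector for a GLM."""
    flags = json.loads(call_llm(note))
    return [int(bool(flags.get(name, False))) for name in FEATURE_SCHEMA]

note = "Insured reports water intrusion near a 25-year-old roof; liability disputed."
print(note_to_features(note))  # → [1, 1, 1]
```

The point of the fixed schema is that the downstream model sees a stable, ordered set of columns regardless of how each note is worded.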

However, since LLMs are inherently stochastic and their inner workings often opaque, incorporating them presents its own set of unique challenges. The “black box” nature of these models contrasts sharply with the transparency traditionally valued in actuarial work. Consequently, effective use of LLMs requires a nuanced understanding of both their capabilities and limitations; additionally, new methodologies are needed to ensure reliability.
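One simple guardrail along these lines (our illustration, not a methodology prescribed here): validate every LLM response against the expected schema before it enters the rating pipeline, and route anything malformed to a retry or to human review rather than silently coercing it. The schema and field names below are hypothetical.

```python
import json
from typing import Optional

# Illustrative schema: the exact boolean fields the LLM must return.
EXPECTED_KEYS = {"prior_water_damage", "roof_age_over_20"}

def validate_llm_output(raw: str) -> Optional[dict]:
    """Accept the response only if it is JSON with exactly the expected
    boolean fields; otherwise return None so the caller can retry the
    prompt or flag the record for human review."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if set(parsed) != EXPECTED_KEYS:
        return None
    if not all(isinstance(v, bool) for v in parsed.values()):
        return None
    return parsed

good = '{"prior_water_damage": true, "roof_age_over_20": false}'
bad = '{"prior_water_damage": "yes"}'
print(validate_llm_output(good) is not None)  # → True
print(validate_llm_output(bad) is None)       # → True
```

Checks like this do not make a stochastic model deterministic, but they do bound the ways its output can corrupt downstream actuarial calculations.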

Download our white paper: "Applying Machine Learning to Actuarial and Pricing Workflows"


About the author

Sergey Filimonov, Cofounder at Matrisk AI

A regular speaker at CAS conferences, Sergey translates AI into actuarial practice for carriers nationwide. His essays have been reshared well beyond insurance—by Google executives, the founder of LlamaIndex, and other industry leaders—reflecting his standing as a cross-sector authority on AI. Before Matrisk, he scaled production ML models across Honda's U.S. operations.