Structural Data Issues Undermine AI Adoption in Insurance
The ambition is nearly unanimous. The execution is not.

A new report from financial operations firm Autorek — drawing on surveys of 250 insurance managers across the UK and US — puts hard numbers on a problem the industry has been circling for years: structural data dysfunction is the primary reason AI adoption in insurance remains shallow. The report, Insurance Operations & Financial Transformation 2026, isn't an AI forecast. It's a diagnostic.

And the diagnosis is unflattering.

The Numbers Behind the Gap

The headline contrast is stark: 82% of firms surveyed expect AI to dominate the sector. Only 14% have fully integrated it into their operations. Six percent report no AI use at all.

That gap doesn't exist because insurance companies lack interest or investment. It exists because the foundational layer — clean, governed, accessible data — isn't there.

The operational picture the report paints is one of compounding inefficiencies. Fourteen percent of operational budgets go toward correcting manual errors. Nearly a quarter of respondents link the complexity of reconciliation directly to rising costs. A similar share ties inefficiencies to governance and audit risk. Close to half of the firms are running settlement cycles longer than 60 days.

Firms surveyed manage an average of 17 distinct data sources — a number that increases further after mergers and acquisitions, which are common in the sector. That fragmentation doesn't just slow operations. It actively constrains what AI can do, because AI systems built on inconsistent, siloed data inherit all of the chaos those systems contain.

Why This Is a Data Problem, Not a Technology Problem

The report is careful to distinguish between AI readiness and AI capability. The tools exist. In many cases, the infrastructure to deploy them effectively does not.

Legacy system integration, fragmented data, and limited internal expertise are identified as the three primary barriers. Of these, data fragmentation carries the most weight — because it undermines everything else. Governance frameworks can't function coherently when the data estate they're meant to govern is piecemeal. Automation, whether AI-driven or rules-based, doesn't scale cleanly on a fractured architecture. Costs rise rather than fall.

This is the core tension the report identifies: reconciliation processes are essentially structured, bounded workflows — exactly the kind of domain where AI performs well and can demonstrate value quickly. But those workflows sit on top of data systems that are anything but structured. The mismatch is where the promised efficiency gains stall.

The authors note that this situation persists despite the causes being well documented in prior publications. Awareness of the problem has not, on its own, resolved it. The pace of resolution is constrained by legacy technology and the operational overhead of running a live business while rebuilding its foundations.

The Cost of Waiting

Transaction volumes in the sector are projected to rise roughly 29% over the next two years. Without structural changes to how data is managed and processed, the report suggests operational expenditure will scale alongside volume — which eliminates much of the efficiency argument for AI in the first place.
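The arithmetic behind that warning is simple enough to sketch. Only the 29% volume projection comes from the report; the indexed baseline of 100 and the 20% automation saving below are illustrative assumptions, not figures from the survey.

```python
# Illustrative back-of-envelope only. The 29% volume growth is the report's
# projection; the baseline index and the automation saving are assumed.

volume_growth = 0.29   # projected two-year rise in transaction volumes
baseline_opex = 100.0  # today's operational expenditure, indexed to 100

# If cost per transaction stays flat, opex simply scales with volume:
opex_unchanged = baseline_opex * (1 + volume_growth)

# Hypothetical scenario: automation on clean data cuts the cost per
# transaction by 20%, so opex grows far more slowly than volume:
opex_automated = baseline_opex * (1 + volume_growth) * (1 - 0.20)

print(round(opex_unchanged, 1))  # 129.0
print(round(opex_automated, 1))  # 103.2
```

The point of the sketch is the gap between the two lines: without a structural change in cost per transaction, a 29% volume rise means roughly 29% more opex, which is exactly the outcome AI adoption is supposed to prevent.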

The firms that resolve data fragmentation first, according to the report's authors, will widen the performance gap over those that don't. That framing is worth taking seriously. This isn't a situation where late movers can simply adopt better tools once they're proven. The advantage compounds in favor of whoever builds the clean data layer first, because that layer is what makes automation — and eventually AI — economically viable at scale.

The report points to cloud-based AI platforms as a potential accelerant, particularly for firms that can't undertake full legacy modernization in the near term. The argument is that cloud infrastructure can help structure fragmented data sources in ways that in-house systems often cannot, without requiring a wholesale replacement of existing architecture.

A Sector-Specific Story With a Universal Lesson

Insurance is a particular case — heavily regulated, operationally complex, built on decades of legacy infrastructure. But the dynamic the Autorek report describes is not unique to the industry.

Any organization expecting AI to resolve operational inefficiency when that inefficiency is rooted in data fragmentation and poor governance is working with an incomplete theory of change. AI amplifies the quality of the data and processes it operates on. It does not replace the work of building them.

The insurance sector is simply further along in confronting that reality, because the gap between its AI expectations and its AI execution is now measurable in survey data.

For organizations working through their own version of this — whether in financial services, retail, or any data-intensive operation — the sequencing question matters: data standardization and governance come before scalable automation. That order isn't a constraint. It's the prerequisite.

If you're assessing where your organization sits in that sequence, Winsome Marketing's AI strategy team can help map the gap. The conversation starts at winsomemarketing.com.
