Thomson Reuters CEO Steve Hasker on Building AI
At the AI Agent Conference in New York, Steve Hasker, President and CEO of Thomson Reuters, made an argument that I think every accounting firm, law firm, and professional services organization needs to hear. His session was titled "When Being Almost Right Isn't Good Enough," and that framing is precise. In regulated professional work, a confident wrong answer isn't a minor inconvenience. It's a professional liability event.

The session was a fireside chat format with Vanessa Liu of Appen, covering how Thomson Reuters has approached AI transformation across legal, tax, and accounting workflows — and why the path they've taken looks fundamentally different from a generic LLM deployment.

The System Around the Model Is Where the Real AI Race Is Being Run

Hasker's central argument appeared early and anchored everything that followed. "The system around the model is where the real AI race is being run." Not the model itself. Not the prompting strategy. The infrastructure of trusted data, domain-specific workflows, human verification, and accountability that determines whether a professional can actually rely on what the system produces.

This is the reframe that matters most for professional services. The AI tools being evaluated aren't primarily competing on model capability. They're competing on whether the outputs are authoritative, traceable, and accountable to professional standards. Speed is not the differentiator. Defensibility is.

General-Purpose Models Are Unsafe for Professional Work

Hasker was direct about a failure mode playing out across the legal industry right now. Law firms deploying generic AI tools have hallucinated legal citations, produced incorrect filings, and created professional risk for the attorneys who signed off on the outputs. "The stakes are much higher. You cannot rely on general-purpose models. The hallucination problem matters enormously."

This isn't a criticism of the models themselves. It's a description of what they are and what they require to be safe in high-stakes professional contexts. A general-purpose LLM generating plausible-sounding legal citations has no mechanism for knowing whether those citations actually exist. A domain-constrained system built on verified legal content, with retrieval grounded in authoritative sources, does. Those are not equivalent tools for professional work, even if their outputs look similar in a demo.
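To make the distinction concrete, here is a minimal sketch of what "retrieval grounded in authoritative sources" can mean in practice. All names here are hypothetical, not any Thomson Reuters API: the generation step may only draw on documents retrieved from a verified corpus, and the answer is rejected outright if it cites an authority that doesn't exist in that corpus.

```python
from dataclasses import dataclass

@dataclass
class Authority:
    """A verified source document, e.g. a case or statute."""
    citation_id: str   # e.g. "410 U.S. 113"
    text: str

class VerifiedCorpus:
    """Stands in for an authoritative legal content store."""
    def __init__(self, authorities: list[Authority]):
        self._by_id = {a.citation_id: a for a in authorities}

    def retrieve(self, query: str, k: int = 5) -> list[Authority]:
        # A real system would use vector or keyword search here;
        # this sketch just returns the first k documents.
        return list(self._by_id.values())[:k]

    def exists(self, citation_id: str) -> bool:
        return citation_id in self._by_id

def answer_with_grounding(query: str, corpus: VerifiedCorpus, llm) -> dict:
    """Generate an answer constrained to retrieved authorities, then
    refuse to release it if any citation is not in the corpus.
    `llm` is a hypothetical callable returning {"text": ..., "citations": [...]}."""
    sources = corpus.retrieve(query)
    context = "\n\n".join(f"[{a.citation_id}] {a.text}" for a in sources)
    draft = llm(f"Answer using ONLY the sources below.\n{context}\n\nQ: {query}")
    unverified = [c for c in draft["citations"] if not corpus.exists(c)]
    if unverified:
        # A general-purpose model has no equivalent of this gate:
        # a plausible but nonexistent citation would sail through.
        raise ValueError(f"Unverifiable citations: {unverified}")
    return draft
```

The gate at the end is the whole point: the demo outputs may look identical, but only one of the two systems can fail loudly instead of citing a case that was never decided.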

Proprietary Expert Content Is the Moat

Thomson Reuters' competitive position in professional AI rests on something that can't be quickly replicated: decades of trusted professional content, validated by thousands of legal, tax, and accounting experts, across every major jurisdiction and practice area.

"We have 20 years of user content. We have thousands of experts. We have content no one else has."

The moat is not the model. The moat is the proprietary knowledge corpus that grounds the model's outputs in verified professional reality. Any organization can access frontier models. Very few have the structured expert content required to make those models safe for regulated professional workflows. That content advantage compounds over time as the system learns from professional usage and expert feedback.

Human Verification Is Baked Into the Architecture

The systems Thomson Reuters builds are designed around a principle that Hasker returned to repeatedly: professionals must be able to inspect outputs, validate recommendations, and understand exactly where results came from. "We require verification and validation. Professionals need transparency. You must understand exactly where outputs came from."

This is explainability as a design requirement, not an afterthought. In legal, tax, and accounting work, a professional who can't explain or defend how they reached a conclusion is professionally exposed. An AI system that produces outputs without provenance — without the ability to trace the answer back to its sources — creates exactly that exposure. The architecture has to solve for traceability from the start.
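One way to read that requirement in code: an answer object that cannot be constructed without its provenance, so a professional can always walk from the conclusion back to the passages that support it. A minimal sketch, with hypothetical names, not the actual architecture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourceRef:
    """Pointer back to an authoritative passage."""
    document_id: str
    section: str
    excerpt: str

@dataclass(frozen=True)
class TracedAnswer:
    """An answer that cannot exist without a source trail."""
    text: str
    sources: tuple[SourceRef, ...]

    def __post_init__(self):
        if not self.sources:
            # Traceability enforced at construction time, not bolted on later.
            raise ValueError("Answer rejected: no supporting sources.")

# Constructing an unsourced answer fails loudly:
ok = TracedAnswer("The deduction applies.",
                  (SourceRef("IRC-179", "(b)(1)", "..."),))
# TracedAnswer("The deduction applies.", ())  # -> ValueError
```

Making provenance a type constraint rather than a logging feature is the difference between "we usually know where answers came from" and "an answer without sources cannot be produced."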

What AI Actually Does in Legal, Tax, and Accounting Workflows

Hasker was specific about what the systems do and what they don't do. On the do side: draft motions, conduct legal research, compare precedents, analyze years of tax returns for patterns and anomalies, flag optimization opportunities, automate compliance checking. "Compare the last ten years of your tax returns. The AI highlights opportunities. The mechanical work gets automated."

On the don't side: make judgment calls, take professional accountability, replace the human relationship with the client. "The professional remains accountable. Humans stay involved."

The design intent is clear and worth stating plainly: AI handles mechanical analysis, repetitive research, and synthesis. Humans handle judgment, interpretation, and accountability. Tax and accounting in particular shift from once-a-year filing processes toward continuous advisory systems — AI surfacing opportunities and anomalies year-round, professionals acting on them.
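As a toy illustration of that mechanical layer (field names are invented for the example, not any Thomson Reuters product): flagging line items in the latest return whose movement falls outside a simple statistical band across a decade of history, leaving the judgment on each flag to the professional.

```python
import statistics

def flag_anomalies(returns_by_year: dict[int, dict[str, float]],
                   z_threshold: float = 2.0) -> list[str]:
    """Flag line items in the latest year that deviate sharply from
    their own multi-year history. What to do with each flag is a
    judgment call that stays with the professional."""
    years = sorted(returns_by_year)
    latest = years[-1]
    flags = []
    for item, value in returns_by_year[latest].items():
        history = [returns_by_year[y].get(item, 0.0) for y in years[:-1]]
        if len(history) < 3:
            continue  # not enough history to say anything useful
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev and abs(value - mean) / stdev > z_threshold:
            flags.append(f"{item}: {value:,.0f} vs historical mean {mean:,.0f}")
    return flags
```

Run year-round instead of at filing time, this kind of check is what turns a once-a-year compliance exercise into a continuous advisory feed.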

Regulated Professions Are Facing a Talent Shortage That AI Has to Help Solve

Part of what makes AI adoption in legal, tax, and accounting structurally different from other industries: these professions are facing genuine talent shortages alongside increasing workflow complexity. "The legal profession has a talent shortage. Professional work is increasing dramatically."

AI in this context isn't competing with available labor. It's augmenting a workforce that doesn't have enough capacity for the work that exists. That changes the adoption dynamic considerably. The organizations that deploy professional AI well won't just be more efficient — they'll be able to do work that their current capacity wouldn't otherwise support.

Governance Is What Makes Enterprise AI Safe to Deploy

Hasker's opening section on enterprise governance produced one of the more memorable quotes of the conference. Describing a conversation with a CIO: "I want every developer using AI, but I don't want production chaos. You guys are going to be my control point."

That's the enterprise AI governance requirement stated plainly. Leadership wants broad adoption. They also want controlled deployment — vetted agents, approved infrastructure, centralized oversight, audit systems. Agent registries, approved model lists, and centralized control layers are becoming standard requirements for large enterprise AI deployments, not optional governance add-ons. "The enterprises require governance to make it work."
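A sketch of what that control point might reduce to in practice (names are illustrative, not any specific product): a registry that refuses to dispatch work unless both the agent and the model it runs on are approved, and that logs every dispatch for audit.

```python
from datetime import datetime, timezone

class AgentRegistry:
    """Central control point: only vetted agents on approved models run."""
    def __init__(self, approved_models: set[str]):
        self.approved_models = approved_models
        self.agents: dict[str, str] = {}   # agent_name -> model
        self.audit_log: list[dict] = []

    def register(self, agent_name: str, model: str) -> None:
        if model not in self.approved_models:
            raise PermissionError(f"Model not approved: {model}")
        self.agents[agent_name] = model

    def dispatch(self, agent_name: str, task: str) -> None:
        if agent_name not in self.agents:
            raise PermissionError(f"Unregistered agent: {agent_name}")
        self.audit_log.append({
            "agent": agent_name,
            "model": self.agents[agent_name],
            "task": task,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        # ... hand the task to the agent runtime here ...
```

The registry doesn't slow adoption down; it's what makes broad adoption and controlled deployment compatible with each other.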

The Professional Role Is Being Reshaped, Not Eliminated

Hasker's philosophical position on the future of professional work was consistent with the broader conference theme but more specifically grounded in regulated industries. "The repetitive work will disappear. Professionals focus more on advisory work."

The lawyers, accountants, and tax professionals who succeed in an AI-augmented environment will be the ones who lean into what AI can't replace: client judgment, strategic interpretation, relationship management, and professional accountability. The mechanical layer of their work — research, document review, compliance checking, pattern analysis — moves to AI systems. The judgment layer becomes more valuable, not less.

"We saw a fundamental transformation. This changes the profession itself."


Steve Hasker presented at the AI Agent Conference 2026 in New York. He is President and CEO of Thomson Reuters.