Thomson Reuters just announced the future of legal practice, complete with AI agents that "plan, reason, act, and even react" to complete complex legal workflows. Their new CoCounsel for tax, audit, and accounting professionals can automate everything from client file reviews to memo drafting, while upcoming legal applications promise intelligent contract drafting and compliance risk assessments. It's an impressive technological achievement, and it raises an equally pressing question: where exactly does legal interpretation end and algorithmic automation begin?
Don't get us wrong—this represents genuine innovation. Thomson Reuters has spent over a year developing agentic AI systems that go far beyond simple chatbot responses, backed by 20+ billion documents, 15+ petabytes of data, and custom training from 4,500 subject matter experts. As David Wong, Thomson Reuters' Chief Product Officer, explains, "We're delivering systems that don't just assist but operate inside the workflows professionals use every day." The early results are compelling: BLISS 1041 reduced their multi-jurisdictional residency review process from half a week to under an hour per jurisdiction.
But here's what gives us pause: legal practice isn't just workflow optimization.
The most challenging aspect of legal work isn't processing information—it's interpreting ambiguous language in novel contexts. Recent research shows that state-of-the-art LLMs exhibit considerable output instability when answering legal questions, with models yielding divergent decisions even under controlled settings. Even OpenAI's latest reasoning model, o1, achieved only 77.6% accuracy on LegalBench, a legal reasoning benchmark, leaving significant room for improvement in complex rule interpretation.
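To make "output instability" concrete, here is a minimal sketch of how one might measure it: ask a model the same legal question many times and tally how often its answers agree. The `ask_model` stub is hypothetical, standing in for a real LLM API call; the instability the cited research describes shows up as a low agreement rate in exactly this kind of tally.

```python
import random
from collections import Counter

def ask_model(question: str) -> str:
    # Hypothetical stand-in for a real LLM call; stubbed here to simulate
    # the divergent answers the cited research observed under controlled settings.
    return random.choice(["taxable", "taxable", "not taxable", "depends on domicile"])

def agreement_rate(question: str, n: int = 20) -> float:
    # Ask the same question n times; return the share of runs that produced
    # the most common answer. 1.0 would mean perfectly stable output.
    answers = Counter(ask_model(question) for _ in range(n))
    return answers.most_common(1)[0][1] / n

print(agreement_rate("Is a signing bonus paid before relocation taxable in State X?"))
```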
Legal documents and statutes often contain ambiguities, contradictions, and nuances that require human judgment to navigate effectively. As one recent analysis noted, "Legal reasoning strives to reduce ambiguity and uncertainty within the legal system, but often operates in the face of inherent ambiguity. Legal rules can be vague, and facts can be complex." Thomson Reuters' agentic systems promise to handle this complexity, but legal reasoning fundamentally differs from algorithmic processing in ways that matter.
When a Virginia appeals court recently reaffirmed that "policy language is only ambiguous where there are competing interpretations that are equally possible, given the text and context of the provision," it highlighted something crucial: legal interpretation requires contextual judgment that goes beyond pattern recognition. How does an AI agent distinguish between genuine ambiguity and apparent ambiguity that resolves through proper contextual analysis?
Thomson Reuters emphasizes that "human expertise remains in the loop to guide judgment, validate outputs, and make final decisions." This sounds reassuring until you consider the practical dynamics of AI-assisted legal work. Research on agentic AI reveals several concerning risks: potential loss of control as systems act unpredictably, the possibility of misalignment with human values, and over-reliance on autonomous systems that could result in operational disruptions.
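In engineering terms, "human in the loop" means a gating policy: which agent actions proceed automatically and which pause for review. The sketch below, with entirely hypothetical names and rules, shows the shape of such a gate. The over-reliance risk lives in how the policy is tuned, since every action waved through is one a lawyer never sees.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str
    description: str
    reversible: bool

def requires_approval(action: ProposedAction) -> bool:
    # Route irreversible or externally visible steps to a human reviewer.
    # Everything else proceeds unreviewed, which is where over-reliance creeps in.
    return (not action.reversible) or action.tool in {"send_filing", "sign_document"}

def run_step(action: ProposedAction, approved: bool) -> str:
    if requires_approval(action) and not approved:
        return f"HELD for review: {action.description}"
    return f"executed: {action.description}"

print(run_step(ProposedAction("summarize_file", "summarize client file", True), approved=False))
print(run_step(ProposedAction("send_filing", "file extension request", False), approved=False))
```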
Integrating agentic AI into contract review, modification, and negotiation carries a concrete danger: legally binding documents produced without human supervision and nuanced judgment expose businesses and consumers to greater risk. When AI agents can "plan, execute, and adapt across tools in real time," how much human oversight is realistically possible? Will lawyers become supervisors of AI work they didn't directly create, responsible for outputs they can't fully trace?
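Traceability, at least, is partly an engineering question. A minimal, hypothetical sketch: an append-only log in which each agent step records its tool, inputs, and output, hash-chained so a supervising lawyer can later reconstruct, and trust, how a draft was produced. Nothing here reflects Thomson Reuters' actual internals.

```python
import hashlib
import json
import time

def log_step(trace: list, tool: str, inputs: dict, output: str) -> None:
    # Each entry links to the previous one by hash, so the record of how an
    # output was produced cannot be silently rewritten after the fact.
    prev = hashlib.sha256(json.dumps(trace[-1], sort_keys=True).encode()).hexdigest() if trace else None
    trace.append({"ts": time.time(), "tool": tool, "inputs": inputs,
                  "output": output, "prev_hash": prev})

trace: list = []
log_step(trace, "search_caselaw", {"query": "ambiguity doctrine Virginia"}, "12 results")
log_step(trace, "draft_memo", {"sources": "12 results"}, "memo v1")
print(json.dumps(trace, indent=2))
```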
Early customer testimonials focus on efficiency gains—converting week-long processes into hour-long ones. But efficiency in legal work isn't just about speed; it's about thoroughness, context-awareness, and the ability to spot issues that don't fit established patterns. Can agentic AI distinguish between a contract clause that's genuinely routine and one that appears routine but creates novel risk in a specific context?
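The failure mode is easy to demonstrate. Here is a toy, assumption-laden sketch of clause triage by surface similarity against a library of known-routine language; note how a one-word change that transforms the legal risk barely moves the similarity score.

```python
from difflib import SequenceMatcher

ROUTINE_CLAUSES = [
    "Either party may terminate this agreement upon thirty days' written notice.",
    "This agreement shall be governed by the laws of the State of New York.",
]

def looks_routine(clause: str, threshold: float = 0.85) -> bool:
    # Flags a clause as routine if its wording closely matches known-routine text.
    # Surface similarity is exactly the pattern matching the paragraph questions.
    return any(
        SequenceMatcher(None, clause.lower(), ref.lower()).ratio() >= threshold
        for ref in ROUTINE_CLAUSES
    )

# One word changed ("written" -> "oral"): near-identical text, very different risk.
risky = "Either party may terminate this agreement upon thirty days' oral notice."
print(looks_routine(risky))  # True: the novelty is semantic, not textual
```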
Legal reasoning often involves what experts call "fuzzy reasoning"—handling uncertainty and imprecision in scenarios where data can be ambiguous or incomplete. Unlike deductive reasoning, which follows clear if-then rules, legal interpretation frequently requires weighing competing principles, considering policy implications, and making judgment calls about legislative intent or contractual purpose.
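The contrast is easy to see side by side. Below is an illustrative sketch (the factors and weights are invented, not drawn from any statute): a bright-line residency rule that a machine applies mechanically, next to a multi-factor weighing in which choosing the weights is itself the legal judgment.

```python
def bright_line_residency(days_in_state: int) -> bool:
    # Deductive reasoning: a clean if-then rule the facts either satisfy or don't.
    return days_in_state > 183

def weighed_residency(factors: dict[str, float]) -> float:
    # Fuzzy reasoning: competing indicia are scored and weighed. The weights
    # encode a judgment call no statute supplies; that is the hard-to-automate step.
    weights = {"domicile": 0.4, "family_location": 0.3, "business_ties": 0.3}
    return sum(w * factors.get(k, 0.0) for k, w in weights.items())

print(bright_line_residency(200))  # True, mechanically
print(weighed_residency({"domicile": 0.9, "family_location": 0.2, "business_ties": 0.5}))  # 0.57
```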
Thomson Reuters' agentic systems are "refined by legal and tax, audit, and accounting experts to reason in alignment with professional standards and best practices." But professional standards themselves evolve through interpretation and application. How does an AI system handle cases where professional standards conflict, or where emerging situations don't clearly fit existing frameworks? Recent studies show that AI systems often struggle to interpret nuanced, ambiguous, or context-specific scenarios.
The computational demands are also non-trivial. Complex reasoning tasks require significant processing power and time, potentially creating scalability concerns as these systems handle increasing workloads across Thomson Reuters' 500,000+ customers.
We're not arguing against AI in legal practice—Thomson Reuters' approach is thoughtful, expert-informed, and addresses real workflow inefficiencies. The integration across Westlaw, Practical Law, and CoCounsel creates a comprehensive ecosystem that could genuinely improve legal service delivery. The emphasis on transparency, explainable outputs, and domain expertise represents responsible AI development.
But the legal profession's core value lies in navigating uncertainty, not just processing information. When agentic AI systems begin making complex, multi-step decisions in "high-stakes environments where accuracy and trust are non-negotiable," the stakes extend beyond efficiency gains to questions of professional responsibility and client protection.
As these systems prepare to launch across legal, risk, and compliance domains this summer, the legal profession faces a critical question: can AI agents truly "reason" through legal problems, or are they sophisticated pattern-matching systems that work until they encounter something genuinely novel?
The answer will determine whether Thomson Reuters is launching the future of legal practice or creating the most elegant automation trap the profession has ever seen. Given the interpretive nature of legal work and the current limitations of AI reasoning, we suspect the truth lies somewhere in between—which is exactly where lawyers have always done their best work.