There is a finding buried in the Thomson Reuters Future of Professionals Report 2025 that deserves more attention than it has received. When asked what standard of accuracy they expect from AI outputs, 91% of accounting and professional services professionals said computers should be held to higher standards than humans. Not the same standards. Higher. And 41% of that group said AI-generated work must be fully accurate before it can be used without human review. The instinct driving those numbers is not technophobia or generational resistance. It is the foundational professional reflex of a discipline built on the premise that errors have consequences -- regulatory, financial, legal, reputational. That reflex is exactly right. And, counterintuitively, it is one of the strongest arguments for why accounting firms should be moving faster on AI, not slower.
The Accuracy Problem Is Real, and It Is Also Misunderstood
Accuracy concerns are the single biggest barrier to increased AI investment in the accounting profession, according to the Thomson Reuters data. Fifty percent of professionals cited the need for demonstrable accuracy in AI-powered technologies as a primary obstacle. That is a legitimate concern. Large language models hallucinate. AI-assisted tax research can miss jurisdiction-specific nuance. Automated document drafting can produce outputs that are grammatically correct and substantively wrong. These are not hypothetical failure modes. They are documented, recurring, and consequential in a profession where the margin for error is narrow.
But the accuracy concern, taken alone, is an incomplete picture. The same professionals who cite accuracy as a barrier are already using AI and seeing returns. Fifty-three percent of firms are reporting measurable ROI from AI investment right now. Those two facts coexist because the firms capturing value are not using AI to replace professional judgment. They are using it as a starting point, a drafting assistant, a pattern-recognition tool -- and then applying human review before anything reaches a client or a regulator. That workflow, with the accuracy concern built in, is not a workaround. It is the correct architecture for AI use in a compliance-driven profession.
Human Review Is Not Friction. It Is the Product.
The accounting profession has always sold judgment, not information. Clients do not hire CPAs because they cannot find tax code on the internet. They hire CPAs because interpreting that code in the context of their specific situation, their risk tolerance, their long-term goals, and the current regulatory environment requires expertise that cannot be Googled. AI does not change that value proposition. If anything, it sharpens it.
When AI handles the information-retrieval and first-draft layer of accounting work, the professional's time shifts toward the interpretive layer -- the part that actually justifies the fee. The human review step is not an overhead added to an otherwise automated process. It is the step where the professional's expertise is most visibly and valuably applied. Firms that understand this are not reluctantly adding review steps to manage AI risk. They are deliberately positioning human judgment as the differentiator that makes their AI-assisted work worth more than their competitors' unassisted work.
This reframing has real implications for how accounting firms position their services and communicate their value to prospective clients. The firms winning on AI are not the ones saying they have automated the most. They are the ones saying their professionals spend more time on the work that matters because AI handles the work that doesn't.
The Overreliance Risk Is Worth Taking Seriously
The Thomson Reuters report identifies a concern on the opposite side of the accuracy debate, one that accountants are right to take seriously. Twenty-four percent of professionals worry that successful AI adoption could lead to overreliance on the technology at the expense of professional skill development. That is not a fringe concern. It is a structurally sound worry about what happens to a profession when its practitioners stop doing the foundational work that built their expertise in the first place.
Junior accountants who never reconcile accounts manually may not develop the intuition that catches the anomaly an automated system flags incorrectly. Tax professionals who always start with an AI-generated memo may lose the research fluency that lets them recognize when the memo's framing is subtly wrong. The risk is not that AI makes accounting professionals lazy. It is that it removes the repetitive work that, incidentally, builds expertise through volume.
The firms navigating this well are the ones treating AI as a layer on top of professional development, not a replacement for it. They are using AI to eliminate genuinely low-value work -- data entry, routine formatting, basic compliance cross-referencing -- while preserving the developmental work that builds judgment over time. Accounting education is wrestling with the same tension as it integrates technology into the curriculum, and the lesson there is the same: AI fluency and foundational accounting competency are not in conflict when training programs are thoughtfully designed.
What Responsible AI Use Looks Like at the Firm Level
The Thomson Reuters report's four-layer model for AI ROI -- strategy, leadership, operations, and individual -- maps cleanly onto the requirements of responsible AI use in an accounting context. At the strategy level, responsible use means sanctioned tools, documented data security protocols, and clear guidance on which workflows AI is and is not appropriate for. At the leadership level, it means partners and managers who model the right behavior: using AI visibly, reviewing outputs critically, and talking openly about where it works and where it does not.
Operationally, responsible use means workflow design that builds human review into the process as a standard step rather than an optional one. It means pricing models that reflect the value of professional judgment, not just the time spent on any given task. And at the individual level, it means professionals who understand AI well enough to know when to trust it and when to question it -- which requires genuine AI literacy, not just access to AI tools.
The Thomson Reuters data confirms that this kind of structured, accountable approach is what produces results. Professionals with good or expert AI knowledge are 2.8 times as likely to see organizational benefits from AI as those with basic or no knowledge. The investment in genuine AI literacy, across every level of a firm, is not a soft benefit. It is one of the strongest predictors of measurable ROI in the entire dataset.
For accounting firms building their reputation and visibility in a competitive market, the responsible use story is increasingly what sophisticated clients are listening to. Not "we use AI" but "here is how we use it, here is who reviews it, and here is why our work is better because of it."
The Trust Instinct Is an Asset
The 91% of accounting professionals who hold AI to a higher standard of accuracy than human work are not obstacles to the profession's AI future. They are its quality control. A profession that adopted AI uncritically, without the verification instinct built into its DNA, would be less reliable -- and ultimately less valuable. Skepticism is not at odds with progress. It is what makes progress sustainable.
The firms getting this right are the ones that have stopped treating the accuracy concern as a problem to overcome and started treating it as a design principle. Build AI into the workflow. Build human review into the workflow. Capture the efficiency gains from the first step and the quality assurance from the second. Deliver the result to the client as the work of a firm that uses every available tool and trusts none of them blindly.
That is not a compromise position on AI. It is the correct one.
The Profession That Verifies Is the Profession That Endures
The accounting profession's instinct to verify AI outputs before acting on them is not a reluctance to change. It is a professional standard that predates AI by decades and will outlast every tool currently on the market. Firms that build that standard into their AI workflows -- rather than around them -- are the ones producing work that clients trust, regulators respect, and competitors cannot easily replicate.
At Winsome Marketing, we help accounting firms communicate what makes their practice worth choosing, including the judgment, rigor, and strategic thinking that no tool can replace. If your firm has built something worth talking about, we can help you say so.


Writing Team