Writing Team · Dec 12, 2025 · 4 min read
New state AI employment laws take effect January 1st in Illinois and Texas, following similar legislation already active in New York City and California, with Colorado's law arriving in June 2026. Meanwhile, federal policy under President Trump pushes AI deregulation to maintain global competitiveness. The result: employers caught between contradictory regulatory frameworks with no clear path forward.
"There is, to some degree, some tension between the messaging from a federal perspective and what we're seeing on a state-by-state basis," Jenn Betts of Ogletree Deakins told HR Dive.
That's diplomatic phrasing for "federal and state governments want opposite things and employers will bear the consequences."
State requirements vary wildly in scope and specificity. New York City's AI hiring law requires bias audits and disclosure. California's Fair Employment and Housing Act amendment covers hiring, promotions, and training. Illinois mandates transparency about AI use in employment decisions. Colorado requires risk assessments and opt-out provisions. Texas's Responsible Artificial Intelligence Governance Act largely exempts AI from regulations in employment contexts.
Wait—Texas exempts AI while other states regulate it? Yes. TRAIGA only requires that AI not be "intended to cause physical harm or abet criminal activity." It explicitly states that "disparate impact is not sufficient by itself to demonstrate an intent to discriminate"—directly contradicting 50+ years of federal and state anti-discrimination law holding that adverse impact creates liability even without discriminatory intent.
Niloy Ray of Littler called this "a concerning shift" and "a significant departure" from established employment law. Translation: Texas just legalized algorithmic discrimination as long as you don't admit that was your intention.
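If "disparate impact" sounds abstract, here is roughly how it gets measured in practice: a minimal sketch of the EEOC's four-fifths rule, the conventional screening test for adverse impact. The selection numbers are made up for illustration.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were advanced."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate."""
    return group_rate / reference_rate

# Hypothetical numbers: an AI screener advances 200 of 1,000 applicants
# from group A and 120 of 1,000 applicants from group B.
rate_a = selection_rate(200, 1000)   # 0.20
rate_b = selection_rate(120, 1000)   # 0.12

ratio = adverse_impact_ratio(rate_b, rate_a)  # 0.60

# Under the EEOC's four-fifths rule, a ratio below 0.80 is treated as
# evidence of adverse impact -- regardless of anyone's intent.
print(f"Adverse impact ratio: {ratio:.2f} (flag if < 0.80)")
```

The math flags the outcome, not the motive. Texas's statute says the outcome alone isn't enough.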
Ray's advice for employers navigating this chaos: "comply with the HCF or highest common factor when setting up AI disclosure, risk-assessment, opt-out, appeal and record-retention processes."
This sounds reasonable until you consider what it means operationally. A company with employees in California, Colorado, Illinois, New York, and Texas must implement processes that satisfy the most stringent requirements from all jurisdictions—even for employees in states with minimal regulation. You can't run separate AI hiring systems for different states. You build to the highest standard or face liability somewhere.
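For illustration only, here's a hypothetical sketch of what "comply with the highest common factor" amounts to operationally: take the union of every jurisdiction's requirements and apply it everywhere. The requirement flags come from the processes Ray lists (disclosure, risk assessment, opt-out, appeal, record retention) plus NYC's bias audit, but the per-state assignment below is a simplified placeholder, not legal analysis.

```python
# Simplified, illustrative per-state requirement flags -- not a legal reference.
STATE_REQUIREMENTS = {
    "NYC": {"bias_audit", "disclosure"},
    "CA":  {"disclosure", "record_retention"},
    "IL":  {"disclosure"},
    "CO":  {"risk_assessment", "opt_out", "bias_audit"},
    "TX":  set(),  # TRAIGA largely exempts employment uses
}

def combined_requirements(states):
    """Union of every jurisdiction's requirements: the strictest combined bar."""
    combined = set()
    for state in states:
        combined |= STATE_REQUIREMENTS.get(state, set())
    return combined

# An employer with workers in all five jurisdictions ends up implementing
# everything, everywhere -- including for employees in states requiring nothing.
print(sorted(combined_requirements(["NYC", "CA", "IL", "CO", "TX"])))
# ['bias_audit', 'disclosure', 'opt_out', 'record_retention', 'risk_assessment']
```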
But Texas's approach contradicts other states fundamentally. How do you simultaneously comply with Colorado's requirement for bias mitigation and Texas's position that disparate impact doesn't demonstrate discrimination? You can't satisfy both frameworks coherently. They rest on incompatible legal theories about what algorithmic fairness requires.
The result is compliance theater: implementing processes that technically satisfy regulatory language while knowing the underlying tensions remain unresolved. HR departments document risk assessments, provide disclosures, and maintain records—not because these activities prevent discrimination, but because they create legal cover when discrimination occurs.
"A federal framework that preempts the state-level patchwork would be ideal, but appears unlikely," Ray noted.
That's because federal AI policy under Trump aims toward deregulation, not standardization. Recent White House efforts focus on making the U.S. a "global AI leader" through reduced restrictions—the opposite of comprehensive employment protections. Congressional action remains minimal. A bipartisan bill requiring employers to report AI-related layoffs represents the extent of federal legislative momentum.
Meanwhile, Republicans are actively pursuing federal preemption to block state AI regulations entirely—not to replace them with better federal standards, but to prevent regulation altogether. If successful, this would eliminate California, Colorado, and Illinois protections while leaving employers with no framework at all.
The regulatory uncertainty isn't temporary turbulence before clear rules emerge. It's the new permanent state. Employers must navigate contradictory state requirements while federal policy signals that regulations are obstacles to innovation rather than protections for workers.
Betts explained that many employers are "setting up internal governance programs and strategies that make sense for their organization." Variables to consider include company size, industry, how employees use AI, operational locations, and risk tolerance.
But this framing makes regulatory compliance sound like strategic choice rather than legal obligation. You don't get to decide your risk tolerance for employment discrimination based on whether it "makes sense for your organization." The law sets standards. Compliance isn't optional.
What "internal governance that makes sense" often means in practice: implementing enough process to demonstrate good-faith effort while maintaining flexibility to use AI in ways that might not withstand legal scrutiny. Document everything. Train everyone. Conduct audits. Then deploy AI tools that automate decisions in ways you can't fully explain or control, hoping documentation provides sufficient liability protection.
The ongoing collective action lawsuit against Workday demonstrates where this leads. Plaintiffs allege Workday's AI hiring tools discriminate based on age and other protected characteristics. Workday presumably implemented internal governance, conducted risk assessments, and documented processes. Yet the lawsuit proceeds because governance procedures don't prevent algorithmic discrimination—they just create paper trails.
As AI adoption in HR accelerates—recruiting, compensation, performance management—the Workday case previews litigation to come. Every employer using AI for employment decisions faces similar exposure. The question isn't whether your governance framework prevents discrimination. It's whether your documentation protects you when discrimination allegations arise.
Ray's advice for employers is "resolute pragmatism: limit AI deployment to high ROI uses, budget for compliance, and discern when to be the early bird and when the second mouse."
This sounds measured until you parse it. "Limit deployment to high ROI uses" means use AI where cost savings justify compliance risk—typically high-volume, low-stakes decisions like resume screening. But these are exactly the contexts where algorithmic bias affects the most people. "High ROI" for employers often means "maximum discrimination impact" for applicants.
"Discern when to be the early bird and when the second mouse" acknowledges that early AI adoption carries liability risk as legal frameworks crystallize. But this advice only works if you can wait. Companies facing competitive pressure to adopt AI recruiting tools can't afford to be "second mouse." They deploy now and hope legal exposure remains manageable.
Federal policy says AI regulation hinders innovation and global competitiveness. State laws say AI employment decisions require transparency, bias audits, and worker protections. Texas says disparate impact doesn't demonstrate discrimination. Colorado says it does. Illinois requires disclosure. Federal Republicans want to preempt disclosure requirements.
These positions aren't reconcilable through better "internal governance" or more thoughtful implementation. They reflect fundamental disagreement about whether algorithmic employment decisions require regulatory oversight or should proceed with minimal constraint.
For HR departments navigating this chaos, the real lesson is that compliance has become a moving target with no stable endpoint—and the companies bearing the burden of reconciling contradictory frameworks are the same ones deploying AI tools that create the problems regulations attempt to address. At Winsome Marketing, we help organizations understand that regulatory fragmentation isn't temporary confusion before clarity emerges—it's the permanent condition when technology advances faster than democratic institutions can govern it coherently. Sometimes the most pragmatic choice is recognizing that "compliant AI deployment" might be a contradiction in terms.