AI Reconstructs a Lost Roman Game After a Century of Mystery
4 min read
Writing Team
Feb 13, 2026 8:00:00 AM
For nearly a century, a piece of limestone sat in a Dutch museum, covered in deliberate grooves that clearly represented something—but no one could figure out what. Human archaeologists stared at it, theorized about it, cataloged it, and ultimately filed it away as an unsolvable curiosity from Roman times. Then AI ran 100,000 simulations in the time it would take you to finish your morning coffee, and suddenly we understand how Romans entertained themselves 2,000 years ago.
This is what AI is actually good for—and it's a story worth celebrating.
The limestone board, just 20 centimeters across, was discovered in Heerlen, Netherlands, a city built atop the ruins of the Roman town of Coriovallum. The grooves carved into its surface were clearly intentional and the wear patterns suggested repeated use, but there were no written records describing the game. No instruction manual buried alongside it. No helpful Roman graffiti saying "here's how you play."
Traditional archaeological methods hit a wall. You can carbon-date the limestone, analyze the tool marks that created the grooves, study similar artifacts from the period—but none of that tells you the rules. How many pieces did each player control? What constituted a legal move? How did you win?
For 100 years, this artifact existed in a peculiar limbo: clearly significant, completely incomprehensible.
Enter AI, specifically the Ludii game system developed by Leiden University archaeologist Walter Crist and his colleagues. Their approach, published in the February 2026 issue of Antiquity, was elegantly simple: if we can't figure out the rules by thinking about them, let's have virtual players try every possible combination until we find the ruleset that produces the exact wear patterns we see on the physical board.
Human researchers are constrained by time, imagination, and cognitive bias. You might have a theory about how the game worked based on similar games from later periods, and that theory shapes what you look for in the evidence. You test maybe a dozen rule combinations manually, realize it would take years to test thousands more, and eventually move on to projects that seem more tractable.
AI doesn't get bored. It doesn't have pet theories it wants to prove. It just runs simulations—systematically testing three pieces versus two, four versus two, two against two, different movement patterns, different win conditions. According to the research published in Antiquity, the Ludii system evaluated over 100 distinct rule configurations, each played thousands of times by virtual opponents, mapping which combinations would create wear patterns matching the physical artifact.
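To make that search concrete, here is a minimal sketch of the simulate-and-compare idea. Everything in it is an assumption for illustration: the 12-cell ring board, the random-move virtual players, the invented "observed" wear frequencies, and names like simulate_game and score_ruleset are all made up here; the real Ludii system and the Heerlen board are far richer than this toy.

```python
import random
from collections import Counter
from itertools import product

# Toy sketch of the simulate-and-compare idea (not the actual Ludii system):
# enumerate candidate rulesets, simulate many playouts of each, tally how often
# each cell gets used, and rank rulesets by how closely that usage matches the
# wear observed on the physical board. Board, players, and wear data are invented.

N_CELLS = 12
OBSERVED_WEAR = [0.13, 0.10, 0.08, 0.07, 0.06, 0.06,
                 0.06, 0.06, 0.07, 0.08, 0.10, 0.13]  # hypothetical normalized groove wear

def neighbors(cell):
    """Adjacent cells on the toy ring-shaped board."""
    return [(cell - 1) % N_CELLS, (cell + 1) % N_CELLS]

def simulate_game(pieces_a, pieces_b, max_turns, rng):
    """Play one random game and return a Counter of how often each cell was used.

    A player with no legal move is blocked and the game ends, mirroring the
    blocking mechanic described in the article.
    """
    start = rng.sample(range(N_CELLS), pieces_a + pieces_b)
    positions = {"A": set(start[:pieces_a]), "B": set(start[pieces_a:])}
    used = Counter(start)
    player = "A"
    for _ in range(max_turns):
        occupied = positions["A"] | positions["B"]
        moves = [(p, n) for p in positions[player]
                 for n in neighbors(p) if n not in occupied]
        if not moves:          # current player is blocked: game over
            break
        p, n = rng.choice(moves)
        positions[player].remove(p)
        positions[player].add(n)
        used[n] += 1
        player = "B" if player == "A" else "A"
    return used

def wear_distance(usage):
    """How far simulated cell usage is from the observed wear profile."""
    total = sum(usage.values()) or 1
    return sum(abs(usage[c] / total - OBSERVED_WEAR[c]) for c in range(N_CELLS))

def score_ruleset(pieces_a, pieces_b, max_turns, n_games=1000, seed=0):
    """Aggregate cell usage over many playouts of one candidate ruleset."""
    rng = random.Random(seed)
    usage = Counter()
    for _ in range(n_games):
        usage.update(simulate_game(pieces_a, pieces_b, max_turns, rng))
    return wear_distance(usage)

# Enumerate candidate rule configurations and rank them by fit to the wear data.
candidates = list(product(range(2, 5), range(2, 5), (20, 40)))
ranked = sorted(candidates, key=lambda cfg: score_ruleset(*cfg))
print("best-fitting candidate (pieces_a, pieces_b, max_turns):", ranked[0])
```

The point is the shape of the computation: the expensive step is playing thousands of games per candidate ruleset, which is exactly the part that is cheap for a machine and prohibitive for a human.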
The result: Ludus Coriovalli, or the "Coriovallum Game," a blocking game where one player positioned four pieces against an opponent's two, with victory going to whoever avoided being blocked the longest. The game is now playable online against a computer opponent, which means a game lost to history for nearly two millennia is now accessible to anyone curious enough to try it.
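As a toy illustration of that blocking mechanic, the snippet below shows the four-against-two setup and the losing condition of having no legal move. It is purely hypothetical: the adjacency graph is invented for the example and is not the Heerlen board's actual layout.

```python
from typing import Dict, List, Set

# Hypothetical illustration of the blocking mechanic described for Ludus
# Coriovalli: four pieces against two, and a player who cannot move is blocked
# (and, per the reconstruction, loses). The 8-cell graph below is made up.
ADJACENT: Dict[int, List[int]] = {
    0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4, 6],
    4: [1, 3, 5, 7], 5: [2, 4], 6: [3, 7], 7: [4, 6],
}

def is_blocked(my_pieces: Set[int], their_pieces: Set[int]) -> bool:
    """True if the player to move has no empty adjacent cell to step into."""
    occupied = my_pieces | their_pieces
    return all(all(n in occupied for n in ADJACENT[p]) for p in my_pieces)

# Four pieces versus two, as in the reconstructed game; here player B is hemmed in.
player_a = {0, 1, 3, 4}
player_b = {2, 5}
print("B blocked?", is_blocked(player_b, player_a))
```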
Archaeologist Véronique Dasen of the University of Fribourg, who leads the Locus Ludi project studying ancient Roman and Greek games, called the research "groundbreaking" and noted it could revolutionize how archaeologists investigate other lost games. She told Antiquity that the findings "invite [archaeologists] to reconsider the identification of Roman period graffiti that could be actual boards for a similar game not present in texts."
This is AI doing what it's genuinely best at: exploring massive solution spaces that would take humans lifetimes to navigate manually. Not replacing archaeological expertise—Crist and his team still needed deep knowledge of Roman culture, game theory, and material analysis to frame the problem correctly—but augmenting that expertise with computational brute force.
The discovery also rewrites assumptions about game history. Blocking games like this weren't thought to have existed in Europe until the Middle Ages, according to Crist. Now we know Romans played them at least 1,500 years earlier than previously believed. Blocking mechanics appear in games still played today, such as Go and dominoes, but Ludus Coriovalli doesn't resemble either, suggesting an independent evolutionary path for this game mechanic.
University of North Florida anthropologist Jacqueline Meier, commenting on the research in Antiquity, noted that "if more were known about the board's context and potential game pieces, more interpretations could be made of how it functioned in past social life." That's the exciting part—this methodology opens doors to understanding not just what ancient people played, but why, and what those games reveal about social hierarchies, leisure time, and cultural exchange.
As Dasen observed, "Games can go on for centuries, and sometimes they appear and then disappear." AI just helped us find one that disappeared for two millennia.
There's a version of this story where archaeologists rush to apply AI to every unsolved mystery, generate nonsense results, and claim breakthroughs that don't withstand scrutiny. That's not what happened here. Crist and his colleagues used AI specifically because the problem—reverse-engineering game rules from wear patterns—required testing more combinations than human researchers could feasibly evaluate.
The AI didn't "understand" Roman culture. It didn't have insights about game design or social context. It just did math very quickly, testing which rule configurations would produce the observed physical evidence. The human researchers still had to interpret those results, validate them against broader archaeological knowledge, and determine which findings were actually significant.
This is the unglamorous reality of useful AI: it's a power tool, not a replacement for expertise. You still need to know what you're building and how to evaluate whether the tool is giving you good results. But when you have the right problem—one that requires exhaustive search through a defined solution space—AI can accomplish in hours what would take human researchers decades.
For marketers and business leaders, there's a lesson here about AI deployment: the wins come from identifying specific, well-defined problems where computational exploration genuinely helps, not from sprinkling AI on everything and hoping for breakthroughs. Crist didn't use AI to "disrupt archaeology" or "transform how we study the past." He used it to run roughly 100,000 simulated games because that's the specific thing humans couldn't do efficiently.
The immediate implication is that museums worldwide likely have similar artifacts—carved stones, clay tablets, ancient graffiti—that could be decoded using this same technique. Dasen specifically mentioned reconsidering Roman graffiti that might represent game boards for variants not documented in texts. How many "decorative patterns" in archaeological collections are actually games whose rules we can now recover?
But the broader implication is about what happens when you give researchers tools that change what's possible. A century-old mystery got solved not because archaeologists suddenly got smarter, but because they finally had technology that could test hypotheses at the speed required. That changes which questions are worth asking.
AI-assisted archaeological discoveries increased between 2023 and 2025, with applications ranging from satellite-based site detection to fragmentary text reconstruction. We're entering a period where problems previously filed as "interesting but unsolvable" are becoming tractable.
Not every problem is a nail waiting for the AI hammer. But for the ones that are—the ones requiring exhaustive search, pattern recognition at scale, or simulation of thousands of scenarios—we now have tools our predecessors could only dream about. The Romans who played Ludus Coriovalli in Coriovallum never imagined their game would be reconstructed by virtual players running through 100,000 possibilities two millennia later. That's the kind of continuity between past and future that makes AI feel less like disruption and more like discovery.
Finding the right AI applications for your specific challenges requires understanding both the technology's genuine capabilities and your actual needs. Winsome Marketing's growth experts help organizations identify where AI creates measurable value versus where traditional approaches work better. Explore our marketing technology consulting to separate AI opportunities from expensive distractions.